What is the difference between sampling algorithms and algorithms for estimating parameters?

I read this on a website:

  • MCMC is a family of sampling algorithms, which means that given a distribution, these algorithms return samples drawn from that distribution. Many problems, Bayesian posterior inference for instance, require you to compute the posterior distribution P(θ|D), which most of the time has no closed-form solution. So instead of getting the actual form of P(θ|D), you sample from it, and once you have collected the samples, you can use them to estimate θ. So MCMC is a sampling algorithm, not an algorithm for estimating parameters.
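For concreteness, here is a minimal sketch of the workflow that passage describes, using random-walk Metropolis-Hastings as one concrete MCMC instance (the normal model, step size, and all names here are illustrative assumptions, not something from the quote):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup: data assumed to come from N(theta, 1) with a flat
    # prior on theta, so the unnormalized posterior is just the likelihood.
    data = rng.normal(loc=2.0, scale=1.0, size=50)

    def log_posterior(theta):
        # log P(theta | D) up to an additive constant
        return -0.5 * np.sum((data - theta) ** 2)

    def metropolis(n_samples, step=0.5, theta0=0.0):
        # Returns draws from P(theta | D), not a point estimate of theta
        samples = np.empty(n_samples)
        theta, logp = theta0, log_posterior(theta0)
        for i in range(n_samples):
            prop = theta + rng.normal(scale=step)         # propose a move
            logp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
                theta, logp = prop, logp_prop
            samples[i] = theta
        return samples

    samples = metropolis(5000)[1000:]   # discard burn-in
    theta_hat = samples.mean()          # the estimation step is separate
    print(theta_hat)                    # close to the true value 2.0

The sampler itself only ever returns draws; turning those draws into an estimate of θ (here, a posterior mean) is the extra step the quote is distinguishing.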

What is the difference between a sampling algorithm and an algorithm for estimating parameters?


Can't make spagetti
Well, you can still use MCMC to estimate parameters by doing something with the posterior distribution after you've run all your chains. In the (I'll admit, limited) work I did on this for my MA thesis, I used a type of MCMC (a Gibbs sampler) to get a posterior distribution, and then I calculated the MAP (maximum a posteriori) estimate on that distribution to get a parameter estimate. I guess that's as opposed to something like maximum likelihood, where the parameter estimate is the solution you get directly from optimizing the likelihood function. So MCMC by itself won't give you a parameter estimate, but you can take one more step to get one.
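As a sketch of that two-step pattern: a toy Gibbs sampler for a bivariate normal target, followed by a MAP estimate read off a kernel density estimate of the samples. The target, rho, and grid here are my own illustrative choices, not the thesis model:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Toy target: bivariate normal with correlation rho. Gibbs sampling
    # needs the full conditionals, which here are known normals.
    rho = 0.8
    sd = np.sqrt(1 - rho ** 2)

    def gibbs(n_samples, burn_in=1000):
        x = y = 0.0
        draws = np.empty((n_samples, 2))
        for i in range(n_samples):
            x = rng.normal(rho * y, sd)  # draw x given y
            y = rng.normal(rho * x, sd)  # draw y given x
            draws[i] = x, y
        return draws[burn_in:]

    draws = gibbs(10_000)

    # Step two: get a point estimate from the sampled posterior. The mode
    # (MAP) of the marginal of x is approximated from a density estimate.
    kde = gaussian_kde(draws[:, 0])
    grid = np.linspace(draws[:, 0].min(), draws[:, 0].max(), 500)
    x_map = grid[np.argmax(kde(grid))]
    print(x_map)  # near 0 for this symmetric target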


Less is more. Stay pure. Stay poor.
Yes, it can be interpreted as an algorithm that gets (simulates) samples, which are then used by another algorithm that calculates estimates based on those samples; the result is a posterior distribution made up of estimates. So say you simulate 1,000 samples; you then run the estimation algorithm on each sample, and you end up with 1,000 estimates.
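A minimal sketch of that, with every concrete choice (the gamma posterior, the derived quantity) purely hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these 1,000 draws of a Poisson rate lam came out of an MCMC run
    lam_draws = rng.gamma(shape=30, scale=0.1, size=1000)

    # Run the estimation step once per sample: here the derived quantity is
    # the probability of observing zero events under Poisson(lam).
    estimates = np.exp(-lam_draws)  # 1,000 samples in, 1,000 estimates out

    # The estimates themselves now form a distribution you can summarize
    print(estimates.mean(), np.percentile(estimates, [2.5, 97.5]))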