1) There isn't a single objective way. There are priors intended to be more or less objective, often called "flat" priors, and in many cases these do work as objective choices. But even a flat prior is not invariant under reparameterization: transforming the parameter can make the induced prior informative again. Whether that is bad is another issue completely, and it depends on how much the informative prior influences the posterior.
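To make the transformation point concrete, here is a minimal sketch (my own example, not from the original post): a Uniform(0, 1) prior on a probability p is flat, but the prior it induces on the log-odds scale is not flat at all.

```python
import numpy as np

# Flat prior on a probability p, then transform to log-odds (logit).
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=100_000)   # flat prior on p
log_odds = np.log(p / (1.0 - p))          # induced prior on logit(p)

# The induced density piles up near 0 (i.e. near p = 0.5) instead of
# being flat: a unit-width interval in the center holds far more mass
# than a unit-width interval out in the tail.
central = np.mean(np.abs(log_odds) < 0.5)          # mass in (-0.5, 0.5)
tail = np.mean((log_odds > 3.0) & (log_odds < 3.5))  # mass in (3.0, 3.5)
print(central, tail)  # central mass is many times the tail mass
```

So a prior that is "uninformative" about p is quite informative about logit(p), which is exactly the transformation problem above.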

2) The results can be sensitive to the prior; in general, however, as n increases the influence of the prior mean diminishes, so this is mostly a problem with sparse data. The exception is where the prior probability is zero: no amount of data will change that, and the prior's influence is absolute. This is why Dason loves point mass priors so much. One way to assess the prior's influence is to look at prior-posterior plots (see my graphic here); if the mass of the posterior sits at one edge of the prior, the prior was badly chosen. Another indication of strong prior influence is the distance between the posterior mean, the prior mean, and the MLE: if the posterior mean is closer to the prior mean than to the MLE, that suggests the prior is dominating.
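The "prior influence fades as n grows" point can be sketched with the conjugate Beta-Binomial model (my own illustrative numbers, not from the post): with prior Beta(a, b) and k successes in n trials, the posterior is Beta(a + k, b + n - k), so the posterior mean (a + k) / (a + b + n) is a compromise between the prior mean and the MLE k/n.

```python
# Strong prior with mean 0.8, but data generated at a rate of 0.3.
a, b = 8.0, 2.0            # prior Beta(8, 2), prior mean = 0.8
true_rate = 0.3            # assumed "true" success rate for the demo

def posterior_mean(k, n):
    """Posterior mean of the Beta-Binomial conjugate update."""
    return (a + k) / (a + b + n)

prior_mean = a / (a + b)
for n in (5, 50, 5000):
    k = round(true_rate * n)   # idealized data with MLE near 0.3
    mle = k / n
    post = posterior_mean(k, n)
    print(n, round(post, 3), round(mle, 3))
# With n = 5 the posterior mean sits near the prior mean (0.8);
# with n = 5000 it is pulled almost all the way to the MLE (~0.3).
```

Comparing the printed posterior means against the prior mean and the MLE is exactly the diagnostic described above: when n is small the posterior mean hugs the prior mean, which is the warning sign of a dominating prior.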

3) Well, this goes to the heart of Bayesian statistics: the posterior basically tells us how much of the author's original beliefs survive confrontation with the data. As long as the prior is fairly chosen, this can be very informative.

Prior choice can be very subjective, but Bayesian statistics are not really more prone to misuse than any other kind of statistics.

"Lies, Damned Lies and Statistics"