Standard deviation - how many samples do I need to measure to get a "good" one?

I was looking at Minitab 16's Power and Sample Size calculation tool for a one-sample t-test. If you're not familiar with it, you enter any two of the three factors (sample size, difference, and power) and it solves for the third. You also have to supply a standard deviation.
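To show what I mean, here is a rough sketch of the same kind of calculation done outside Minitab, assuming the statsmodels library. I don't know Minitab's exact internals, and the sd and difference values below are just placeholders; the point is that the standard deviation has to be known before anything else can be solved for.

```python
# Hypothetical one-sample t-test power calculation, assuming statsmodels.
# Two of {sample size, difference, power} are given; the third is solved for.
from statsmodels.stats.power import TTestPower

sd = 2.0          # the standard deviation you must supply up front (placeholder)
difference = 5.0  # smallest deviation from the 300 g target we care to detect
effect_size = difference / sd

# Solve for the sample size needed to reach 90% power at alpha = 0.05.
n = TTestPower().solve_power(effect_size=effect_size, power=0.9,
                             alpha=0.05, alternative='two-sided')
print(f"required sample size: {n:.1f}")
```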

This got me thinking: how do you know how many samples you need to measure to get a reliable standard deviation in the first place? For example, say I'm manufacturing cereal and want to put 300 g +/- 5 g in every box. To determine whether I'm doing that, I would use the one-sample t-test power and sample size calculation. If I didn't have a database of historical weights with a large number of samples from which to calculate the standard deviation, how many samples would I need to weigh to get a statistically sound and reliable standard deviation?
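To make the question concrete, here is a minimal sketch (assuming scipy, and a made-up sample standard deviation of 2 g) of the standard chi-square confidence interval for sigma, which shows how wide the interval is at small n and how slowly it tightens. This is how I've been framing "reliable" so far; I'd welcome a better framing.

```python
# Chi-square confidence interval for sigma at various sample sizes,
# assuming normally distributed fill weights (an assumption on my part).
from scipy import stats

s = 2.0        # hypothetical sample standard deviation, in grams
alpha = 0.05   # for a 95% confidence interval

for n in (5, 10, 30, 100):
    df = n - 1
    lo = s * (df / stats.chi2.ppf(1 - alpha / 2, df)) ** 0.5
    hi = s * (df / stats.chi2.ppf(alpha / 2, df)) ** 0.5
    print(f"n={n:>3}: 95% CI for sigma = ({lo:.2f}, {hi:.2f})")
```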

The same question could be asked in any number of situations where a standard deviation is needed for statistical analysis, such as a Gage R&R study, which requires a historical standard deviation.

Also, how do you go about estimating or conjecturing a standard deviation for such an analysis when you have little or no information about what it might or should be?

Thank you in advance for any help.