# Quick question about sample size

#### Jmz555

##### New Member
Gentlemen,
I have recently started working in a call center environment. All our incentives, bonuses, and quality scores are based on 30 calls out of about 1000. My question is: would only thirty calls out of 1000 be an accurate representation of the 1000? The thirty calls that are scored are pass/fail, and there wouldn't be a margin of error. Scored calls are also random. Any help would be great!!!

Jeremy

#### vinux

##### Dark Knight
Hi Jeremy,

It actually depends on your limit of margin of error (together with the confidence level, in statistical terms). The larger the sample size, the greater the precision. Usually we take a 95% or 99% confidence level (the standard may change depending on the situation). There are a couple of sites regarding sample size:

http://www.surveysystem.com/sscalc.htm
&
http://edis.ifas.ufl.edu/PD006

Regards VinuX aka Richie
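What calculators like those compute can be sketched in a few lines of Python. This is a sketch only, assuming the standard normal-approximation formula for estimating a proportion plus a finite population correction; the function name and defaults are illustrative, not from either site:

```python
import math

def sample_size(N, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a pass/fail rate within `margin`
    at the confidence level implied by `z` (1.96 ~ 95%), drawn from a
    finite population of N calls. p=0.5 is the worst case (most variable)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)                # finite population correction
    return math.ceil(n)

print(sample_size(1000))        # ±5% at 95% confidence: 278 calls
print(sample_size(1000, 0.10))  # ±10% at 95% confidence: 88 calls
```

So the answer really does hinge on the margin of error you are willing to accept: loosening it from ±5% to ±10% cuts the required sample from 278 to 88.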


#### Jmz555

##### New Member
I was more or less hoping someone could come up with a figure. Confidence interval and confidence level mean nothing to me. I've tried to figure this out and am easily frustrated with this stuff. I spoke with my father, who tells me that scoring 30 out of 1000 would be an adequate representation of the whole. How can this be true? If I failed 3 out of the 30 "scored" calls, then I have a 10% fail rate? What if those are the only 3 that I failed out of the 1000 calls I actually took? I guess my question is: how many calls should be "scored" out of the 1000 calls I take to represent the whole? 30 just seems a little low to me.
Thanks again
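A concrete figure for exactly this scenario can be sketched with the usual normal approximation and a finite population correction (a rough sketch; with only 3 fails observed, the normal approximation is itself shaky, so treat the number as indicative):

```python
import math

def margin_of_error(fails, n, N, z=1.96):
    """Approximate 95% margin of error around an observed fail rate,
    when n of N calls were scored by simple random sampling."""
    p = fails / n
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return z * se

e = margin_of_error(3, 30, 1000)
print(f"observed 10% fail rate, margin of error ±{e:.1%}")  # ≈ ±10.6 points
```

In other words, 3 fails out of 30 scored calls pins the true fail rate down only to somewhere between roughly 0% and 21%, which is exactly the worry raised above.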

#### godlinx21

##### New Member
You should survey at least 10% of the population. So 100 people minimum.

#### vinux

##### Dark Knight
Hi Jeremy,

You should try to understand these terms, because the required sample size depends on the variability in the data.

For example, if there is no variation, then 1 call out of 1000 will be enough for an accurate representation (all the values will be the same). Likewise, if there is high variability, then even a sample of 300 may not be enough (for a fixed confidence interval).

I think this makes sense to you.

Regards
VinuX aka Richie
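The variability point above can be illustrated numerically: the standard error of an estimated pass/fail rate is zero when every call is identical and largest when the rate is 50%. A minimal sketch (function name illustrative):

```python
import math

def se_of_proportion(p, n=30):
    """Standard error of a fail rate estimated from n scored calls,
    when the true fail rate is p."""
    return math.sqrt(p * (1 - p) / n)

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"true fail rate {p:4.0%}: standard error {se_of_proportion(p):.3f}")
```

The same 30 scored calls are therefore much more informative about a near-zero fail rate than about one near 50%.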


#### dmmarathe

##### New Member
sample size

Hi Jeremy, assume you are handling 1000 calls each day. For the latest day for which data is available, find the number of faulty calls; call the resulting proportion p. The desired sample size will then depend on the confidence level, 95% or 99%. If you can specify both, the sample size can be calculated.

#### Jmz555

##### New Member
Maybe I'm not making myself clear enough, so let's do it this way: if you have a 1000-question true/false exam, how big a sample of the 1000 questions would you need to accurately represent the whole exam? 30? 100? 500? Help me out, fellas.

#### TheEcologist

##### Global Moderator
> Maybe I'm not making myself clear enough. So lets do it this way: If you have a 1000 question true/false exam. How big of a sample size, of the 1000 questions would you need to accurately represent the whole 1000 question exam? 30? 100? 500? Help me out fellas

Now that just made it a whole lot worse.

If your fail rate is constant and a truly random sample of 30 out of 1000 is taken, 30 should give quite an acceptable estimate of the fail rate. Now how big a sample size your boss needs depends on what he or she considers an acceptable fail rate.

If he feels that only 1% of the calls may “fail”, then he can probably take even fewer than 30. If he feels that your fail rate should be around 50%, he will need to take more samples to be just as sure.

For instance: you're about to buy a truckload of very cheap apples, 500 of them. The salesperson tells you 1 out of every 100 apples is rotten, so you pick up and inspect 10 at random. If you find 5 out of 10 to be rotten, you will doubt his claim. You will certainly not believe him if he says that you “by chance” took the 5 rotten ones.

Now if he told you 40 out of 100 are rotten, and you again found 5 (out of 10) you could not as easily have refuted his claim and you would need to inspect more.
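The apple comparison can be made exact with the hypergeometric distribution, which describes sampling without replacement. A sketch with the numbers from the example (500 apples, 10 inspected, 5 found rotten; function name illustrative):

```python
from math import comb

def prob_at_least(k, K, N=500, n=10):
    """P(at least k rotten apples in a sample of n drawn without
    replacement from N apples of which K are rotten) — hypergeometric tail."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(n, K) + 1)) / comb(N, n)

print(prob_at_least(5, K=5))    # claim: 1% rotten  -> essentially impossible
print(prob_at_least(5, K=200))  # claim: 40% rotten -> quite plausible
```

Under the 1% claim, seeing 5 rotten apples in a sample of 10 has a probability on the order of one in a billion; under the 40% claim it happens more than a third of the time, so the same evidence refutes one claim and not the other.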

The same rationale holds for your call centre fail rate: if standards are high, they only need a very small sample to find fault with certainty. So the sample size of 30 probably has everything to do with what your boss considers an acceptable fail rate.

Nobody on this forum can tell you if your boss is wrong if you don’t know the fail rate standard.

How can you tell if the apple-salesman is wrong without knowing his claim?

vinux actually said it all in the first reply:
“It actually depends on your limit of margin of error”

#### Dragan

##### Super Moderator
> Maybe I'm not making myself clear enough. So lets do it this way: If you have a 1000 question true/false exam. How big of a sample size, of the 1000 questions would you need to accurately represent the whole 1000 question exam? 30? 100? 500? Help me out fellas

What is the standard error associated with the test you're referring to above?