Statistics Help @ Talk Stats Forum - Statistics
http://www.talkstats.com/
Statistics course and homework discussion. Elementary statistics. (Feed retrieved Tue, 21 Oct 2014 08:02:35 GMT)
CLT and sampling distribution of the sample mean
http://www.talkstats.com/showthread.php/58127-CLT-and-sample-distribution-of-sample-mean?goto=newpost
Tue, 21 Oct 2014 03:38:48 GMT
On a particular stretch of highway, the state police know that the average speed is 62 mph with a standard deviation of 5 mph. On a busy holiday weekend, the police are concerned that people travel too fast, so they randomly monitor the speeds of a sample of 50 cars and record an average speed of 66 mph. Find the probability of obtaining a sample average speed of 66 mph or more if, in fact, the true average speed on that holiday weekend is still 62 mph. μx̄ = σx̄ =
Where do I even start? I'm having a hard time understanding what to do. (Using Minitab, if that is helpful.)
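Not part of the original thread, but the standard CLT calculation this question points at can be sketched in a few lines of Python (as an alternative to Minitab). The numbers come straight from the problem statement; the key fact is that the sample mean has standard deviation σ/√n, not σ:

```python
import math

# Sampling distribution of the mean: mu_xbar = mu, sigma_xbar = sigma / sqrt(n)
mu, sigma, n, xbar = 62.0, 5.0, 50, 66.0
se = sigma / math.sqrt(n)            # standard error of the mean, about 0.707
z = (xbar - mu) / se                 # how many standard errors 66 is above 62
# Upper-tail probability P(Xbar >= 66) via the complementary error function
p = 0.5 * math.erfc(z / math.sqrt(2))
print(se, z, p)
```

Because 66 mph sits several standard errors above 62, the resulting probability is essentially zero, which is the point of the exercise.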
Posted by ajc in Statistics

Central Limit Theorem Basics Help
http://www.talkstats.com/showthread.php/58120-Central-Limit-Theorm-Basics-Help?goto=newpost
Mon, 20 Oct 2014 17:57:25 GMT
First time on the boards, nice to meet you all!

I need a little help understanding my homework; some of the notation is putting up roadblocks that keep me from completing it.

So to find the probabilities we'll need to use our Z-score: (x̄ − µ)/(σ/√n). I'm a little confused about what x̄ is supposed to be. I believe it to be the mean of the sample drawn from the distribution...
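For what it's worth, a tiny sketch with made-up numbers (the sample values and population parameters below are hypothetical) makes x̄ concrete: it is just the ordinary average of the n observations you drew, and the Z-score measures how many standard errors that average sits from µ:

```python
import math

# x̄ is simply the arithmetic mean of the n observations in your sample.
sample = [98.2, 101.5, 97.8, 103.1, 99.4, 100.0]   # hypothetical data
n = len(sample)
xbar = sum(sample) / n                  # x̄ = 100.0 for this sample
mu, sigma = 96.0, 4.0                   # hypothetical population parameters
z = (xbar - mu) / (sigma / math.sqrt(n))  # (x̄ − µ) / (σ/√n)
print(xbar, z)
```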
Posted by Meteo in Statistics

Universe size varies between the original and the material to compare
http://www.talkstats.com/showthread.php/58115-Universe-size-vary-between-original-and-the-material-to-compare?goto=newpost
Mon, 20 Oct 2014 07:08:52 GMT
This might seem quite trivial to a stats whizz, but I just can't wrap my head around it.

So right now, I'm working on a project to validate the accuracy of Optical Character Recognition (OCR). We have 50 samples that we've taken from the recognized material.

We do not have the ground truth, that is, the original text as a computer string, so we can't compare the two with a computerized Levenshtein-distance calculation. Instead, we've compared the scan and the recognition manually.

We do have an estimation of how many characters there are in the original.

Now we have the problem of deciding which universe to take from. The thing with OCR is that it tends to add and delete characters, and therefore the universe size of the original isn't the same as the universe size of the recognized text.

So if the word "jam" in the original has been recognized as "../ja*n.", we've got the inserted "../", the "m" recognized as "*n" (which we count as an insertion and a substitution), and an inserted dot ('.') at the end. If we set the universe to the recognized text, then out of the 8 recognized characters, 6 were wrong, giving an error rate of 6/8. But if we instead set the universe to the original, the number of errors would still be 6 while the universe would only be 3 characters, an error rate of 6/3, which is a lot worse.

Is there any way of coping with this problem? Or is it only a matter of choosing the comparison that best suits our needs?
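Not from the thread, but for reference: the usual convention in OCR evaluation is to normalise the Levenshtein distance by the length of the *reference* (original) text, which is exactly why character error rates above 100% are possible when the recognizer inserts many characters. A minimal sketch using the poster's own "jam" example:

```python
# Character error rate (CER) as commonly defined in OCR evaluation:
# Levenshtein distance divided by the length of the reference text.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

reference, hypothesis = "jam", "../ja*n."
dist = levenshtein(reference, hypothesis)
cer = dist / len(reference)   # conventionally normalised by the reference length
print(dist, cer)
```

Under this convention the example yields 6 errors against a 3-character reference, i.e. a CER of 200%: a legitimate, if alarming, figure, and the reason the reference side is the standard choice of "universe".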
Posted by Schweutsch in Statistics