1. Re: Is the dice normal?

Originally Posted by Dason
If you flipped a coin three times and it came up heads three times in a row would your immediate conclusion be that the coin isn't fair?
If it's only 3 flips, I don't have the confidence to draw a conclusion. If I flipped that coin 50 times I could draw a conclusion, and after 300 flips I would have strong confidence in it.
But if I had to draw a conclusion from only 3 flips, then "the coin is not fair" is closer to the truth than "the coin is fair". Am I right?

2. Re: Is the dice normal?

Think of your null hypothesis as being a defendant in a court case. You assume your null hypothesis (e.g. the coin is fair) is true until proven otherwise. Only with sufficient evidence can you _reject the null hypothesis_. Supposing you've flipped the coin three times, is that really enough evidence to reject the null hypothesis?

3. Re: Is the dice normal?

Originally Posted by nicegirl
If it's only 3 flips, I don't have the confidence to draw a conclusion.
The probability of getting 3 or more heads in 3 flips is approximately the same as the probability of seeing the results in your original post with the dice. They both have the same p-value, so if one of those events isn't enough evidence to convince you (the coin flips), why should the other be enough to convince you (the dice)?
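For concreteness, the one-sided probability in question is straightforward to compute (a minimal Python check):

```python
# Probability of 3 or more heads in 3 flips of a fair coin: only one
# outcome (HHH) out of the 2**3 equally likely sequences qualifies.
p_all_heads = 0.5 ** 3
print(p_all_heads)  # 0.125

# The two-sided analogue (all heads OR all tails) is twice as likely.
p_extreme = 2 * p_all_heads
print(p_extreme)  # 0.25
```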

4. Re: Is the dice normal?

I'm interested in testing RNGs and PRNGs for randomness. I had been playing with a simulated (six-sided) die roll, and began averaging the chi-square results of 300-toss tests. My PRNG nicely generated means of very nearly 5.0. I got the idea of introducing a small amount of bias to see the effect on the mean.

The raw pseudo-random source generates real numbers between zero and one. Simulated die face 1 (for example) occurs for real numbers from zero to 0.16666. One interesting test was to drop this from 0.16666 to 0.158, which also had the effect of increasing the die face 2 slice from 0.16666 to 0.175. The other four slices remained at 0.16666 each. The mean of the chi-square statistics then increased from 5 to 5.26.
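The slicing described above can be reconstructed as a short sketch (Python; the cut points are the ones quoted above, and the helper name `biased_face` is mine):

```python
import random

# Biased slice widths: face 1 shrinks to 0.158, face 2 grows to 0.175,
# faces 3-6 keep their fair width of 1/6 each.
widths = [0.158, 0.175] + [1 / 6] * 4
cuts = []
running = 0.0
for w in widths:
    running += w
    cuts.append(running)

def biased_face(u):
    """Map a uniform real u in [0, 1) to a die face 1..6."""
    for face, cut in enumerate(cuts, start=1):
        if u < cut:
            return face
    return 6  # widths sum to slightly under 1, so catch the remainder

rng = random.Random(0)
rolls = [biased_face(rng.random()) for _ in range(100_000)]
frac1 = rolls.count(1) / len(rolls)
frac2 = rolls.count(2) / len(rolls)
print(frac1, frac2)  # face 1 should now be rarer than face 2
```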

Now, with the unbiased PRNG I was also keeping a record of so-called failures at the
0.01 level. I noticed that such "failures" did indeed occur about once every 100 tests,
just as expected with random sequences. In fact, one might test a source in this way ...
checking for the expected failure frequency at a selected p_level.

Tests such as the chi-square mean test appeal to me since they don't suffer from
the contradictions inherent in testing to p_levels. It would be an interesting exercise
to discover how to formalize such a test.

Art

5. Re: Is the dice normal?

Originally Posted by ArtK
Tests such as the chi-square mean test appeal to me since they don't suffer from
the contradictions inherent in testing to p_levels. It would be an interesting exercise
to discover how to formalize such a test.
I don't understand what you're saying here or which tests you're actually referring to.

6. Re: Is the dice normal?

Originally Posted by Dason
I don't understand what you're saying here or which tests you're actually referring to.
By "chi-square mean test" I was referring to my procedure of averaging the chi-square results of 300 tosses of a die. The 300-toss test is repeated (say) 1,000 times, the resulting chi-square statistics are summed, and the sum is then divided by 1,000. If the die is unbiased, the average (mean) is the degrees of freedom, which is 5 for a six-sided die. If the die is biased enough, there is a significant departure from the mean of 5.
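A hedged reconstruction of this procedure in Python (the function names are mine; the fair die is always used as the null when computing the statistic):

```python
import random

def chisq_stat(counts, n_tosses):
    """Pearson chi-square statistic against a fair six-sided die."""
    expected = n_tosses / 6
    return sum((c - expected) ** 2 / expected for c in counts)

def mean_chisq(probs, n_tosses=300, n_reps=1000, seed=0):
    """Average the chi-square statistic over many repeated samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reps):
        sample = rng.choices(range(6), weights=probs, k=n_tosses)
        counts = [sample.count(f) for f in range(6)]
        total += chisq_stat(counts, n_tosses)
    return total / n_reps

# Fair die: the long-run mean should sit near the df of 5.
fair_mean = mean_chisq([1 / 6] * 6)
# Biased die (face 1 slice 0.158, face 2 slice 0.175, as described earlier):
biased_mean = mean_chisq([0.158, 0.175, 1 / 6, 1 / 6, 1 / 6, 1 / 6])
print(fair_mean, biased_mean)
```

With 1,000 repetitions the fair-die mean hovers near 5 while the biased die's mean is pulled upward, consistent with the 5.26 figure quoted earlier: the expected shift is the noncentrality, roughly 300 times the sum of squared probability deviations each divided by 1/6, which works out to about 0.26 here.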

Is that sufficient elaboration?

Art

7. Re: Is the dice normal?

What did you mean by "p_levels", and what contradictions were you referring to?

8. Re: Is the dice normal?

Note: The chi-square "goodness of fit" test is certainly not without its problems. No one that I can think of would recommend it as a test of whether data are normally distributed. Usually the Shapiro-Wilk, Kolmogorov-Smirnov, Anderson-Darling tests, etc. are suggested. In short, it suffers from the so-called "excessive power problem."

That said, and I'm aware that this may very well be beyond the scope of the OP's question, an approach that could be used to tackle this problem is the Bootstrap.

Specifically, what one could do is determine the population parameters associated with a Discrete Uniform probability mass function on the range 1,...,6: (i) Mean = 7/2, (ii) Variance = 35/12, (iii) Skewness = 0, and (iv) Kurtosis (excess) = -222/175.

You then take your 300 data points and bootstrap, say, 95% confidence intervals on the Mean, Variance, Skewness, and Kurtosis, and subsequently check whether the population parameters (above) fall within the bootstrap confidence intervals.
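A minimal sketch of that bootstrap check (Python standard library only; the helper names are mine, and the 300 rolls are simulated from a fair die for illustration):

```python
import random

def moments(xs):
    """Mean, variance, skewness, and excess kurtosis of a sample."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / n / var ** 1.5
    kurt = sum((x - m) ** 4 for x in xs) / n / var ** 2 - 3
    return [m, var, skew, kurt]

def bootstrap_cis(data, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence intervals for the four moments."""
    rng = random.Random(seed)
    n = len(data)
    boot = [moments([rng.choice(data) for _ in range(n)])
            for _ in range(n_boot)]
    cis = []
    for k in range(4):
        vals = sorted(b[k] for b in boot)
        cis.append((vals[int(alpha / 2 * n_boot)],
                    vals[int((1 - alpha / 2) * n_boot) - 1]))
    return cis

rng = random.Random(1)
data = [rng.randint(1, 6) for _ in range(300)]  # simulated fair die
targets = [7 / 2, 35 / 12, 0.0, -222 / 175]     # discrete-uniform values
cis = bootstrap_cis(data)
for (lo, hi), t in zip(cis, targets):
    print(f"{lo:.3f} <= {t:.3f} <= {hi:.3f} : {lo <= t <= hi}")
```

For a genuinely fair die, each population value should land inside its interval about 95% of the time; a biased die would push one or more targets outside.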

This is an approach I used in the context of Benford's Law, i.e. the same idea.

9. Re: Is the dice normal?

Originally Posted by Dason
What did you mean by "p_levels", and what contradictions were you referring to?
A p_level is a chosen probability threshold for a test of randomness. Let's say 0.01 is chosen. A sequence is considered "not random" if p is less than the p_level. I take issue with this, since random sources are expected to "fail" at the 0.01 level approximately once every 100 times. A proper test of a source of randomness would involve thousands of tests, to see whether "failures" at the 0.01 level occur significantly more often than once in 100 times (a single failure by itself is meaningless).
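That frequency check can itself be made formal with an exact binomial tail (a Python sketch under the stated setup: 300-toss chi-square tests of a fair simulated die, with a failure declared when the statistic exceeds the 0.01 critical value):

```python
import random
from math import comb

CRIT_01_DF5 = 15.086  # chi-square critical value for alpha = 0.01, df = 5

def fails_at_01(rng, n_tosses=300):
    """One 300-toss chi-square test of a fair die; True if it 'fails'."""
    sample = rng.choices(range(6), k=n_tosses)
    expected = n_tosses / 6
    stat = sum((sample.count(f) - expected) ** 2 / expected
               for f in range(6))
    return stat > CRIT_01_DF5

rng = random.Random(0)
n_tests = 1000
n_failures = sum(fails_at_01(rng) for _ in range(n_tests))

# Exact binomial upper-tail probability of seeing at least this many
# failures if the true failure rate really is 0.01.
p_upper = sum(comb(n_tests, i) * 0.01 ** i * 0.99 ** (n_tests - i)
              for i in range(n_failures, n_tests + 1))
print(n_failures, p_upper)
```

For an unbiased source one expects roughly 10 failures in 1,000 tests; a very small `p_upper` would indicate failures occurring significantly more often than the chosen level predicts.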

That's one of the tests I mentioned. It was interesting to see that the unbiased generator did indeed produce
"failures" at the 0.01 level at a frequency of nearly 1 in 100 tests ... while the biased generator produced
a noticeably higher frequency.

Art
