this may take a bit of explaining, so your patience is appreciated. i've encountered a serious anomaly in a well-known encryption algorithm. the problem is that (a) i need to prove that statistically, and (b) i cannot walk away from this and leave it, that would be irresponsible, so my lack of knowledge of applied statistics is something that simply has to be overcome.

first, to describe what the issue is: random data contains 1s and 0s, each with probability exactly 0.5 if the algorithm is good. for practical reasons, however, the tests are done in blocks (of length e.g. 1024 bits, or 1,000,000 bits): you take a p-value from each block, repeat, and then, assuming that the distribution of p-values is uniform, you may take a histogram of the p-values and check that *those* are uniform.

this is exactly what nist.gov's "Statistical Test Suite" does.

the only problem with that approach is the assumption "the p-value distribution is uniform". if it's *not* uniform, then you end up with a mismatch between the histogram bucket size and the distribution of p-values: one bucket ends up with more than its fair share and the histogram is skewed.
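to make the mismatch concrete, here's a short python sketch (my own illustration, not from the nist suite; it assumes the 256-bit blocks and the erfc-based monobit-style p-value described further down, which seems to match the numbers in the table below). it buckets the exact binomial probability mass of each discrete p-value into ten uniform bins; a truly uniform distribution would put 0.1 in each.

```python
import math

N = 256  # assumed block size (matches the experiment described below)

# probability mass landing in each of ten uniform histogram bins over [0, 1]
mass = [0.0] * 10
for zeros in range(N + 1):
    # monobit-style p-value for this exact zero count (my assumption)
    p = math.erfc(abs(N - 2 * zeros) / math.sqrt(2 * N))
    mass[min(int(p * 10), 9)] += math.comb(N, zeros) / 2.0 ** N

print([round(m, 3) for m in mass])  # uniform would be 0.1 everywhere
```

the top bin in particular collects far more than its fair share, because the largest discrete p-values each carry a big lump of probability.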

so, what i have done so far is:

* measure the total number of zeros in a block (of length, say, 256 bits)

* assume a binomial distribution and calculate a p-value

* create a dictionary (in c code) representing a histogram for *each* discrete p-value

* repeat for ten million blocks
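fwiw, here's a minimal python sketch of those four steps (my translation of the c code; the p-value formula is the nist sp 800-22 monobit one, erfc(|S|/sqrt(2n)) with S = #ones − #zeros, which is the normal approximation to the binomial and appears to reproduce the table below):

```python
import math
import random
from collections import Counter

N = 256  # block length in bits

def block_pvalue(zeros: int, n: int = N) -> float:
    # normal-approximation (monobit-style) p-value for `zeros` zeros in n bits
    s = abs((n - zeros) - zeros)  # |#ones - #zeros|
    return math.erfc(s / math.sqrt(2 * n))

histogram = Counter()  # discrete p-value -> count
for _ in range(10_000):  # ten million in the real experiment
    bits = random.getrandbits(N)          # stand-in for the algorithm's output
    zeros = N - bin(bits).count("1")
    histogram[block_pvalue(zeros)] += 1

for p in sorted(histogram):
    print(f"{p:.9f} {histogram[p]}")
```

for n = 256 there are only 129 possible values of |#ones − #zeros|, hence at most 129 distinct p-values, which is the discreteness the table makes visible.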

we get a table as follows, which illustrates graphically that the p-values are *discrete* and *not* continuous. each p-value corresponds to an exact count of 0s in a block, and because the sample size is relatively small there are only a limited number of possible values.

Code:

```
p-value     count
0.000000004 1
0.000000077 1
0.000000152 1
0.000000298 1
0.000000573 3
0.000001088 4
0.000002034 13
0.000003746 14
0.000006795 30
0.000012143 60
0.000021377 113
0.000037073 177
0.000063342 355
0.000106625 540
0.000176835 827
0.000288961 1300
0.000465258 2112
0.000738157 3321
0.001154050 5168
0.001778051 7472
0.002699796 11086
0.004040275 15609
0.005959526 22666
0.008664897 31501
0.012419331 44040
0.017548950 59411
0.024448945 79218
0.033586613 104171
0.045500264 133922
0.060792724 171872
0.080118314 216864
0.104162559 266307
0.133614403 324626
0.169131445 386622
0.211299547 457682
0.260589034 530572
0.317310508 604553
0.381573906 680090
0.453254705 754112
0.531971058 820481
0.617075077 880667
0.707660467 929469
0.802587349 965874
0.900523550 988681
1.000000000 498391
```

what i need is a way to actually prove that, statistically.

here's how i thought to do that:

1) i figured that it should be possible, since each "block" follows a binomial distribution, to calculate a "perfect" histogram. given that each 1 or 0 in a block is (must be!!!) an independent variable, you *should* be able to back-calculate, mathematically, from the block size and the number of runs what the "perfect" histogram should look like.
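that back-calculation is straightforward with the exact binomial pmf; here's a sketch (again assuming the erfc monobit formula for the p-values, which matches the table above, and ten million runs):

```python
import math

N = 256          # block size in bits
RUNS = 10_000_000

def expected_histogram(n: int = N, runs: int = RUNS) -> dict[float, float]:
    # expected count for each discrete p-value, from block size + runs alone
    expected: dict[float, float] = {}
    for zeros in range(n + 1):
        p = math.erfc(abs(n - 2 * zeros) / math.sqrt(2 * n))  # assumed p-value formula
        prob = math.comb(n, zeros) / 2.0 ** n                 # exact binomial weight
        expected[p] = expected.get(p, 0.0) + prob * runs
    return expected

perfect = expected_histogram()
for p in sorted(perfect):
    print(f"{p:.9f} {perfect[p]:.1f}")
```

note that all zero counts except 128 pair up (k and 256−k give the same p-value), which is why the weights are summed into a dict keyed by p-value.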

2) it should then be possible to predict the probability of deviation (of any one discrete p-value) from the "perfect" corresponding value, and that's when i can say "yep, there's definitely a problem" (or not).

actually, what i would do instead of (2) is accumulate the counts below a certain p-value, although testing individual p-values (each of which represents an exact number of 0s/1s in a block) would be nice.
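both variants reduce to the same test: each histogram bucket (and likewise any accumulation of buckets) is itself binomial(runs, prob), so its deviation from the "perfect" value can itself be given a p-value. a sketch under a normal approximation (function names are my own, not from the nist suite):

```python
import math

def bucket_deviation_pvalue(observed: int, prob: float, runs: int) -> float:
    # two-sided p-value for observed ~ Binomial(runs, prob), normal approximation
    mean = runs * prob
    sd = math.sqrt(runs * prob * (1.0 - prob))
    return math.erfc(abs(observed - mean) / (sd * math.sqrt(2.0)))

def tail_deviation_pvalue(observed_hist: dict, expected_probs: dict,
                          runs: int, threshold: float) -> float:
    # accumulate all buckets at or below `threshold` and test the lump sum
    obs = sum(c for p, c in observed_hist.items() if p <= threshold)
    prob = sum(q for p, q in expected_probs.items() if p <= threshold)
    return bucket_deviation_pvalue(obs, prob, runs)
```

with ten million runs the normal approximation is fine for all but the very rarest buckets; for those, an exact binomial tail (or simply the accumulation variant) would be safer.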

thank you for any help that you might be able to give, and i am happy to give credit for it.

l.