- Thread starter: bizan
- Tags: likelihood

Before Dason tells me I am wrong, I note that this (the need for normality) is not true in all uses of ML. Normality is, for example, commonly assumed in its use in SEM.

The problem I have is that some signals are generated from two distributions, one N(0,2) and the other N(2,2). I would like a measure of the probability that a sample generated from N(0,2) could be mistaken for one coming from N(2,2).

I am thinking of using the p-value of the two-sample Kolmogorov-Smirnov test.
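A minimal sketch of the two-sample KS test in Python with scipy, assuming the "2" in N(0,2) denotes the standard deviation (use sqrt(2) as the scale if it is the variance). One caveat: the KS p-value quantifies evidence that the two samples come from the same distribution; it is not directly the probability of confusing one sample with the other.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumption: the second parameter of N(., 2) is the standard deviation.
a = rng.normal(loc=0.0, scale=2.0, size=200)  # sample from N(0, 2)
b = rng.normal(loc=2.0, scale=2.0, size=200)  # sample from N(2, 2)

# Two-sample Kolmogorov-Smirnov test
stat, p = stats.ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {p:.4g}")
```

Both the KS statistic and the p-value already live in [0, 1], but note that the p-value depends heavily on sample size, so it is a poor measure of how far apart the distributions themselves are.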


I am varying a parameter h from 0 to a very high value. When h is very high, anyone who sees the signal can tell for sure that the student worked hard.

I could use the difference in means as a measure of how easy it is to tell that a student worked hard, but I would like something bounded between 0 and 1.
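One bounded alternative, sketched under the assumption that both distributions are normal with a common standard deviation (here 2): take 1 minus the overlapping coefficient of the two densities. For equal-variance normals the overlap has the closed form OVL = 2Φ(-|μ1-μ2|/(2σ)), so the separation measure is 0 for identical distributions and approaches 1 as the means move apart. The function name `separation` is just an illustrative choice, not anything standard.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def separation(mu1, mu2, sigma):
    """A [0, 1) measure of how distinguishable two equal-variance normals are:
    1 minus the overlapping coefficient OVL = 2 * Phi(-|mu1 - mu2| / (2 * sigma)).
    Returns 0 for identical distributions, values near 1 when they barely overlap."""
    delta = abs(mu1 - mu2)
    ovl = 2.0 * normal_cdf(-delta / (2.0 * sigma))  # shared area under both densities
    return 1.0 - ovl

# Assumption: the second parameter of N(., 2) is the standard deviation.
print(separation(0.0, 2.0, 2.0))  # ~0.383 for N(0,2) vs N(2,2)
```

As the gap between means grows (e.g. as h increases), `separation` rises monotonically toward 1, which matches the idea that a very high h makes hard work obvious.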