Assume that φ is a most powerful (MP) test at level α for testing θ = θ0 against θ = θ1, for a given θ1 in Θ1. This just means that for every parameter θ1 in Θ1, we have used Neyman-Pearson to find a most powerful test, and (coincidentally) they all yielded the same test φ; if they hadn't, we could not have continued. Assume in addition that E_θ[φ] ≤ α for all θ in Θ0, but for θ = θ0 we especially have E_θ0[φ] = α (equality). Then φ is UMP.

We have to prove that if ψ is any other test for our situation with E_θ[ψ] ≤ α for all θ in Θ0, then E_θ1[ψ] ≤ E_θ1[φ] for every θ1 in Θ1.

Anyway, for the given parameter θ0 we started with, we especially have E_θ0[ψ] ≤ α, since θ0 is in Θ0. Assume now that θ1 is any parameter in Θ1. Since ψ is another test of θ = θ0 against θ = θ1, Neyman-Pearson says that the power of ψ at θ1 cannot exceed that of the MP test φ, because the criterion for the type-1 error is obviously satisfied for ψ: E_θ0[ψ] ≤ α. Hence E_θ1[ψ] ≤ E_θ1[φ] for every θ1 in Θ1, and φ is UMP.
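The argument can be summarized in display form; here φ denotes the candidate test, ψ an arbitrary competing test, α the level, and Θ0, Θ1 the null and alternative parameter sets (this notation is mine, chosen to make the steps explicit):

```latex
% Sketch of the UMP argument
\begin{align*}
\text{Assumptions:}\quad
  & E_\theta[\varphi] \le \alpha \ \ \forall\, \theta \in \Theta_0,
    \qquad E_{\theta_0}[\varphi] = \alpha, \\
  & \varphi \text{ is MP at level } \alpha \text{ for }
    H_0\colon \theta = \theta_0 \ \text{vs}\ H_1\colon \theta = \theta_1,
    \ \ \forall\, \theta_1 \in \Theta_1. \\
\text{Competitor:}\quad
  & E_\theta[\psi] \le \alpha \ \ \forall\, \theta \in \Theta_0
    \ \Longrightarrow\ E_{\theta_0}[\psi] \le \alpha. \\
\text{Neyman--Pearson:}\quad
  & E_{\theta_1}[\psi] \le E_{\theta_1}[\varphi]
    \ \ \forall\, \theta_1 \in \Theta_1
    \ \Longrightarrow\ \varphi \text{ is UMP at level } \alpha.
\end{align*}
```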

Now, P(T < -1.99) = 0.027 and P(T > 1.99) = 0.027, so my two-sided p-value = 0.054.
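A quick sketch of how such a two-sided p-value is computed, assuming a t statistic; the degrees of freedom are not stated in the post, so df = 70 below is a hypothetical value for illustration only:

```python
from scipy import stats

# Two-sided p-value for an observed t statistic of -1.99.
# df = 70 is a placeholder -- substitute your actual degrees of freedom.
t_obs = -1.99
df = 70

# Double the upper-tail probability of |t_obs| to get both tails.
p_two_sided = 2 * stats.t.sf(abs(t_obs), df)
print(p_two_sided)
```

With your own df, this should reproduce the 0.027 + 0.027 = 0.054 calculation above.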

I need help explaining: what is the z ratio?

and

What p-value would be associated with the observed z ratio?
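For what it's worth, the z ratio is the sample mean minus the hypothesized population mean, divided by the standard error of the mean; the associated two-sided p-value comes from the standard normal distribution. A minimal sketch with made-up numbers (the sample values below are hypothetical, not from the question):

```python
import math
from scipy import stats

# Hypothetical sample: mean 103, hypothesized mean 100,
# population sd 15, n = 50.
xbar, mu0, sigma, n = 103.0, 100.0, 15.0, 50

# z ratio: distance from the null mean in standard-error units.
z = (xbar - mu0) / (sigma / math.sqrt(n))

# Two-sided p-value: probability of a |Z| at least this large.
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```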

Thanks.

i say this is from a 'social science' perspective because i will make use of the classical test theory model (and its assumptions), but i'm sure it's much more generalizable to other areas of statistics. it really doesn't need more than an understanding of the basic properties of covariance algebra. i took the bulk of it from the Bollen (1989) book, although this is the univariate case (he deals mostly with the multivariate approach, and the matrix algebra can get in the way of understanding things, so i worked on these slides when i was a TA last year).

in any case, here we go!

let us define two observed scores X1 and X2 as:

X1 = T1 + e1
X2 = T2 + e2

so T stands for 'true score' and e for 'error', with the properties that the error has a mean of 0, a variance Var(e), and some people like to say it's normally distributed (<--- not a necessary assumption, but it makes some things nicer later on). it is also important to keep in mind that e is UNCORRELATED with T, so the errors and the true scores have a correlation/covariance of 0. the errors are also UNCORRELATED among themselves. in the kind of research that we do, we want to say stuff about T, the true score, because that is the variable that measures the construct of interest. everything else gets in the way, so we want to minimize its impact.

this leads to some basic (yet illustrative) ideas:

Var(X1) = Var(T1 + e1) = Var(T1) + Var(e1) + 2 Cov(T1, e1) = Var(T1) + Var(e1)

a similar argument can be made for X2, and we get our first interesting result: measurement error INFLATES variances.
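a quick simulation makes the variance inflation concrete (the specific numbers, Var(T) = 100 and Var(e) = 25, are hypothetical choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Classical test theory model: X = T + e, with Cov(T, e) = 0.
T = rng.normal(50, 10, n)   # true scores, Var(T) = 100
e = rng.normal(0, 5, n)     # errors, mean 0, Var(e) = 25
X = T + e                   # observed scores

# Observed variance is Var(T) + Var(e): inflated from 100 to ~125.
print(X.var())
```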

now, what about covariances?

Cov(X1, X2) = Cov(T1 + e1, T2 + e2) = Cov(T1, T2) + Cov(T1, e2) + Cov(e1, T2) + Cov(e1, e2) = Cov(T1, T2)

uhm... interesting. it appears that, under the previously described model, measurement error has no influence on the covariance of the observed scores X1 and X2.
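same trick to check this one by simulation (again with hypothetical numbers: Cov(T1, T2) = 50, error variances of 25):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two correlated true scores with Cov(T1, T2) = 50,
# plus independent errors on each.
cov_T = np.array([[100.0, 50.0],
                  [50.0, 100.0]])
T1, T2 = rng.multivariate_normal([0.0, 0.0], cov_T, n).T
X1 = T1 + rng.normal(0, 5, n)
X2 = T2 + rng.normal(0, 5, n)

# The errors drop out: Cov(X1, X2) stays ~50, same as Cov(T1, T2).
print(np.cov(X1, X2)[0, 1])
```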

now, with these elements, we can re-express the correlation coefficient as:

Corr(X1, X2) = Cov(T1, T2) / sqrt[ (Var(T1) + Var(e1)) (Var(T2) + Var(e2)) ]

so the correlation between the observed scores is ATTENUATED: the error variances inflate the denominator, making Corr(X1, X2) smaller in absolute value than the correlation between the true scores.
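you can see the attenuation directly: with true scores correlating at 0.5 and error variances of 25 on each (my hypothetical numbers), the observed correlation should drop to 50 / sqrt(125 * 125) = 0.4:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# True scores correlate at 0.5 (Cov = 50, Var = 100 each).
cov_T = np.array([[100.0, 50.0],
                  [50.0, 100.0]])
T1, T2 = rng.multivariate_normal([0.0, 0.0], cov_T, n).T
X1 = T1 + rng.normal(0, 5, n)   # add Var(e) = 25 of error
X2 = T2 + rng.normal(0, 5, n)

rho_true = np.corrcoef(T1, T2)[0, 1]   # ~0.5
rho_obs = np.corrcoef(X1, X2)[0, 1]    # attenuated to ~0.4
print(rho_true, rho_obs)
```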

and a similar development can be made for regression coefficients: in the case of trying to estimate the true regression of an outcome on T while only observing X, one can show that the regression coefficient is attenuated and does not estimate the true coefficient even if the sample size grows to infinity (i.e., it is not consistent).
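a sketch of that regression attenuation, under my hypothetical setup (true slope 2, Var(T) = 100, Var(e) = 25, so the slope on the observed predictor should shrink by the factor 100/125 = 0.8, to 1.6):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# True regression: Y = 2 * T + noise; we only observe X = T + e.
T = rng.normal(0, 10, n)            # Var(T) = 100
Y = 2 * T + rng.normal(0, 1, n)
X = T + rng.normal(0, 5, n)         # Var(e) = 25

# OLS slope = Cov(predictor, Y) / Var(predictor).
beta_true = np.cov(T, Y)[0, 1] / T.var()   # ~2.0
beta_obs = np.cov(X, Y)[0, 1] / X.var()    # attenuated to ~1.6
print(beta_true, beta_obs)
```

the attenuation factor Var(T) / (Var(T) + Var(e)) is what psychometricians call the reliability of the measure, and it does not go away as n grows: only better measurement fixes it.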

now, i know the model for classical test theory sounds constricted and artificial. but then again, we're social scientists, so we get a free pass! :D