P values versus slope

noetsi

Fortran must die
#1
I understand that p values don't tell you how important an effect is. But I have variables that are significant (below .05) and not (say .3), and the slope is much higher for the non-significant one, which I am unsure how to interpret. The scale of the predictor in both cases is the same, so it's not a metric issue (it's spending on a given service).

If it matters I have effectively the entire population of interest.
 

Dason

Ambassador to the humans
#2
You say the scale is the same but to me it just sounds like the units are the same. Units can be the same for two different variables but they could have drastically different scales in terms of how spread out they are.
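A minimal sketch of this point, with entirely hypothetical numbers: two predictors are both measured in dollars, with the same true effect and the same noise, but one is tightly clustered and the other widely spread. The slope's standard error depends on the predictor's spread, not its units.

```python
import numpy as np

# Hypothetical illustration: two predictors in the same units (dollars of
# spending) but with very different spreads.
rng = np.random.default_rng(0)
n = 200
x_narrow = rng.normal(1000, 5, n)    # same units, tightly clustered
x_wide = rng.normal(1000, 500, n)    # same units, widely spread

def slope_and_se(x, y):
    """OLS slope and its standard error for a simple regression."""
    xc = x - x.mean()
    b = (xc * (y - y.mean())).sum() / (xc ** 2).sum()
    resid = (y - y.mean()) - b * xc
    sigma2 = (resid ** 2).sum() / (len(x) - 2)
    return b, np.sqrt(sigma2 / (xc ** 2).sum())

# Same true effect (0.5 per dollar) and same noise level for both predictors.
y1 = 0.5 * x_narrow + rng.normal(0, 50, n)
y2 = 0.5 * x_wide + rng.normal(0, 50, n)

b1, se1 = slope_and_se(x_narrow, y1)
b2, se2 = slope_and_se(x_wide, y2)
print(se1, se2)  # the tightly clustered predictor's slope SE is far larger
```

Same units, same true slope, but the narrow predictor's slope is estimated roughly a hundred times less precisely.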
 

noetsi

Fortran must die
#5
I am not sure how that addresses the original point. Or perhaps I should say: if the p value says one model is right and a second model wrong, and the second model has a larger slope (given that it is measured the same way as the first variable), which do you pay attention to?

Do you simply ignore slopes that the p value says are not significant, regardless of relative effect size? The fact that I have the whole population, and know that the effect size is real, makes it even more confusing to me that the p value says the larger effect is not significant and the smaller effect is significant.
 

hlsmith

Not a robit
#6
@noetsi

You seem not to be mentioning the SEs. You can have a big slope but a lot of variability, which plays out as doubt in the estimate. You should select variables based on what you are trying to do and the context. I may opt to control for smoking status even though it isn't significant. I shouldn't have candidate variates in the model that I don't care about or that don't have contextual meaning in the first place. So don't put anything in that you don't care about, and only remove it if you have a contextual reason.
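The slope/SE relationship can be sketched in a couple of lines. The numbers below are toy values (not from the thread's data), and the p-value uses a normal reference distribution as an approximation: the test statistic is slope divided by SE, so a bigger slope with a proportionally bigger SE is the less convincing result.

```python
import math

# Toy illustration: the test statistic is slope / SE, so a larger slope with a
# much larger SE can yield a bigger (less "significant") p-value.
def two_sided_p(slope, se):
    """Approximate two-sided p-value using a normal reference distribution."""
    t = slope / se
    return math.erfc(abs(t) / math.sqrt(2))

small_but_precise = two_sided_p(slope=0.2, se=0.05)  # t = 4
large_but_noisy = two_sided_p(slope=2.0, se=2.0)     # t = 1
print(small_but_precise, large_but_noisy)  # ~6e-5 vs ~0.32
```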
 

noetsi

Fortran must die
#7
I chose my variables based on suggestions from those who work in this area (I have not found much in the way of theory in this field, despite years of searching). So they are all theoretically important.

I am sure you are right about the standard errors. The SE for the variable that had the smaller slope but was significant was a hundred times smaller than the SE for the variable that was not significant but had the larger slope.

I guess the real question is if I should use p values since this is effectively a population.
 

ondansetron

TS Contributor
#8
I am not sure how that addresses the original point. Or perhaps I should say: if the p value says one model is right and a second model wrong, and the second model has a larger slope (given that it is measured the same way as the first variable), which do you pay attention to?

Do you simply ignore slopes that the p value says are not significant, regardless of relative effect size? The fact that I have the whole population, and know that the effect size is real, makes it even more confusing to me that the p value says the larger effect is not significant and the smaller effect is significant.
I would recommend some reading on p-values that covers briefly what they are and then a lot of what they are not. Here is the link: https://link.springer.com/article/10.1007/s10654-016-0149-3#Sec6

I think this may help give you some more insight on p-values and avoid conclusions like "...one model is right and a second model wrong..." based on p-values.
 

noetsi

Fortran must die
#10
I am discussing this issue with a colleague. They argue:

"
However, when building a regression model, the p-values are used to determine whether or not a given predictor variable has a significant impact on the outcome that is being measured. In this context, the p-values show whether or not there is any mathematically calculated reason to keep each predictor variable in the final regression model.



Building regression model is not the same subject as ‘drawing inference from sample mean versus populationmean’. "
 
#11
Yeah, but in every instance you are making a decision based on a binary rule, and why does 0.05 matter instead of 0.1, or, if you have a huge dataset, 0.001? They are just making a bunch of incremental hypotheses. That would probably make a good paper: how many statistical tests get run when building a model (heterogeneity, normality, keeping a variable, etc.)? A lot, that is the answer!
 
#12
I am discussing this issue with a colleague. They argue:

"
However, when building a regression model, the p-values are used to determine whether or not a given predictor variable has a significant impact on the outcome that is being measured. In this context, the p-values show whether or not there is any mathematically calculated reason to keep each predictor variable in the final regression model.



Building regression model is not the same subject as ‘drawing inference from sample mean versus populationmean’. "
If you want to be really strict on your colleague:
1) "significant impact" is a nonsensical statement as "significance" is an arbitrary dichotomization of the outcome for a particular statistical significance calculation and is not a property of the relationship or thing under study (i.e. a "significant relationship" doesn't exist, because "significance" is not a property of any phenomenon which either exists or doesn't or takes on some value or doesnt, but a calculated p-value may be grouped into one of two groups on the basis of the chosen p-value for that calculation).
2) p-values are more accurately described, though still imprecisely, as a continuous summary statistic of how different the observed data are from the expectations of a particular assumption (the null hypothesis). They are not really a "mathematical reason" to keep variables until you apply some subjective criterion to the p-value to make a decision (and the p-value need not be part of the decision at all).
3) Building a regression model generally serves one of two purposes: prediction or inference (you could loosely add description, but generally those two). The prediction and inference objectives can often overlap and blur, but they are frequently different.
4) If your colleague did say "drawing inference from a sample mean versus a population mean," this doesn't really make sense, because inferences are always drawn from the sample values; otherwise it's not a matter of inference.
5) If you want good predictions from the model, that can be very different from wanting to examine relationships and make inferences.

Interested to hear what people can offer as criticisms on the accuracy of my points (even if some are picky).
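Point (2) can be sketched with hypothetical numbers: the p-value is a smooth function of the test statistic, and "significant" only appears once a cutoff is imposed, so two nearly identical results can land on opposite sides of 0.05. The p-value here uses a normal reference distribution as an approximation.

```python
import math

# Sketch of point (2): the p-value is a continuous summary statistic; the
# "significant / not significant" split is imposed by a cutoff, not by the data.
def two_sided_p(t):
    """Approximate two-sided p-value for test statistic t (normal reference)."""
    return math.erfc(abs(t) / math.sqrt(2))

p_a = two_sided_p(1.97)  # hypothetical result A
p_b = two_sided_p(1.95)  # hypothetical, nearly identical result B
print(p_a, p_b)  # ~0.049 vs ~0.051: opposite sides of 0.05, same story
```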
 

noetsi

Fortran must die
#13
Personally my argument is that p values make no sense for a population. The null hypothesis for the parameters is that the effect size is zero. With the entire population, which we have, you know whether the effect size is zero or not. So the p values don't matter. Just as importantly, in some cases the population effect size is clearly not zero and the p value is well above alpha (.3 in one case).

So the p values directly contradict what the population effect sizes show.
 
#14
I would agree that in the event you have the population, the p-value is irrelevant and becomes a mere calculable artifact of finite sample sizes. I would think there is no sampling distribution from which to even calculate a p-value, because there is theoretically no variability in the sample statistic: it is the population value. The whole idea of sampling variation comes from the inability to obtain the whole, and from recognizing that the obtained subset could have been, and will likely be, a slightly different group from the population if sampling occurs again. When you have the population, this is a moot point, in my opinion.
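A minimal sketch of this point, with a made-up "population": when you hold the whole population, the slope is simply a computed property of it, and sampling variability only appears when you draw subsets.

```python
import numpy as np

# Hypothetical finite population: with all of it in hand, the slope is a fact
# of the population, not an estimate.
rng = np.random.default_rng(1)
N = 5000
spend = rng.uniform(0, 100, N)
outcome = 0.3 * spend + rng.normal(0, 40, N)  # the full population's data

# The population slope: nothing to infer, just a computed number.
pop_slope = np.polyfit(spend, outcome, 1)[0]

# Sampling variability only arises once you draw subsets of the population.
sample_slopes = []
for _ in range(200):
    idx = rng.choice(N, 50, replace=False)
    sample_slopes.append(np.polyfit(spend[idx], outcome[idx], 1)[0])
print(pop_slope, np.std(sample_slopes))  # samples scatter around the fixed value
```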
 

Dason

Ambassador to the humans
#15
That's all true. *If* you don't consider any other observations to be of any interest (which includes possible future observations). Want to consider whether the effect might hold at any point in the future? Sorry - you don't have the complete population any more.
 
#16
That's all true. *If* you don't consider any other observations to be of any interest (which includes possible future observations). Want to consider whether the effect might hold at any point in the future? Sorry - you don't have the complete population any more.
When I was typing the post I started going into "of course...assumes static population..." but then was like "you're typing too much"! (And I considered talking about error in "obtaining the population," where observations are missed or inappropriately sampled from another population.)
 

noetsi

Fortran must die
#17
If you mean that the population might change in the future that is true, but then I think that population does not actually exist at this point in time. I am not sure of the logic of saying our population is a subsample of a population that does not actually exist. :p
 
#18
If you mean that the population might change in the future that is true, but then I think that population does not actually exist at this point in time. I am not sure of the logic of saying our population is a subsample of a population that does not actually exist. :p
This would then be the example where you can't, for one reason or another, obtain the population: anything you get is a sample, and you need to put some measure of statistical reliability (a confidence coefficient or an alpha level) on an inference, for example.