What do statistical tests really do?

noetsi

Fortran must die
#21
What you can't do with absolute certainty is eliminate confounds, nor can you control for all of the thousands of covariates that influence real-life results. That is why random selection and experimentally manipulating levels of a given X are so critical.

As an analyst I accept, as I think the great majority of analysts and academics do, that correlational results reflect real forces at work. There are not a lot of options in social science research (unlike, say, medical research). For one thing, in many areas (such as government social programs) random assignment is not even legal. But that is different from accepting philosophically that correlation shows causation. I hope it does; I cannot know that for certain. All you can say is that it seems likely this is the case.
 

hlsmith

Omega Contributor
#22
The masses have been told for decades now that correlation doesn't equal causation, and now that has been generalized to "statistics don't equal causality," when they can. Just think of natural experiments: the Nazis blockaded roads to towns (e.g., in Holland) where people were then starved during the winter. These people had measurable health outcomes, there was no randomization, and they can serve as their own historic controls. Would you believe a causal effect related to this?
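To make the "own historic controls" idea concrete, here is a minimal Python sketch with invented numbers (the outcome, effect size, and sample size are purely illustrative): each person's outcome during the blockade winter is compared with their own outcome from a normal period, so stable personal characteristics cancel out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(70, 10, n)           # e.g., weight (kg) in a normal period
during = baseline + rng.normal(-6, 3, n)   # the same people during the famine winter

# Paired test: each person is their own control, so stable traits cancel out
t, p = stats.ttest_rel(during, baseline)
print(f"mean change = {(during - baseline).mean():.1f} kg, t = {t:.2f}, p = {p:.3g}")
```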

People just like to be ignorant and say, hey, ice cream sales are correlated with shark attacks, to mock the concept, when there are decent uses of statistics in well-designed studies. In addition, conclusions should be made given the totality of knowledge and studies. And the idea that researchers don't care about the dissemination of their results is a ridiculous statement and a blanket generalization.
 

rogojel

TS Contributor
#23
Hi,
I think the problem is the term "prove". I agree with noetsi, and I would even generalize his statement a bit: science cannot "prove" causality in the sense in which a mathematical theorem can be proven. Statistics, experiments, and all the other tools can give a very strong indication that a causal link exists, but there is always a residual probability, however small, that we were fooled by some unlikely combination of confounders. This cannot happen in maths, so its proofs are absolute truths. The question is, how much residual probability of error are we willing to tolerate before accepting that something is practically proven?
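One way to make the residual-probability idea concrete: the significance level we test at is exactly the error rate we agree to tolerate. A minimal simulated sketch (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, false_alarms = 10_000, 0
for _ in range(trials):
    x = rng.normal(size=50)
    y = rng.normal(size=50)          # independent of x by construction
    r, p = stats.pearsonr(x, y)
    if p < 0.05:                     # the "residual error" we agreed to tolerate
        false_alarms += 1

print(f"fooled by pure noise in {false_alarms / trials:.1%} of trials")  # about 5%
```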
 

rogojel

TS Contributor
#24
The only way the differences could be systematic is, of course, that something is causing them, which addresses the second question.
I just mean that the first question is legitimate in itself. First you need a strong indication that there is a difference before you would commit any effort or resources to thinking about why there would be one.

regards
 

CowboyBear

Super Moderator
#25
A second question is, did something in the county actually cause the counselor-to-customer ratio to be different, in which case a statistical test would tell you if a given factor caused this (in a correlational sense; I do not believe causality can ever be addressed by statistics).

Having gone back and reread CWB, I think that is what he is suggesting in part.
I don't know what you mean by "caused this (in a correlational sense". I mean caused as in caused. Making inferences about causality is hard, and the assumptions necessary are different from those needed to make inferences about correlations, but yes, of course you can address questions of causality with statistics! There is a ton of literature on this; read articles by Pearl, Rubin, etc.

EDIT: Note also that saying you can't infer causality from observational data is much too simplistic a statement; the reality is much more complicated than that. With neither observational nor experimental data can you "prove" causal effects with absolute certainty. Experimental studies can allow you to make more confident causal claims, but in some cases observational data can also allow for reasonably confident causal claims. Again, there are absolutely tons of literature on these issues; have a look on Google Scholar.
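As a toy version of what that literature formalizes, here is a simulated sketch (all coefficients invented) in which a confounder Z biases the raw X-Y slope, and adjusting for Z recovers the true causal effect, the backdoor-adjustment idea associated with Pearl:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=n)                      # confounder
x = 1.5 * z + rng.normal(size=n)            # treatment influenced by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # outcome; true effect of x is 2.0

# Naive regression of y on x alone (biased by the backdoor path through z)
naive = np.polyfit(x, y, 1)[0]

# Adjusted regression of y on x and z (blocks the backdoor path)
X = np.column_stack([x, z, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"naive slope:    {naive:.2f}")   # noticeably above 2.0
print(f"adjusted slope: {beta[0]:.2f}") # close to the true 2.0
```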
 

hlsmith

Omega Contributor
#26
Thanks CB, I felt like I was alone there. I have been studying causality in depth for the past five years. There are other great leaders in the area worth mentioning: Imai, Winship, van der Laan, Greenland, Robins, Hernán, and many more. And as CB mentioned, there are additional assumptions related to exchangeability (exogeneity), spillover, etc. You can also use instrumental variables (Mendelian randomization, regression discontinuity) and marginal structural models, not to mention a slew of semiparametric methods.
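For instance, here is a bare-bones sketch of the instrumental-variable idea on simulated data (coefficients invented; a real analysis would use a proper 2SLS routine with corrected standard errors):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument: moves x, affects y only via x
x = z + u + rng.normal(size=n)
y = 2.0 * x + 2.0 * u + rng.normal(size=n)  # true causal effect of x is 2.0

naive = np.polyfit(x, y, 1)[0]              # biased upward by u

b1, b0 = np.polyfit(z, x, 1)                # stage 1: regress x on z
x_hat = b1 * z + b0                         # keep only the z-driven part of x
iv = np.polyfit(x_hat, y, 1)[0]             # stage 2: regress y on predicted x

print(f"naive slope: {naive:.2f}")          # about 2.7
print(f"2SLS slope:  {iv:.2f}")             # close to 2.0
```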
 

noetsi

Fortran must die
#27
Everything I was taught at the university level, including my design of experiments class, argued you cannot prove causality without experimental methods. Obviously some statisticians disagree with that, which is interesting in and of itself, since my experience with reading on this topic showed broad agreement with what I said above. :p

A sign that there is disagreement about pretty much everything in methods, I guess.
 

hlsmith

Omega Contributor
#28
To further chip away at the bedrock of your confidence in the field, I will reinvigorate the statement that all models are wrong!


-Dataset: a realization of a random variable with a probability distribution.


-Model: assumed knowledge of the data-generating probability distribution.


What are the chances that your model was properly specified (e.g., a parametric one)? And if there are many covariates, what are the chances you can accurately capture the underlying probability distribution of such a multi-dimensional data-generating process? Also, how do you know there are no systematic errors or biases arising from the depth of your knowledge of model selection and options, or from your conscious and subconscious human choices?


Well guess what, your model is always wrong.
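A tiny simulated illustration of that point (numbers invented): the true data-generating process below is quadratic, so a perfectly reasonable-looking linear model is wrong by construction, and its errors are systematic rather than random:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, 5_000)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(size=x.size)  # true process is quadratic

slope, intercept = np.polyfit(x, y, 1)      # misspecified linear fit
resid = y - (slope * x + intercept)

# The error is systematic, not random: mean residuals curve with x
for lo, hi in [(-3, -1), (-1, 1), (1, 3)]:
    m = (x >= lo) & (x < hi)
    print(f"x in [{lo:>2}, {hi:>2}): mean residual = {resid[m].mean():+.2f}")
```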
 

CowboyBear

Super Moderator
#29
Everything I was taught at the university level, including my design of experiments class, argued you cannot prove causality without experimental methods. Obviously some statisticians disagree with that, which is interesting in and of itself, since my experience with reading on this topic showed broad agreement with what I said above. :p

A sign that there is disagreement about pretty much everything in methods, I guess.
Hmmm... there's plenty of disagreement about lots of things in stats, sure. I don't think this is really one of those cases, though. The idea that you can't draw causal conclusions from observational data is more just an oversimplification we might teach students early in their studies; later on you should have been taught the fuller, more complicated picture.
 

hlsmith

Omega Contributor
#30
Mainstream articles like this one are why people are put off, or should be put off, by statistics and the idea of causality.


https://www.weforum.org/agenda/2016...al&utm_source=twitter.com&utm_campaign=buffer


But this is how results typically get disseminated. I was at a public event last fall, and I could hear an elderly gentleman behind me talking about Alzheimer's disease; everything he said was a misinterpretation of information from mainstream re-dissemination. Do you ever wonder why infomercials are presented like news stories, or with reiterated, selective re-dissemination of results that may or may not be from peer-reviewed sources? The guy behind me was talking about supplements, and he was going to start regularly drinking coconut milk.


But we who should know better should know better than to trust results not from the original source, and should always critically review everything. There are quality studies out there; we just need to find the signal within the overwhelming noise.
 

noetsi

Fortran must die
#31
The problem is that there are quality studies on both sides of many issues, including this one.

CWB, my graduate class in DOE stressed the limits of non-experimental designs. :p

My own view is simple. I don't believe you can ever be certain with observational data, in terms of causality, for many reasons, including mistakes, failing to include the right variables in the model (which biases the slopes), and so on. But I treat correlational data as suggesting causation even though there cannot be any certainty, because I have no practical choice.

I would guess most practitioners follow a similar approach (well, I would guess most don't spend a lot of time thinking about causality). :p
 

CowboyBear

Super Moderator
#32
The problem is that there are quality studies on both sides of many issues, including this one.
No, not really. A study that says you can only make causal claims using experimental data is just wrong.

My own view is simple. I don't believe you can ever be certain with observational data
This is valid, but it applies to any statistical claim: we can never be certain about any contingent claim about the real world. We can prove mathematical theorems with absolute certainty (e.g., that there are infinitely many primes), but not statements about the real world. This applies to causal claims made via both experiments and observational data. (We may just tend to be more confident of causal claims made on the basis of a true experiment.)
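For contrast, here is Euclid's infinitude-of-primes argument sketched in LaTeX, the kind of claim that does admit absolute proof:

```latex
% Euclid's classic argument that there are infinitely many primes
\begin{proof}
Suppose there were only finitely many primes $p_1, p_2, \dots, p_k$, and let
$N = p_1 p_2 \cdots p_k + 1$. Dividing $N$ by any $p_i$ leaves remainder $1$,
so no $p_i$ divides $N$. But $N > 1$ must have some prime factor, which
therefore lies outside our supposedly complete list, a contradiction. Hence
there are infinitely many primes.
\end{proof}
```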
 

hlsmith

Omega Contributor
#33
There is something really special about the randomization process and the theoretical balancing of known and unknown confounders.


But Noetsi, I bet you didn't know the following about randomized/experimental studies.

Randomized studies have their own issues. Of primary interest is adherence to treatment assignment. You can tell a person they are in a group, but you always have the risk of never-takers and treatment defiers, and the question of whether these deviations are related to the outcome. In addition, you can have differential loss to follow-up and many other issues, like interference (one person's treatment affects the outcome of another person) and contamination (one person's outcome affects the outcome of another person).


Guess what: this is where the exact same observational causal approaches get applied to randomized studies, and these things go beyond intent-to-treat protocols. In addition, the experimental studies can now have some of the exact same flaws as the observational studies, such as correlation in the error terms.
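A toy simulation of the adherence problem (all parameters invented): here "frailty" drives both compliance and the outcome, so the intent-to-treat contrast stays unconfounded but diluted, while a naive as-treated comparison is confounded:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
assigned = rng.integers(0, 2, n)                  # randomized 0/1 assignment
frail = rng.normal(size=n)                        # drives compliance AND outcome
complies = rng.normal(size=n) > 0.8 * frail       # frailer people comply less
treated = (assigned == 1) & complies              # never-takers stay untreated
y = 1.0 * treated - frail + rng.normal(size=n)    # true treatment effect = 1.0

itt = y[assigned == 1].mean() - y[assigned == 0].mean()
as_treated = y[treated].mean() - y[~treated].mean()
print(f"ITT (unconfounded, but diluted by non-adherence): {itt:.2f}")         # about 0.5
print(f"naive as-treated comparison (confounded):         {as_treated:.2f}")  # above 1.0
```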


So imagine that the key causality assumptions, namely exogeneity, no multiple versions of the intervention, positivity (0 < probability of being in either treatment group < 1), no interference, no measurement error, and no model misspecification, are all of a sudden in play, and the analyst needs to build stronger assumptions into their statistical approaches.


This is why I said that one time that animals and plants can't opt out of studies. The majority of the studies we would be interested in are at risk of the above issues, and the big follow-up kicker is that results from human analogs (i.e., animal and plant studies) should not be assumed to translate into the same results you would see in humans. There are also two other relevant sayings: kids aren't little adults, and middle-aged people aren't the same as the elderly, meaning results from these groupings may not translate over either and run into extrapolation problems. I have yet to see a randomized study that did not have protocol application issues or deviations. Everyone assumes randomized studies are the best and without flaws, but this is not always the case.
 

noetsi

Fortran must die
#34
A problem I have with experimental studies (which rarely if ever occur in the areas I did research in or worked, btw) is generalizability. That is, given that 1) relatively few people are typically involved in such studies and 2) they are commonly not randomly chosen from the population (this seems particularly true in medical studies), how certain can you be that the results apply outside the group you conducted the analysis on?

But, at least in the methods books and articles I have seen, this does not seem to be a major concern with experimental methods. So maybe they address it in some way I do not know about.

I am remembering just how many times I have seen it stated that correlation does not equal causation, going on to raise doubts about addressing causality with statistics. :p
 

CowboyBear

Super Moderator
#35
"Correlation does not equal causation" is what you say to a high school student or 101-level undergrad. The reality is more complicated (correlation can equal causation, but it depends on subtle assumptions). Anyway if you're interested, read some literature on causal inference - this discussion seems to be going in circles a bit.
 

hlsmith

Omega Contributor
#36
Noetsi, I think the two main reasons "correlation does not equal causation" comes up are ecological fallacies and people assuming that reported correlations are causal without taking potential confounders into account.


Those are the main times when the statement should be used. Though when studies are conducted taking those concepts, along with the causal assumptions, into account, causation can sometimes be inferred. Just because you feel you keep seeing something stated doesn't mean it is so; take the normality assumption in linear regression, which is routinely confused (even by myself when in school).
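A small simulated sketch of the ecological-fallacy point (groups and numbers invented): the correlation across group averages is strongly positive even though the association within every group is negative:

```python
import numpy as np

rng = np.random.default_rng(6)
groups, means = [], []
for center in [0, 4, 8]:                  # three groups with rising baselines
    x = center + rng.normal(size=1_000)
    y = center - 0.8 * (x - center) + rng.normal(size=1_000)  # negative within
    groups.append((x, y))
    means.append((x.mean(), y.mean()))

for i, (x, y) in enumerate(groups):
    print(f"within group {i}:    r = {np.corrcoef(x, y)[0, 1]:+.2f}")  # about -0.6
mx, my = zip(*means)
print(f"across group means: r = {np.corrcoef(mx, my)[0, 1]:+.2f}")     # about +1.0
```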


I agree with CB, this is getting a little too long-winded and too rocky of a thread. Just let us know if any other ideas are forever cemented in your head and you are impervious to new information. That way, in the future, we will know that if we try to bring forward different points of view you will ignore them with conviction.
 

noetsi

Fortran must die
#37
"Correlation does not equal causation" is what you say to a high school student or 101-level undergrad. The reality is more complicated (correlation can equal causation, but it depends on subtle assumptions). Anyway if you're interested, read some literature on causal inference - this discussion seems to be going in circles a bit.
Anything over 20 posts goes in circles... :p

Just let us know if any other ideas are forever cemented in your head and you are impervious to new information. That way, in the future, we will know that if we try to bring forward different points of view you will ignore them with conviction.
The only idea that is forever cemented in my head is that every time I assume there is general agreement on any methods issue, I am wrong. Over the six years I have come here, I have ended up abandoning much of what I learned in my graduate (as in college, not high school) statistics classes and in my many hours of reading stats to be sure I understand what I do at work.

I am now not convinced anything is really agreed on in statistics, at least for more than five years at a stretch. I have lost track of the number of times I have read an article saying some well-known statistical method is in error and should not be used. :p That is not very different from academia generally, but somehow I assumed statistics was different.
 
#38
Statistical tests can help or guide you in determining whether the evidence is strong enough to support a conclusion or decision. Such tests are relevant to most students doing research, theses, and data analysis.