Thread: Help! How to interpret t-tests/t values?

  1. #1
    KristieR

    Hello,

    I'm a journalist researching a story for a mental health publication. A number of studies I'm looking at give their results in terms of "t tests." When I searched on the Web, I found a number of sites that could tell me how to calculate a t test ... but none that told me how to interpret the number once you got it. What does this value mean?

    Here's an example.

    In a study of psychiatric inpatients taking yoga classes, mean scores on a standard test for tension-anxiety went from 5.7 to 3.33. The researchers calculate this has a t-value of 6.67.

    My question: how am I supposed to interpret that t-value? What does that number mean? Why is it more important/valuable than just a percent change? I'm tempted to just calculate the percent change and use that when I'm writing up my story since it's a figure a lot more people will be able to understand ... would that be correct to do?

    Thanks for whatever help you can offer!

  2. #2
    JohnM (TS Contributor)

    I'm going to over-simplify this in order to avoid writing a novel on hypothesis testing.......

    It's pretty much impossible to interpret the t-value without knowing the sample sizes in the study, unless the author directly provides the corresponding probability level for the t-value (usually referred to as a p-value).

    But, very basically, the t-value is a statistic that indicates the size of an effect relative to the variability in the data, judged against a bell-shaped probability distribution. The further it is from 0, the more likely the effect is "statistically significant" (another way of saying that the effect is not likely due to random chance - there is likely an underlying, repeatable cause).

    For the overwhelming majority of situations, a t-value of 6.67 will be "statistically significant." In this specific situation, the researchers can comfortably conclude that the yoga class (assuming any other extraneous factors were well-controlled) was the underlying cause of the reduction in tension-anxiety scores.

    Based on this, it appears that future similar studies would probably reveal the same basic outcome, and would make the yoga classes a viable, effective option in therapy.
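
    If you're curious how a t-value and a sample size turn into a p-value, here is a rough sketch in Python using scipy. The 30 degrees of freedom below is only a placeholder, since your post doesn't give the sample size:

        from scipy import stats

        t_value = 6.67
        df = 30  # placeholder: for a paired before/after test, df = number of patients - 1

        # two-sided p-value: chance of a t this far from 0 if there were truly no effect
        p_value = 2 * stats.t.sf(t_value, df)
        print(p_value)  # a very small number, far below the usual 0.05 cutoff

    The smaller that number, the harder it is to explain the observed drop as random noise.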

    The following link is a very good online resource that may help explain a lot of things:
    http://davidmlane.com/hyperstat/logic_hypothesis.html

    Hope this helps clarify some things. Post back if you need further explanations....

  3. #3
    KristieR

    thanks, and a question

    John, thanks for your reply. I think I have a better sense of what's going on. It does seem to me that the t-value is a figure that's only really meaningful to researchers (since they're the only ones who'll know what it is). I'm writing this for an audience that would probably understand percent change figures more easily. Is there anything that would be strictly incorrect about me calculating percent change and reporting that?

    If it matters, here are answers to a couple of your questions: the sample size is 113 patients, and the paper reports the p value as .000, with a footnote that adds, "Meets Bonferroni Bounds correction for multiple statistical tests at p<.05 level." Seems strange to me that they're saying the p value is zero, since I haven't seen that before, but maybe there's something I'm missing here.


    Best,
    Kristie

  4. #4
    JohnM (TS Contributor)

    Quote Originally Posted by KristieR View Post
    John, thanks for your reply. I think I have a better sense of what's going on. It does seem to me that the t-value is a figure that's only really meaningful to researchers (since they're the only ones who'll know what it is). I'm writing this for an audience that would probably understand percent change figures more easily. Is there anything that would be strictly incorrect about me calculating percent change and reporting that?
    I would at least consult a psychologist/psychiatrist on this. Based on the test scores themselves, one may be tempted to report that tension/anxiety levels were reduced by approx 40%. However, I don't know if that is an appropriate way to frame it here. Try contacting Karen Grace-Martin via e-mail (karen@analysisfactor.com) and ask her to weigh in on this thread (screen name = TheAnalysisFactor). She has a psych background and may be able to comment on this.

    Quote Originally Posted by KristieR View Post
    If it matters, here are answers to a couple of your questions: the sample size is 113 patients, and the paper reports the p value as .000, with a footnote that adds, "Meets Bonferroni Bounds correction for multiple statistical tests at p<.05 level." Seems strange to me that they're saying the p value is zero, since I haven't seen that before, but maybe there's something I'm missing here.
    The p-value isn't 0, it's just extremely small. What it is saying is that if the yoga classes truly had no effect on patients' tension/anxiety levels, then the probability of seeing a drop as large as the one observed (from 5.7 to 3.33) would be extremely remote. Therefore, it is likely that the yoga classes do have an effect on tension/anxiety levels.
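
    To see why a table would print .000, here is a quick sketch using the numbers you gave (113 patients; the 112 degrees of freedom assumes a paired before/after comparison, which is my assumption):

        from scipy import stats

        t_value = 6.67
        df = 112  # assuming a paired test on the 113 patients

        p = 2 * stats.t.sf(t_value, df)
        print(p)            # on the order of 1e-9
        print(round(p, 3))  # 0.0 -- which a results table displays as ".000"

        # Bonferroni correction: with m tests, compare each p to 0.05/m instead of 0.05.
        # A p-value this small clears that bar for any realistic number of tests.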

  5. #5
    TheAnalysisFactor (Ithaca, NY)

    Hi Kristie,

    Thanks for the email. My son's 8th birthday was yesterday, so I haven't been online since Thursday.

    Anyway, I think you're right that the t-value is meaningless to non-researchers and John is right that a percentage change is misleading.

    I think most people would understand if you said "The average anxiety dropped from 5.7 to 3.33 on a 7 point scale (or whatever the scale was)." If you want to add in the p-value, a good way to make it understandable is to say, "This result was found with 99.9% confidence of being true."

    Karen
    The Analysis Factor
    http://TheAnalysisFactor.com

  6. #6
    KristieR

    Karen,

    Thanks for your reply! I am still working on the story, so it comes at a helpful time. I also appreciate your tips.

    You say that a percentage is misleading. I've noticed that a number of psychology papers consistently give t values but not effect size, unlike papers in conventional medicine. If anything, they'll say "the effect size was medium to large," or something similar. Can I ask you why this is? It's very different from most of the medical literature I've come across, where it's very important to know how effective, say, a new treatment for heart failure is versus an older technique.

    Thanks,
    Kristie

  7. #7
    TheAnalysisFactor (Ithaca, NY)

    Hi Kristie,

    I'm saying the percentage change is misleading because it's probably on a 1-7 scale (is that right?). There's a lot of debate about the meaning of the numbers on this kind of scale, but there is no doubt that it lacks a meaningful 0 point. So a 2 is not half as big as a 4, or 50% less.

    In something like weight, blood pressure, or whatever else medical people measure, I suspect they are more likely to have scales with a meaningful 0 point (called a ratio scale, because you can take ratios).
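
    A quick way to see the problem, assuming the items really are scored 1 to 7 (my assumption): merely relabeling the same scale to run 0 to 6 changes the "percent drop," even though nothing about the patients changed:

        before, after = 5.70, 3.33  # mean scores on the (assumed) 1-7 scale

        pct_original = (before - after) / before * 100
        # same data, same information, scale merely relabeled to start at 0
        pct_shifted = ((before - 1) - (after - 1)) / (before - 1) * 100

        print(round(pct_original, 1))  # 41.6
        print(round(pct_shifted, 1))   # 50.4

    Because the zero point is arbitrary, so is the percentage.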

    As for effect sizes, they should be used a lot more in psychology. The small, medium, etc. effect size labels come from a paper by a psychologist named Cohen from over 20 years ago (I don't remember exactly when). He came up with the labels because effect size statistics (like eta-squared) are hard to interpret on their own--they have no meaningful scale. And sometimes the scale of the DV (dependent variable) is not entirely meaningful either. (I mean, sure, we know what a 40-point drop in systolic blood pressure actually means for someone's health, but how much does a 3-point drop in anxiety matter? Is that big or small?) But the labels are overused a bit.
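
    For what it's worth, the most common effect size for a before/after comparison like this is Cohen's d: the mean change divided by a standard deviation. I don't know the standard deviation from this thread, so the value below is purely a made-up illustration:

        mean_before, mean_after = 5.70, 3.33
        sd = 2.5  # hypothetical standard deviation -- not from the study

        cohens_d = (mean_before - mean_after) / sd
        print(round(cohens_d, 2))  # 0.95 with this made-up SD, "large" by Cohen's labels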

    Karen
    The Analysis Factor
    http://TheAnalysisFactor.com

  8. #8
    KristieR

    Thanks!

    Karen,

    I think it might be a 7-point scale, but I'd have to look it up. John and Karen, your comments have been really helpful! I'm about finished with the story and feel like I know a little more about interpreting these statistics now. Thanks again!

    Best,
    Kristie

  9. #9
    JohnM (TS Contributor)

    Kristie,

    This is up to you, but I'd be interested in seeing the story - you can send it via private message if you don't want to post it here.

    Thanks,
    John

  10. #10
    spinningfeather

    Quote Originally Posted by TheAnalysisFactor View Post
    If you want to add in the p-value, a good way to make it understandable is to say, "This result was found with 99.9% confidence of being true."

    Karen
    I've been rambling about this before, and I am sure most people here know this, and I am sure all I do when bringing it up is increase confusion, but it is of course not true to say that ".....99.9% confidence of being true". I don't see how encouraging someone to report something that just does not follow at all from the analysis can make it more understandable. It's just wrong.

    The statistical procedure of p-values IS twisted, and twisting and bending it backwards just does not cut it.

    So if you want to report stuff like the above, then use procedures that allow you to report that.

  11. #11
    JohnM (TS Contributor)

    Quote Originally Posted by spinningfeather View Post
    I've been rambling about this before, and I am sure most people here know this, and I am sure all I do when bringing it up is increase confusion, but it is of course not true to say that ".....99.9% confidence of being true". I don't see how encouraging someone to report something that just does not follow at all from the analysis can make it more understandable. It's just wrong.

    The statistical procedure of p-values IS twisted, and twisting and bending it backwards just does not cut it.

    So if you want to report stuff like the above, then use procedures that allow you to report that.
    Yeah, but the purpose of the article is not to teach the public about statistics (depends on the particular audience - if it is the general public or if it's a group of research scientists or practitioners) - part of the problem is that the author wants to convey the general level of confidence in the results.....

    ......readers who are well-schooled in statistical analysis will prefer a more "correct" interpretation or summary, and those who don't care about statistics will just want to have a sense for the bottom line. If one needs to twist back the statistical findings in order to effectively convey the scientific findings, then I feel that's OK.

    ....the vast majority of people will merely read the article and walk away with the general feeling that the yoga is effective...and that's all that really matters.

    Two weeks after reading the article, all they'll remember (if anything) is that the intervention seemed to work, not the particulars of the statistical confidence level.

  12. #12
    spinningfeather

    Quote Originally Posted by JohnM View Post
    Yeah, but the purpose of the article is not to teach the public about statistics (depends on the particular audience - if it is the general public or if it's a group of research scientists or practitioners) - part of the problem is that the author wants to convey the general level of confidence in the results.....

    ......readers who are well-schooled in statistical analysis will prefer a more "correct" interpretation or summary, and those who don't care about statistics will just want to have a sense for the bottom line. If one needs to twist back the statistical findings in order to effectively convey the scientific findings, then I feel that's OK.

    ....the vast majority of people will merely read the article and walk away with the general feeling that the yoga is effective...and that's all that really matters.

    Two weeks after reading the article, all they'll remember (if anything) is that the intervention seemed to work, not the particulars of the statistical confidence level.
    So, just say that the results indicate it's effective, then. That's a condensed version of what the results say, with the interpretation buried within. At least it's better than throwing around numbers, confidence, evidence, and probabilities in a way that's just not true. Someone might actually think that's what the data show. Then he or she reads more papers and runs around thinking that any little study can show the probability of a hypothesis from a p-value.
