+ Reply to Thread
Results 1 to 8 of 8

Thread: Bayesian Convergence

  1. #1
    Omega Contributor
    Points: 38,432, Level: 100
    Level completed: 0%, Points required for next Level: 0
    hlsmith's Avatar
    Location
    Not Ames, IA
    Posts
    7,006
    Thanks
    398
    Thanked 1,186 Times in 1,147 Posts

    Bayesian Convergence




I have to go to bed any minute, so I apologize for the lack of detail.

I was thinking about repeated testing and how probabilities converge. I have a test where 80% of cases test positive and 30% of non-cases test positive, and the prior = 0.28.

So I can look at the posteriors for cases after successive positive tests: if every test is positive, cases will converge to 1 and false positives to 0.

I could simulate that, but if I test a case over and over again, they will eventually get a negative result. How do I incorporate that into the simulation? Over time, the repeated tests will still converge, but how often would I expect a false negative test in a case? I figure this is a straightforward question with an easy answer.
    Stop cowardice, ban guns!

  2. #2
hlsmith

    Re: Bayesian Convergence

Lights out. Is it just 0.20? If so, what does the case converge to? I don't have time to run this right now.
    Stop cowardice, ban guns!

  3. #3
    R purist
    Points: 35,103, Level: 100
    Level completed: 0%, Points required for next Level: 0
    TheEcologist's Avatar
    Location
    United States
    Posts
    1,921
    Thanks
    303
    Thanked 607 Times in 341 Posts

    Re: Bayesian Convergence

    I'm having trouble following you.
    The true ideals of great philosophies always seem to get lost somewhere along the road..

  4. #4
    Devorador de queso
    Points: 95,995, Level: 100
    Level completed: 0%, Points required for next Level: 0
    Awards:
Posting Award, Community Award, Discussion Ender, Frequent Poster
    Dason's Avatar
    Location
    Tampa, FL
    Posts
    12,938
    Thanks
    307
    Thanked 2,630 Times in 2,246 Posts

    Re: Bayesian Convergence

    Quote Originally Posted by TheEcologist View Post
    I'm having trouble following you.
    Me too.


    I don't have emotions and sometimes that makes me very sad.

  5. #5
hlsmith

    Re: Bayesian Convergence

Say you have a diagnostic test. It was validated on 1000 patients against a gold standard and shown to have a sensitivity of 0.8 and a specificity of 0.7; the population it was randomly sampled from had a prevalence of 0.28. That translates to 22.4% true positives, 21.6% false positives, 5.6% false negatives, and 50.4% true negatives. Given that this is how the test performs, now see the following.


You have a person whom the gold standard would show as positive. You give the person the diagnostic and they get a positive result (pre-test probability 0.28, post-test probability 0.509). Now you plug 0.509 in as the new pre-test probability; the person gets another positive test, and the post-test probability increases again. If they keep getting positive tests again and again, the probability will converge to 1 (asymptotically).
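For anyone who wants to check the arithmetic, here is a minimal sketch of that sequential update (the `update` helper and variable names are mine, not from any package; it assumes repeat tests are independent):

```python
# Values from the post: sensitivity 0.8, specificity 0.7, prior 0.28.
sens, spec, prior = 0.8, 0.7, 0.28

def update(p, positive=True):
    """Post-test probability after one result, treating tests as independent."""
    if positive:
        return sens * p / (sens * p + (1 - spec) * (1 - p))
    return (1 - sens) * p / ((1 - sens) * p + spec * (1 - p))

p = update(prior)
print(round(p, 3))   # 0.509, matching the post

# Feed the posterior back in as the new prior; repeated positives
# push the probability toward 1.
for _ in range(10):
    p = update(p)
print(p)             # very close to 1 after 11 positives in a row
```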


Now, say we did the same thing but the person was truly negative, and the person kept getting positive tests; the post-test probability would eventually converge to "0" for a false positive.


Does this make sense? However, in the first scenario (repeated positive tests for a person with the condition), the diagnostic would give a false negative 20% of the time. So this would slow the post-test probability's convergence to 1, since 20% of the time the truly positive person would get a negative result. So if the patient is truly positive, you would expect them to get a positive test 80% of the time and a negative test 20% of the time. The post-test probability would still converge to 1.0 if we keep giving them the test over and over, with mostly positive tests and some false negatives. Does this seem right?
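A minimal simulation sketch of that last paragraph (still assuming repeat tests on one person are independent, which the replies below question; seed and names are arbitrary): a truly positive person tests positive with probability equal to the sensitivity, and the posterior is updated after every result, positive or negative.

```python
import random

random.seed(1)
sens, spec, prior = 0.8, 0.7, 0.28

p = prior
negatives = 0
n_tests = 200
for _ in range(n_tests):
    positive = random.random() < sens   # false negative ~20% of the time
    if positive:
        p = sens * p / (sens * p + (1 - spec) * (1 - p))
    else:
        negatives += 1
        p = (1 - sens) * p / ((1 - sens) * p + spec * (1 - p))

print(f"false negatives: {negatives} of {n_tests}")  # roughly 20%
print(f"posterior: {p:.6f}")                          # close to 1
```

Each positive multiplies the odds by 0.8/0.3 and each negative by 0.2/0.7; on average the log-odds drift upward, so the posterior still heads to 1 despite the occasional false negative.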
    Stop cowardice, ban guns!

  6. #6
hlsmith

    Re: Bayesian Convergence

    Bump, I added more info.
    Stop cowardice, ban guns!

  7. #7
Dason

    Re: Bayesian Convergence

If they would get the same result every time, then after the first test there really is no more information added, so the post-test probability wouldn't change. If that's really the case (the result will stay the same), then you can't treat repeated tests as independent.
    I don't have emotions and sometimes that makes me very sad.

  8. #8
TheEcologist

    Re: Bayesian Convergence


    Quote Originally Posted by hlsmith View Post
Now, say we did the same thing but the person was truly negative, and the person kept getting positive tests; the post-test probability would eventually converge to "0" for a false positive.
Dason's point is key. The reason tests "fail" is only partly due to randomness, so independence cannot be assumed. It's easy to see with an example:

Suppose you see a person from your office entering a university building every day, but you can't determine their gender. On campus, you know the statistics are 60% female and 40% male. You also know that females wear trousers or skirts in equal numbers, while the men always wear trousers.

Now, every day you see this person from a distance, and you see that they are wearing trousers. What is the probability this person is female? From Bayes we know:

    P(F|T) = \frac{P(T|F) P(F)}{P(T)} = \frac{0.5 \times 0.6}{0.5 \times 0.6 + 1 \times 0.4 } = 0.429.

So there is a 0.43 chance this person is female. If we conduct the same calculation every day, and the person wears trousers every day, under your hypothesis the probability that this person is female would drop to 0 rapidly:

    P(F, days) = 0.43^{days}
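In odds form, the naive independent-updates calculation can be sketched as follows (my framing, not from the post; the exact day-by-day posterior decays a bit more slowly than 0.43^{days}, but still heads to 0):

```python
# Prior odds F:M = 0.6/0.4 = 1.5; each trousers sighting multiplies the
# odds by the likelihood ratio P(T|F) / P(T|M) = 0.5 / 1 = 0.5.
prior_odds = 0.6 / 0.4
lr = 0.5 / 1.0

for days in [1, 2, 5, 10]:
    odds = prior_odds * lr ** days
    p_female = odds / (1 + odds)
    print(f"day {days:2d}: P(female | trousers every day) = {p_female:.4f}")
```

Day 1 reproduces the 0.429 above, and by day 10 the naive posterior is well under 1%.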

However, here we already see that this won't be the case. The person could be a female who simply never wears a skirt to work. Since such people exist, we know that repeated observations of the same test subject are not independent.

The same goes for tests in medicine, where a person's genetic makeup could make a test always come back positive even though the person never has the disease.
    The true ideals of great philosophies always seem to get lost somewhere along the road..
