
Thread: Advantages of having population-defined parameters?

  1. #1
     spunky (TS Contributor, Vancouver, Canada)

    Advantages of having population-defined parameters?




    Hello peeps. Quick “conceptual”/weird question. While finishing up my next manuscript, I started thinking that one way to “buff it up” would be to talk about the advantages of having a population-defined parameter that some sample statistic purportedly estimates. Let me use the example I’m working on.

    Say you’re doing research on the Pearson product-moment correlation. We know that it is the parameter of a certain type of distribution (i.e. the bivariate normal) and that it has a finite-sample definition as Cov(x,y)/(sd(x)sd(y)) (Cov = covariance, sd = standard deviation). From this I can do things like generate bivariate normal data with various degrees of correlation and look at bias or efficiency if I were doing a simulation study, right?
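    To make the bias part concrete, here is a minimal sketch of that kind of simulation (plain Python with only the standard library rather than R, and the function names like `simulate_bias` are made up for illustration). It generates bivariate normal pairs with a known population rho, computes the finite-sample r on each replication, and averages r − rho across replications as a Monte Carlo estimate of the bias:

```python
import math
import random

def sample_pearson(xs, ys):
    # finite-sample Pearson r = Cov(x, y) / (sd(x) * sd(y))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return cov / (sx * sy)

def simulate_bias(rho, n=30, reps=2000, seed=1):
    # Monte Carlo estimate of E[r_hat] - rho for bivariate normal data
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs, ys = [], []
        for _ in range(n):
            # standard construction: y = rho*z1 + sqrt(1 - rho^2)*z2
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            xs.append(z1)
            ys.append(rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        total += sample_pearson(xs, ys)
    return total / reps - rho
```

    The point of the sketch is that E[r_hat] − rho is only computable because rho exists as a population-defined quantity; without it there would be nothing to subtract.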

    Ok, so now let’s say we’re working with what I’m dealing with: the population definition of the Spearman rank correlation. The population model is a little trickier here, but it can be used just as well to generate data for simulations and whatnot. Where I’m coming up a little empty-handed is in finding advantages of (or maybe the need for) a theoretical population model for this (or other) sample statistics when running Monte Carlo simulations.
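    For what it’s worth, one classical population-level value for the Spearman correlation under bivariate normality (if I have the result right) is rho_S = (6/pi)·arcsin(rho/2). A quick sanity-check sketch (plain Python, standard library only; the function names are made up) computes the sample Spearman as the Pearson correlation of the ranks and compares its Monte Carlo mean to that population value:

```python
import math
import random

def ranks(v):
    # 1-based ranks; ties are essentially impossible with continuous data
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for k, i in enumerate(order):
        r[i] = k + 1
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def spearman(xs, ys):
    # sample Spearman = Pearson correlation of the ranks
    return pearson(ranks(xs), ranks(ys))

def population_spearman(rho):
    # population Spearman under bivariate normality
    return (6 / math.pi) * math.asin(rho / 2)

def mc_mean_spearman(rho, n=500, reps=200, seed=1):
    # Monte Carlo mean of the sample Spearman on bivariate normal data
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs, ys = [], []
        for _ in range(n):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            xs.append(z1)
            ys.append(rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        total += spearman(xs, ys)
    return total / reps
```

    Notice that the same Monte Carlo question (“is the sample Spearman close to its population target?”) can only even be asked once a population definition like rho_S exists.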

    The first advantage I came up with was that, with a population model, we can investigate the properties of consistency and unbiasedness. Like I guess it’s pretty obvious that unless we know what \theta is, it’s gonna be hard to find out what E[\hat{\theta}-\theta] would be when doing Monte Carlo simulations. I was thinking about adding something like variance or efficiency there but my advisor said (and I think he has a point) that techniques like the bootstrap obviate the need to know the properties of asymptotic variance (or a population parameter in general) since, when the assumptions are satisfied, we end up with correct confidence intervals.
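    My advisor’s bootstrap point, as a sketch: a percentile bootstrap interval for the correlation needs no asymptotic variance formula at all, just resampling of (x, y) pairs (plain Python, standard library only; `percentile_boot_ci` is a made-up name for illustration):

```python
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def percentile_boot_ci(xs, ys, b=2000, alpha=0.05, seed=3):
    # percentile bootstrap CI: resample (x, y) pairs with replacement,
    # recompute r each time, take the alpha/2 and 1 - alpha/2 quantiles
    rng = random.Random(seed)
    n = len(xs)
    stats = []
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    stats.sort()
    lo = stats[int(math.floor(alpha / 2 * b))]
    hi = stats[min(b - 1, int(math.ceil((1 - alpha / 2) * b)) - 1)]
    return lo, hi
```

    Nothing in that procedure references the asymptotic variance of r, which is exactly the advisor’s point: the interval comes straight from the resampling distribution.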

    I guess my point is I’d like to say something more than “we need population-level definitions of parameters ’cuz BIAS”. But I feel like I’m not being creative enough to find other reasons.

    Help…?
    for all your psychometric needs! https://psychometroscar.wordpress.com/about/

  2. #2
     CowboyBear (TS Contributor, New Zealand)

    Re: Advantages of having population-defined parameters?


    Quote Originally Posted by spunky View Post
    I was thinking about adding something like variance or efficiency there but my advisor said (and I think he has a point) that techniques like the bootstrap obviate the need to know the properties of asymptotic variance (or a population parameter in general) since, when the assumptions are satisfied, we end up with correct confidence intervals.
    I'm not quite sure I follow why the possibility of bootstrapping would imply that we don't need to worry about efficiency? Surely we want our estimates to be as precise as possible, regardless of whether our interval estimates have correct coverage? I mean, an extreme example:

    Code: 
    a <- rbinom(n = 1, size = 1, prob = 0.95)
    if (a == 1) {
      c(-Inf, Inf)
    } else {
      c(0, 0)
    }
    ^That's a 95% confidence interval with 95% coverage for any parameter, but it's not really one you'd want to use! Efficiency still matters, I think?
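    You can even check that claim by simulation (a plain Python sketch rather than R, with made-up function names): for any theta ≠ 0 the coverage comes out at exactly 95%, and for theta = 0 it is 100%, yet the interval tells you nothing.

```python
import random

def silly_interval(rng):
    # the pathological interval: the whole real line with probability 0.95,
    # otherwise the single point {0}
    if rng.random() < 0.95:
        return (float("-inf"), float("inf"))
    return (0.0, 0.0)

def coverage(theta, reps=20000, seed=7):
    # fraction of simulated intervals that contain the true theta
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        lo, hi = silly_interval(rng)
        if lo <= theta <= hi:
            hits += 1
    return hits / reps
```

    Coverage alone can't distinguish this interval from a sensible one; only a notion of precision (width, efficiency) can.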

    Quote Originally Posted by spunky View Post
    While finishing-up my next manuscript, I started thinking that a way to “buff it up” would be to talk about the advantages of having a population-defined parameter that some sample statistic purportedly estimates.
    Ok so back to the main question... I guess at the end of the day, statistics (or the fun part of it anyway) is all about making inferences about things we haven't directly observed. That might mean using a sample to make inferences about a population, but it could also mean using observations to make inferences about the causal influences that produced those observations.

    So the value of statistics relies very heavily on the idea that they are estimates of parameters (parameters of populations or that describe causal effects). If we have a sample statistic, but don't even know what parameter that statistic is intended to estimate, then that statistic surely doesn't achieve anything beyond describing the sample at hand?
