A question about SAS power analysis!

#1

Hi, everyone! I am a new learner of SAS. I am wondering if anyone knows how to conduct a power analysis for a structural equation model (SEM) with SAS. As you can see in my attached photo, there are many items under the main Power and Sample Size menu, such as ANOVA, t tests, multiple regression, etc. However, I can't find an item for structural equation modeling or path analysis, which I need for the conceptual model below. Does anyone know which item I should choose in order to obtain the statistical power of the model below?

Best Wishes
Kat
 


spunky

Can't make spagetti
#2

There is no "ready-made" menu option for conducting a power analysis for SEM or path analysis. For advanced methodologies like those, your only option is to approximate power through a Monte Carlo simulation.
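To make the idea concrete, here is a heavily simplified sketch in Python rather than SAS macro code. It reduces the "model" to a single standardized path coefficient tested with OLS; for a real SEM you would generate data from the full model-implied covariance matrix and refit the model on each replication (e.g. with PROC CALIS in SAS). The sample size, effect size, and replication count below are placeholder assumptions, not recommendations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, beta, alpha, reps = 100, 0.3, 0.05, 2000  # assumed N, path coefficient, level, replications

rejections = 0
for _ in range(reps):
    # Generate one sample from the assumed population model: y = beta*x + error,
    # with both variables standardized so beta is a correlation-scale effect.
    x = rng.standard_normal(n)
    y = beta * x + np.sqrt(1 - beta**2) * rng.standard_normal(n)

    # OLS slope, its standard error, and a two-sided t test of H0: beta = 0
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    resid = y - b * x - (y.mean() - b * x.mean())
    se = np.sqrt(resid @ resid / (n - 2) / ((n - 1) * np.var(x, ddof=1)))
    p = 2 * stats.t.sf(abs(b / se), df=n - 2)
    rejections += p < alpha

power = rejections / reps  # proportion of replications that detected the effect
print(f"empirical power ≈ {power:.3f}")
```

The logic carries over unchanged to a full SEM: simulate data under the model you believe is true, fit the model each time, and count how often the test of interest rejects.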
 
#3
Many, many thanks, spunky. In that case, it seems the only way to do this in SAS is to write a macro, which I am not so familiar with, haha. But anyway, thanks for your help and explanation.
 
#4
May I ask you one more question? I am reading a paper that reports the following:

"we tested the structural mode, in which social media use intensity was defined as an exogenous factor with the rest of the other four endogenous. Results indicated a satisfactory fit of the pro-posed model:v2=88.20 (df=62,p=.02),v2/df=1.42,GFI=0.95, CFI=0.98, NNFI=0.98, SRMR=0.07,RMSEA=0.04 (90 % CI 0.02, 0.06). A power analysis using MacCallum et al.’s (1996) SAS program indicated that the statistical power of the models were at 0.90. "

I am not so sure why the researchers needed to do two tests on their model. May I ask what the difference is between the test of fit of the proposed model and the statistical power of the model?

Many thanks
 

spunky

Can't make spagetti
#5
Because in SEM the \(\chi^{2}\) test of fit (as well as all derived measures of fit such as CFI, RMSEA, TLI, etc.) is actually a "lack of fit" test, it is important to know whether or not you have a large enough sample to detect misfit. Otherwise, you could claim your model "fits" the data when in fact the test is underpowered to detect misfit. Nevertheless, **IF** you have enough power to detect misfit (the power analysis part) **AND** you can offer statistical evidence (the \(\chi^{2}\) test part) that your model still fits, then you have a much stronger claim that your results are actually true as opposed to an artifact of lack of power.

Btw, I am familiar with MacCallum et al.'s (1996) procedure, and without knowing which RMSEA value they used for the null hypothesis, we have no idea whether those authors actually had enough statistical power. A good reviewer should have asked "statistical power to detect what?"

But I guess we wouldn't be in a Replication Crisis in psychology or the social sciences if the statistical rigour of peer review were a little more strict. ¯\_(ツ)_/¯
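For what it's worth, the MacCallum et al. (1996) calculation itself is short enough to sketch outside SAS. It compares two noncentral \(\chi^2\) distributions whose noncentrality parameters come from the null and alternative population RMSEA values. The df = 62 matches the quoted study, but the sample size and RMSEA pair below are made-up illustration values, since the quoted passage reports neither its N nor the null RMSEA it used:

```python
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power of MacCallum et al.'s (1996) RMSEA-based test of close fit.

    H0: population RMSEA = rmsea0 ("close" fit)
    Ha: population RMSEA = rmsea_a ("mediocre" fit)
    """
    lam0 = (n - 1) * df * rmsea0 ** 2     # noncentrality under H0
    lam_a = (n - 1) * df * rmsea_a ** 2   # noncentrality under Ha
    crit = ncx2.ppf(1 - alpha, df, lam0)  # rejection threshold set under H0
    return ncx2.sf(crit, df, lam_a)       # P(reject H0 | Ha is true)

# df = 62 as in the quoted study; n = 200 is an assumed sample size
print(f"power ≈ {rmsea_power(200, 62):.2f}")
```

This is exactly why the reviewer's question matters: the result depends entirely on the `rmsea0`/`rmsea_a` pair plugged in, so "power = 0.90" is not interpretable without them.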
 
#6
Many, many thanks, spunky. I get it now. You are brilliant! ^^