# AMOS - interpretation of bootstrap indirect effects for one-tailed hypothesis

#### Akwasi

##### New Member
Hello all,

I'm testing for mediation using bootstrapping in AMOS. This method reports two-tailed significance, but my hypotheses are directional. With bootstrapping, significance is assessed from confidence intervals, though a p-value is also provided. For a regression, I would normally divide the two-tailed p-value by two to get the one-tailed result. Would that be appropriate for tests of indirect effects too? In my case the two-tailed p-value is .068, so it is either marginal or significant depending on whether it is interpreted as two- or one-tailed.

Also, since for these tests the CI rather than the p-value is reported, is there an equivalent adjustment for the confidence intervals? I was told I could calculate the confidence limits as the indirect-effect estimate +/- the standard error times the standard normal quantile corresponding to the desired Type I error rate (1.645 for a one-tailed test). However, when I do this, the CI I get differs from what AMOS produces. I can't tell whether I have misunderstood how to calculate the CI, or whether the AMOS numbers reflect a bias correction that I am not accounting for when I calculate the CI myself.
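For what it's worth, the normal-approximation interval described here can be reproduced by hand and compared against a percentile bootstrap interval. A minimal sketch in Python, with purely hypothetical simulated data standing in for the poster's model (AMOS's bias-corrected intervals additionally shift the percentiles based on the bootstrap distribution, which is one reason a hand-computed normal-approximation CI will not match AMOS's output exactly):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mediation data X -> M -> Y, simulated purely for
# illustration -- substitute your own raw data / model here.
n = 200
x = rng.normal(size=n)
m = 0.3 * x + rng.normal(size=n)
y = 0.3 * m + rng.normal(size=n)

def indirect(x, m, y):
    """a*b indirect effect from two OLS regressions."""
    # a: slope of M on X
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    # b: slope of Y on M, controlling for X
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b

est = indirect(x, m, y)

# Nonparametric bootstrap of the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], y[idx])

se = boot.std(ddof=1)

# Normal-approximation 90% CI (estimate +/- 1.645 * SE), as described above
norm_ci = (est - 1.645 * se, est + 1.645 * se)

# Percentile 90% CI -- closer to what bootstrap software reports; a
# bias-corrected interval additionally shifts these percentiles, so the
# two intervals can legitimately differ
pct_ci = (np.percentile(boot, 5), np.percentile(boot, 95))
```

The point of the comparison is that even with the same bootstrap samples, the normal-approximation and percentile intervals need not agree, and bias correction moves the percentile endpoints further still.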

Akwasi

#### CowboyBear

##### Super Moderator
Hi there, welcome to the forum! Sorry for the delay in releasing your post - it was caught in our spam filter for some reason.

> In my case the two-tailed p-value is .068, so it is either marginal or significant depending on whether it is interpreted as two- or one-tailed.
I'd like to ask you a simple but important question here: Did you decide that a one-tailed test was appropriate before or after you saw this p value of 0.068?

#### hlsmith

##### Omega Contributor
I haven't used AMOS before, but can you post your output so we can see what you are writing about?

Thanks.

#### Akwasi

##### New Member
Thanks Cowboybear

As mentioned, my hypotheses are directional - that is, one variable is predicted to increase the other rather than just be related to it - so my understanding is that a one-tailed test is appropriate. I have generally been reporting one-tailed results for my regressions. However, AMOS does not allow you to specify a one-tailed test, and with bootstrapping for indirect effects it is usually the CI, not the p-value, that is reported. I received an email yesterday suggesting that the right thing to do is to use a 90% confidence interval for the test rather than 95% and ignore the p-values. If you know otherwise, I'd be interested in your experience.

#### hlsmith

##### Omega Contributor
Yeah, using 90% CI seems reasonable. Your output was a little cryptic to me. I was expecting a more traditional/simpler looking mediation path!

#### CowboyBear

##### Super Moderator
> Thanks Cowboybear
>
> As mentioned, my hypotheses are directional - that is, one variable is predicted to increase the other rather than just be related to it - so my understanding is that a one-tailed test is appropriate. I have generally been reporting one-tailed results for my regressions. However, AMOS does not allow you to specify a one-tailed test, and with bootstrapping for indirect effects it is usually the CI, not the p-value, that is reported. I received an email yesterday suggesting that the right thing to do is to use a 90% confidence interval for the test rather than 95% and ignore the p-values. If you know otherwise, I'd be interested in your experience.
Yep, in technical terms you could just use a 90% confidence interval to make the decision; that is more or less equivalent to a one-tailed test at a 0.05 alpha level.
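That equivalence is easy to check numerically. A small sketch in Python, where the estimate and standard error are invented illustrative numbers (chosen so the two-tailed p lands near the .068 discussed in the thread), not anything from the poster's actual output:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Made-up indirect-effect estimate and bootstrap SE for illustration
est, se = 0.12, 0.066
z = est / se

p_two = 2 * (1 - phi(z))   # two-tailed p (what AMOS reports)
p_one = p_two / 2          # one-tailed p, effect in the predicted direction

# Lower bound of the two-sided 90% CI: it falls above zero exactly when
# the one-tailed p is below .05 (up to rounding of the 1.645 quantile)
lower = est - 1.645 * se
```

Here halving the two-tailed p and checking whether the 90% CI excludes zero give the same decision, which is why the two pieces of advice in the thread amount to the same thing, assuming the sampling distribution is roughly normal.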

The reason I ask when you made that decision is that choices like one- versus two-tailed testing really need to be specified in advance (and preferably publicly committed to in a preregistration). There are many decisions to make in any data analysis, and when we have the flexibility to switch from two- to one-tailed testing after observing a p value just over 0.05 - or to make any of a myriad of other decisions contingent on the data - it becomes extremely easy to produce a "statistically significant" finding in favour of a hypothesis even when it is false.