LGC with Experimental Manipulations

#1
Hey gang, first time posting.

So, I have 9 time points of continuous learning data for 200-ish people. I also have two manipulations (so the 200 people are split into four groups, with unequal group sizes). I also collected several personality measures via questionnaires. I have a little experience with SEM, but not a ton. I have been thinking about using Latent Growth Curve (LGC) modeling to describe the learning curves. I have successfully gotten a basic intercept/slope model to work, but I am confused about how to work experimental manipulations into an SEM framework.

Ultimately, what I would like to demonstrate is that people who score high on one of the personality measures have different learning curves than people who score low. I have strong theoretical reasons to believe this relationship exists.

I realize this may be more of a conversation than a question and answer deal so I thank anyone that is willing to help me out.

Specifically, how do I work manipulations into SEM? Must I do group comparisons?

How do I investigate which items predict learning best? By seeing which items load most heavily on the latent construct?

I will definitely provide more details if needed, I just don't know what is relevant and what is not.

I am using LISREL but may switch to Mplus.
 

Lazar

Phineas Packard
#2
I have not worked with LISREL for some time, but there should be parameters for the intercept and slope. You can predict those with your treatment variable. In Mplus the code would be something like:

I S | t1@0 t2@1 t3@2 t4@3;    ! intercept (I) and linear slope (S) growth factors with fixed time scores
I S ON treatment;             ! regress both growth factors on the treatment dummy

This is the simplest way to do it. Another option is a multi-group approach, but I do not see a lot of advantages there.
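If you did want to try the multi-group route anyway, a minimal sketch (assuming a grouping variable called cond with two conditions and four repeated measures t1-t4; all names are placeholders) would be along these lines:

VARIABLE:
  NAMES = t1-t4 cond;
  GROUPING = cond (1=control 2=treatment);   ! hypothetical condition codes
MODEL:
  i s | t1@0 t2@1 t3@2 t4@3;                 ! same growth model estimated in each group

The growth factor means and variances can then be compared or constrained across groups, but as above, simply regressing the growth factors on dummy-coded conditions is usually easier to work with.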

Not sure I get your second question. Could you elaborate? It sounds as if you are doing a fully latent LGC, i.e. a growth curve using latent measures of personality or whatever. Is that right?

EDIT: On re-reading, I am a little confused. It seems you want to look at the relationship between learning and personality. What, then, is your treatment?
 
#3
Yes, so I have run the basic level/slope model. I can't figure out how to enter the manipulations into the analysis. I have dummy-coded the variables and I want them to predict the latent slope (change) factor.

The second question is in regard to the personality variable predicting the change. How do I determine which items predict change the best?
 

Lazar

Phineas Packard
#4
Well, you can enter the manipulations (which I am assuming are treatment conditions) as above, i.e. predict the intercept and slope variances with your treatment conditions. It would be worth knowing, however, what exactly your treatment conditions were.

I am going to assume that you have a measure of the Big 5 or something at baseline (the causal ordering is highly dubious here, but OK), and I am assuming that by "item" you mean factor (I assume you are using manifest scale scores). Then you do as above: predict the intercept and slope variances with the 5 personality factors. You can then use the delta method (accessed in Mplus via the MODEL CONSTRAINT command, for example) to see whether a particular factor is a stronger predictor of the slope than another.
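For example, a minimal sketch of that kind of comparison in Mplus might look like the following, where bi and na are placeholder names for two of the personality scale scores and t1-t4 stand in for the repeated measures:

MODEL:
  i s | t1@0 t2@1 t3@2 t4@3;
  s ON bi (b1)            ! label the regression of the slope on scale 1
       na (b2);           ! label the regression of the slope on scale 2
MODEL CONSTRAINT:
  NEW(diff);
  diff = b1 - b2;         ! delta-method SE and significance test for the difference

Note that comparing raw coefficients like this only makes sense if the scale scores are on comparable metrics, so you may want to standardize them first.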

If you could lay out your research in considerably more detail then I think I can give better advice (i.e. what is the treatment, what is your design, do you have multiple time points for personality or just one, what are your hypotheses).
 
#5
Thanks Lazar. Is there a way to officially thank you? I see the 'score' or whatever on the side of the site.

So, we have 9 time points; each time point is a summation of 5 trials. We are running eye blink conditioning. We manipulated whether participants got additional feedback on their performance, and the delay between CS and US. Feedback did nothing; an RM-ANOVA suggests that delay had a significant effect.

We also administered 4 questionnaires: behavioral inhibition (BI), negative affect (NA), social inhibition (SI), and trait anxiety (TA). From previous research in our lab and based on theory, we believe that these measures should predict conditioned responses. The four scales correlate between .2 and .8 with each other. As with all personality measurement, it seems, there is disagreement about the constructs' domains as well as their neurobiological antecedents.

What I would ultimately like to investigate is which items from the four scales best predict CR learning. In some respects I'm trying to run an EFA where the latent factor is specifically the construct that best predicts eye blink conditioning. We then have other data sets on which we could potentially confirm this.

This would of course be beneficial for 2 reasons. First, we could reduce the number of items we need to administer to participants. Second, we would gain a better understanding of the personality type most predictive of avoidance learning (e.g. behavioral inhibition or negative affect).

Does this make sense? Anything I should elaborate on more?
 

Lazar

Phineas Packard
#6
I think the problem is that you keep using the word "item", but the rest of what you write sounds like you are talking about "scales". As such I will split my advice as follows:

1. If you are concerned that you have too many items but want to retain the number of scales, then I would do the measurement work (EFA and CFA) separately from the LGC model. Personality instruments tend to contain a heap of redundant items, so some careful measurement work may allow you to reduce the number of items to a more manageable size (note that the development of short scales is a cottage industry, so chances are someone has already shortened the scales you are using).
2. If you are calculating manifest scale scores and then using those scale scores in the analysis, there are a number of ways forward. The simplest is just to predict the slope variance with all of the personality variables (plus the treatment-by-personality interaction, which I guess will be what you are most interested in); you could then drop the variables that are poor predictors in future studies (a minimal sketch of this option appears after this list). Second, you could take a single principal component (i.e. develop an index of negative well-being or whatever) and use that component to predict the slope. Obviously some of the scales will contribute to this index more than others, and that will give you some idea of which are most important in your context. Third, you could take a mixture-modelling/person-centered approach and identify clusters of people with similar profiles on your personality variables. You could then use profile membership as a predictor of the slope. I suppose if, in the profiles you pull out, two or more variables differ across clusters only in level, you could retain only the one you think is most important.
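To make the first of those options concrete, here is a minimal Mplus sketch, assuming manifest scale scores bi, na, si, and ta, a dummy-coded delay condition, nine trial-block scores t1-t9, and a hypothetical interaction term created in DEFINE (all variable names are placeholders):

VARIABLE:
  USEVARIABLES = t1-t9 delay bi na si ta bixdel;       ! bixdel is created in DEFINE below
DEFINE:
  bixdel = bi*delay;                                   ! example delay-by-personality interaction
MODEL:
  i s | t1@0 t2@1 t3@2 t4@3 t5@4 t6@5 t7@6 t8@7 t9@8;  ! linear growth over the 9 blocks
  i s ON delay bi na si ta bixdel;                     ! condition, personality scales, interaction

The coefficients on the personality scales (and on the interaction term) then tell you which of the four measures add anything to the prediction of the slope, and the weaker ones could be dropped in future studies.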

There is danger in all of these approaches. When you move from a confirmatory approach to an exploratory, data-driven one, there is a very real chance that any scale reduction you come up with will be idiosyncratic to the data you collected and simply will not replicate in further samples.