Analysis of a learning curve

marsh

Guest
#1
I am designing a study in which each subject will learn to use an interface. There will be several dependent variables (performance on a few tasks of interest) that can be measured at fixed intervals. So essentially I expect to produce some learning curves (on the means) in which some variables will increase (and should level off eventually) while others will decrease (trending toward zero). I would like to quantify when a variable reaches a point of diminishing returns. Since I will have a bunch of data points at pre-determined times, I could do a t-test on each pair of adjacent measurements to see when improvement is no longer significant, but that seems like quite a naive method, and I'm trying to get away from my old habits of using ANOVA and t-tests on everything.

I've done a bunch of searching this afternoon for how to quantify a time series or learning curve, but haven't found quite what I'm looking for. I guess fitting the curve is easy, but I need to figure out how to statistically say when a task is "learned." Hopefully somebody will have some ideas. I don't even know exactly what I should be searching for.

Thanks!
 
#2
I imagine you would have to formulate some test of retention, then perform a statistical analysis to ascertain whether the observed levels of retention were statistically significant?

Sorry, that's quite vague. Sounds like an interesting study though :)
 

Jake

Cookie Scientist
#4
I would like to quantify when a variable reaches a point of diminishing returns.
You are going to have to be precise about what you mean by this. The most obvious interpretation of "diminishing returns" is that the slope of performance over time is decreasing (i.e., the second derivative is < 0). By that definition, the answer will depend mostly on what kind of function you choose to fit to the data, and in all likelihood there will be "diminishing returns" at all time points. Maybe what you want instead is to identify the point at which the slope falls below some pre-specified value, where this value has presumably been determined from some kind of cost-benefit analysis weighing the cost of further practice against the predicted increase in performance.
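For concreteness, a minimal sketch of that slope-threshold idea in Python (the exponential form, the scipy usage, and every number here are illustrative assumptions, not anything from your study):

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative saturating learning curve: y(t) = a - b*exp(-c*t)
    # a = asymptote, b = range from start to asymptote, c = learning rate
    def learning_curve(t, a, b, c):
        return a - b * np.exp(-c * t)

    # Made-up mean performance at fixed measurement times (minutes)
    t = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
    y = np.array([2.1, 3.4, 4.2, 4.8, 5.1, 5.4, 5.5, 5.6, 5.65, 5.7])

    (a, b, c), _ = curve_fit(learning_curve, t, y, p0=(6.0, 4.0, 0.5))

    # The fitted slope is dy/dt = b*c*exp(-c*t); setting it equal to a
    # pre-specified threshold s and solving gives t = ln(b*c/s)/c.
    s = 0.05  # slope below which further practice is judged not worthwhile
    t_star = np.log(b * c / s) / c
    print(f"slope falls below {s}/min at t = {t_star:.1f} min")

The exponential is just one common choice; a power function would work the same way, only with a different closed form for the slope.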
 
marsh

Guest
#5
You are going to have to be precise about what you mean by this. The most obvious interpretation of "diminishing returns" is that the slope of performance over time is decreasing (i.e., the second derivative is < 0). By that definition, the answer will depend mostly on what kind of function you choose to fit to the data, and in all likelihood there will be "diminishing returns" at all time points. Maybe what you want instead is to identify the point at which the slope falls below some pre-specified value, where this value has presumably been determined from some kind of cost-benefit analysis weighing the cost of further practice against the predicted increase in performance.
I guess you're pretty much right as to what I want. I'll provide a bit more detail...

We have a computer interface that is quite natural once a person gets used to it, but at first it is pretty hard. We are running studies using this interface, and we want to figure out how to ensure that people are trained to use it to a sufficient degree of naturalness that it does not hinder performance on our study tasks of interest (assuming finite cognitive resources). So we've set up a structured training routine in which the subject goes through a maze while also doing secondary tasks (remembering things).

Some of my tasks are movements through a virtual reality maze:
1) average speed during 1-minute period (higher is better)
2) time taken to come to a complete stop at the end of each 1-minute period (lower is better)
3) number of collisions with maze walls (lower is better)

Meanwhile, the user will be remembering a sequence of items (numbers, for example):
1) whether the person remembered the sequence incorrectly (probably binomial; lower is better)

So the premise is that the user will have trouble using the interface at first and then will gradually improve. Meanwhile the user's cognitive resources will be going to the interface. As the user improves, we should also see improvement on the memory tasks because the user will be able to devote more resources to them. The study is somewhat exploratory but we want to determine:
1) How much training is needed before an average user is able to use the interface at a high level? ("high" would be when there is not much more learning happening, i.e., a low slope)
2) Is additional training needed before the user has sufficient resources to also perform well on the memory task? (memory tasks should have nearly perfect performance at end of curve)
3) How can we tell when a user is trained sufficiently?

So we don't exactly have a cost-benefit analysis, but clearly we also don't want people training for weeks; we normally train subjects for 30 minutes or so. My naive idea is that I can do a t-test between each pair of consecutive means to see at what point in the learning curve(s) we no longer have statistically significant improvement. Obviously this could be a different point for each dependent variable's curve, and that's fine, because that's what makes this interesting. I think the t-test idea is technically valid, but I imagine there may be more interesting approaches, considering that we are working with a curve that can be fitted.
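For what it's worth, the consecutive-t-test version is at least easy to write down. A minimal sketch (entirely made-up data; paired tests between adjacent intervals):

    import numpy as np
    from scipy.stats import ttest_rel

    # scores[i, j] = subject i's performance in measurement interval j
    # (made-up data: 20 subjects x 10 intervals, improving then flattening)
    rng = np.random.default_rng(0)
    gains = rng.normal(loc=np.linspace(1.0, 0.0, 10), scale=0.5, size=(20, 10))
    scores = np.cumsum(gains, axis=1)

    # Paired t-test between each pair of adjacent intervals; the first
    # non-significant pair is where improvement stops being detectable.
    for j in range(scores.shape[1] - 1):
        stat, p = ttest_rel(scores[:, j + 1], scores[:, j])
        print(f"interval {j} -> {j + 1}: t = {stat:.2f}, p = {p:.3f}")

(Of course, "no longer significant" can also just mean "not enough power at that step," which is part of why this feels naive to me.)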

Thank you!
 

Jake

Cookie Scientist
#6
A simple approach would be to decide on some target performance levels for your different measures, fit the curves to the performance data, and then calculate the time point at which performance is predicted to reach the target level (i.e., plug the target performance levels into the fitted model equations and solve for time).
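For example, with the same illustrative exponential form as in #4, the target time has a closed form (all numbers below are made up for the sketch):

    import numpy as np
    from scipy.optimize import curve_fit

    def learning_curve(t, a, b, c):
        return a - b * np.exp(-c * t)  # a = asymptote, b = range, c = rate

    # Made-up mean performance at fixed measurement times (minutes)
    t = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
    y = np.array([1.9, 3.1, 3.9, 4.5, 4.8, 5.0, 5.15, 5.2])

    (a, b, c), _ = curve_fit(learning_curve, t, y, p0=(5.5, 4.0, 0.5))

    # Invert the fitted equation at a chosen target level y*:
    # y* = a - b*exp(-c*t)  =>  t = ln(b / (a - y*)) / c   (requires y* < a)
    y_target = 5.0
    t_needed = np.log(b / (a - y_target)) / c
    print(f"predicted to reach {y_target} after {t_needed:.1f} min of training")

For the binomial memory measure you would fit something like a logistic curve instead, but the inversion step is the same idea.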