I'm lost - if we have a time series dataset, like the stock price of some company, why is it unacceptable to take the price as an input variable, lag it n times, and then use those lagged prices as predictors of the future price in a regression model?
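For concreteness, here is roughly what I have in mind (a minimal sketch in Python using pandas/statsmodels on a synthetic random-walk "price" series, since I can't share real data; the number of lags and the variable names are just placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for a real price series (random walk).
rng = np.random.default_rng(0)
price = pd.Series(100 + np.cumsum(rng.normal(size=500)), name="price")

# Lag the price n times and use the lags as regressors.
n_lags = 3
lags = pd.concat({f"lag{k}": price.shift(k) for k in range(1, n_lags + 1)}, axis=1)
data = pd.concat([price, lags], axis=1).dropna()

# Plain OLS of the price on its own lags.
X = sm.add_constant(data[[f"lag{k}" for k in range(1, n_lags + 1)]])
fit = sm.OLS(data["price"], X).fit()
print(fit.summary())
```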
I know this has something to do with temporal autocorrelation, since regressors that are close together in time have a large covariance, but I can't quite understand what exactly the problem is and what to do about it. Isn't the AR model exactly what I suggested?
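To be explicit about what I mean by "the AR model", I'm thinking of the standard AR(p) form:

$$y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t,$$

which to me looks like nothing more than a regression of $y_t$ on its own lags.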