Estimation of count regressors that tend to only increase with time

kiton

Hello dear forum members,

I am seeking your comments on the following issue that I noticed in the project I am working on. In 2014 I collected cross-sectional data to estimate the effect of a number of Internet-related metrics on some performance measure. The model was:

(1) y(i) = a + b1*X1(i) + b2*X2(i) + b3*X3(i) + u(i)

where y is a rating score ranging from 1 to 100 with one decimal; X1 is a commonly used (time-invariant) control; X2 is a "star" rating ranging from 1 to 5 with one decimal; and X3 is, for example, the number of Twitter followers.

Now I also have new 2016 data for y, X2, and X3. Noticeably, both y and X2 (as expected, given their nature) changed in both directions: some went up, some went down. However, X3, the number of followers, only went up (at different rates for different observations, but definitely only up). The new model to be estimated:

(2) y(it) = a + b1*X1(i) + b2*X2(it) + b3*X3(it) + u(it)

My initial approach for estimating the effects of the time-varying regressors was a fixed-effects specification. Yet I am somewhat concerned that the estimated effect of X3 (number of followers) will be biased, since X3 only increases over time for all observations.

Is my logic here correct or misleading? Should I instead use, for example, the first difference of X3? Or something else?
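To make the question concrete, here is a minimal simulated sketch of my two-period setup (all numbers are made up; `b2`, `b3`, and the growth rates are hypothetical). With only two waves, the fixed-effects (within) estimator is numerically identical to the first-difference estimator, and differencing also wipes out the time-invariant X1(i) along with the unit effects:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of cross-sectional units

# Simulated two-period panel (2014 and 2016)
alpha = rng.normal(size=n)                 # unobserved unit effects
x2 = rng.uniform(1, 5, size=(n, 2))        # "star" rating: moves both ways
growth = rng.uniform(0.05, 0.50, size=n)   # followers only go up, at varying rates
x3_2014 = rng.uniform(100, 10_000, size=n)
x3 = np.log(np.column_stack([x3_2014, x3_2014 * (1 + growth)]))  # log followers

b2, b3 = 2.0, 1.5                          # "true" effects in the simulation
y = alpha[:, None] + b2 * x2 + b3 * x3 + rng.normal(size=(n, 2))

# First differences: alpha(i) and any X1(i) drop out entirely.
dy = y[:, 1] - y[:, 0]
dX = np.column_stack([x2[:, 1] - x2[:, 0], x3[:, 1] - x3[:, 0]])
beta_fd, *_ = np.linalg.lstsq(dX, dy, rcond=None)
print(beta_fd)  # estimates of (b2, b3)
```

In this simulation every unit's X3 difference is positive, yet the differences still vary across units, which is what the estimator actually uses for identification.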

Your comments would be greatly appreciated :)