Seeking advice on 3-level HLM

#1
Hello!

I am studying the association between product quality as measured by an expert intermediary (overallScore) and as measured by online user ratings. In particular, I hypothesize that this association depends on the longevity of product use. For instance, I would expect a positive correlation if the user rating were given right after the product purchase, and a negative correlation if it were given after considerable use of the product. Longevity of use (moder) is captured at four levels (1 = < 1 month, 2 = 1-3 months, 3 = 3-6 months, and 4 = > 6 months). The descriptive statistics of the three variables are given below:
[Screenshot: descriptive statistics of overallScore, rating, and moder]

Further, below I include a screenshot of the binned scatterplot of the relationship between overallScore and rating by levels of moder, which provides some support for my initial assumption:
[Screenshot: binned scatterplot of rating against overallScore by levels of moder]

Now, my dataset has the following structure:
– User ratings are captured at the individual level for a given product, identified by a name_id (the total number of products is 109). The minimum number of ratings per product is 3, and the maximum is 583.
– Expert intermediary overallScore is captured at the name_id level.
– And finally, each name_id is associated with a product category_id.
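A quick way to confirm this nesting in Stata (a sketch, assuming the variable names above):

    * each name_id should map to exactly one category_id
    bysort name_id (category_id): assert category_id[1] == category_id[_N]
    * number of ratings per product (should range from 3 to 583)
    bysort name_id: generate n_ratings = _N if _n == 1
    summarize n_ratings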

As far as I understand, given this data structure each rating is nested within a name_id, and each name_id is nested within a category_id. Therefore, to formally test my hypothesis on the moderating role of longevity of use, I fit a linear hierarchical model using the -mixed- command in Stata:
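The call was along the following lines (a sketch reconstructed from the description above; the exact covariates and options in my actual specification may differ):

    * three-level random-intercept model: ratings nested in products nested in categories
    mixed rating c.overallScore##i.moder || category_id: || name_id: , reml
    estat icc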

Note (1): In my data the expert rating is technically published first, and users then have the opportunity to provide their evaluations; therefore, I am using the user rating as the outcome here.
Note (2): I did run several robustness tests with estimators more suitable for a bounded interval outcome (i.e., 1-5).
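For instance, one such check treats the 1-5 rating as ordinal via a multilevel ordered logit (a sketch; my actual robustness specifications may have differed):

    meologit rating c.overallScore##i.moder || category_id: || name_id: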

Does this seem like a plausible approach? I would appreciate your advice.
 
#3
I keep thinking about the issue at hand, and here are some additional elaborations. Given the structure of the data:

Level 1: Individual reviews — includes user ratings and duration of use
Level 2: Products — includes expert rating
Level 3: Product categories

Rule #1 in HLM is that the outcome is always at Level 1. I believe this is exactly where the discrepancy comes in: there are multiple user ratings per expert rating, so within a given product the expert rating shows no variation across the user ratings. By that logic, it seems I should by default aggregate all reviews at Level 2.
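Such an aggregation would look something like this (a sketch; mean_rating is my placeholder name, and note that collapsing to the product level discards the within-product variation in moder, which is exactly what my hypothesis needs):

    * collapse reviews to one observation per product (Level 2)
    preserve
    collapse (mean) mean_rating = rating, by(category_id name_id overallScore)
    regress mean_rating overallScore
    restore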

Let's use a classic example and assume students nested within classrooms and classrooms nested within schools. I would need students' individual performance scores as the outcome, plus student-level controls and school-level controls. However, in my case, if I (A) use the expert rating as the outcome, then it is like using student-level data to predict a classroom-level outcome; or (B) use the user rating as the outcome, then it is like using classroom-level data to predict a student-level outcome.

Given all this… I somewhat doubt that I have a "true" HLM model.

Relevant input is still welcome.
 
#4
As it turns out, the case I am observing in my data relates to the estimation of cross-level interaction effects in multilevel models. For those facing challenges similar to mine, I recommend a great paper: Aguinis, H., Gottfredson, R. K., & Culpepper, S. A. (2013). Best-practice recommendations for estimating cross-level interaction effects using multilevel modeling. Journal of Management, 39(6), 1490-1528.
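In that spirit, the test would first allow the slope of the Level-1 moderator to vary across products and then add the cross-level interaction with the grand-mean-centered Level-2 predictor. A sketch, with my own simplifications (e.g., treating moder as continuous for the random-slope part):

    * grand-mean center the Level-2 predictor
    summarize overallScore, meanonly
    generate overallScore_c = overallScore - r(mean)
    * random slope for moder at the product level + cross-level interaction
    mixed rating c.overallScore_c##c.moder || category_id: || name_id: moder, covariance(unstructured) reml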