oh no, and i mean i totally welcome the discussion because i think it's a very relevant one for any of us who work as "knowledge translators" between statisticians/quantitative methodologists and everyone else... besides, we have a history here of hijacking other people's posts and taking them off on weird tangents.
you touch on a very important point there when you mention it's possible to use less sophisticated approaches (which is always desirable) with the huge caveat that they have to be "accurate enough for your purposes". the point that jpkelley and i are trying to make is that statistical estimates from Berely's data derived through traditional correlational methods (regular OLS regression, pearson's correlation, etc.) could end up being so biased that analyzing them through simple approaches would do more harm than good. i'm not sure what it's called in the ecological sciences (where jpkelley is our local expert), but here in the province of social sciences/educational measurement/psychometrics it's known as the "unit of analysis" problem, which is perfectly exemplified by the school-setting paradigm: should the analysis be done at the student level? classroom level? school level? district level? performing the analysis while ignoring this clustering of the data (which can arise naturally, as in the school example, or by design, as in Berely's case) produces such bad estimates that a whole new area of statistics called hierarchical linear models/multilevel models was created just to tackle this problem. so just for starters, we already know the estimates derived from averaged data will not be accurate enough, because there's about 20 years' worth of analytical, simulation and real-data studies in the academic literature backing that up.
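just to make the clustering point concrete, here's a minimal sketch in python with made-up numbers (NOT Berely's actual data; the school/student setup and the column names are purely hypothetical): within every simulated school the student-level slope is +1, but schools with a higher average x also start from a lower baseline, so an ordinary OLS run on the school averages gets the relationship badly wrong.

```python
# hypothetical simulated data (not Berely's numbers): 20 "schools" x 30 "students",
# true within-school slope = +1, but school baselines drop as the school's average x rises
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for school in range(20):
    school_mean_x = rng.normal(0, 2)      # schools differ in their average x
    baseline = -1.5 * school_mean_x       # higher-x schools start from a lower baseline
    for _ in range(30):
        x = school_mean_x + rng.normal(0, 1)
        y = baseline + 1.0 * x + rng.normal(0, 1)
        rows.append({"school": school, "x": x, "y": y})
df = pd.DataFrame(rows)

# the "simple" analysis: average within each school, then run ordinary OLS on the means
means = df.groupby("school", as_index=False).mean()
naive = smf.ols("y ~ x", data=means).fit()
print("slope from OLS on school averages:", round(naive.params["x"], 2))  # negative, wrong sign
```

the student-level relationship is positive in every single school, yet the averaged analysis reports a negative one. that's the kind of "not accurate enough" we're talking about.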
which takes us to the second point. i don't think denny borsboom has ever dealt with corporate america (my assumption only. i have never asked him for his CV. he is a university professor in amsterdam), but he is widely acknowledged as one of the most important living figures in quantitative analysis for the social sciences and *the* most brilliant psychometrician of the post-IRT generation. the point you make is very good, but i think that's true of any analysis. gov't agencies and private industry usually care about results, and as someone who's done internships at ETS (developers of the SAT, the GRE and pretty much all the major standardised tests used in america and the world today) i understand these people end up wanting the "what" more than the "how did you get that". just as jpkelley said... why would you even mention a poisson distribution in the first place? or regression? or even the variance? you're not talking to experts here; what's relevant to them is the results of the analysis, not how you got there... because how you got there requires a certain degree of technical knowledge most people are not interested in acquiring.
so i ask you... let's assume you're my boss and i'm your number cruncher. what if i asked you: "i can analyze this in a very simple way. it will be wrong and mostly useless, but you'll be able to follow the logic of what i did perfectly. or i can do a super-convoluted analysis that will get you excellent estimates, but you won't understand *bleep* of what i did. which one do you prefer?" and if we encourage people to do the wrong thing just because it's easy, we're not gonna get very far, are we?
albert einstein once said "make things as simple as possible... but not simpler". i mean, i could also try to fit some incredibly bizarre likelihood equation with strange discontinuities to Berely's data and probably get estimates just a tiiiiny bit better than would come out of a mixed-effects regression. but the improvement from a regular OLS regression/correlation on averaged data to a mixed-effects regression is so substantial that it's called for, even if it's more complicated to implement and/or understand.
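and to give a sense of how big that jump is: continuing the made-up simulation from the sketch above (same hypothetical `df`, nothing from Berely's actual data), the mixed-effects version is a couple of lines in statsmodels and recovers the within-school slope that the averaged-data OLS got completely wrong.

```python
# continues the simulated df from the sketch above: random intercept per school,
# with students (not school averages) as the unit of analysis
import statsmodels.formula.api as smf

mixed = smf.mixedlm("y ~ x", data=df, groups=df["school"]).fit()
print("slope from mixed-effects model:", round(mixed.params["x"], 2))  # close to the true +1
```

more complicated under the hood, sure, but the deliverable is still just a number, and this time it's a usable one.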