Adjusted R squared interpretation

#1
I've always had the adjusted R squared interpretation explained to me in the following way (quoted from Wikipedia):

Adjusted R2 is a modification of R2 that adjusts for the number of explanatory terms in a model. Unlike R2, the adjusted R2 increases only if the new term improves the model more than would be expected by chance. The adjusted R2 can be negative, and will always be less than or equal to R2.
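For reference, the usual formula behind that adjustment (I believe it's the one Wikipedia is describing) is

$$R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}$$

where n is the sample size and p is the number of predictors. The (n − 1)/(n − p − 1) factor charges a price for every added term, so a new predictor only raises the adjusted value if it improves the fit by more than chance alone would.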

However, I've been reading Discovering Statistics Using SPSS by Andy Field, and he explains adjusted R squared as an estimate of the amount of variance in the outcome that the model would explain in the population (rather than just in the sample).

These seem like very different interpretations. Are both correct? Thanks.
 
#2
Both explanations are essentially the same, and both are correct; they're simply worded differently.

Try to think of it a bit like the standard deviation. We get an unbiased estimate of the population variance by dividing the sum of squared deviations by n − 1, because the naive divide-by-n statistic will systematically underestimate the amount of variance in the population. If, however, we have sampled the entire population, the correction is unnecessary and the plain divide-by-n version is perfectly suitable.
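Here's a quick simulation to make that concrete (a minimal numpy sketch; the population is just a made-up normal distribution with variance 4):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0   # variance of the (made-up) population
n = 10           # small sample size, where the bias is most visible

biased, corrected = [], []
for _ in range(10_000):
    sample = rng.normal(0.0, np.sqrt(true_var), size=n)
    biased.append(sample.var(ddof=0))     # divide by n
    corrected.append(sample.var(ddof=1))  # divide by n - 1

print(f"true variance:          {true_var}")
print(f"mean of divide-by-n:    {np.mean(biased):.3f}")    # ~3.6, too low
print(f"mean of divide-by-n-1:  {np.mean(corrected):.3f}") # ~4.0
```

On average the divide-by-n version comes in around 4 × (n − 1)/n = 3.6, which is exactly the bias the n − 1 correction removes.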

Same with regression. The R2 computed from a sample is consistently inflated relative to the population value, so the adjusted R2 is used instead. If, however, you regress data from an entire population, it is no longer necessary to adjust the R2 coefficient, because it is no longer biased.
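You can see that inflation directly by regressing an outcome on predictors that are pure noise (again a sketch, with the sample size and predictor count picked arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 5  # small sample, several predictors

r2s, adj_r2s = [], []
for _ in range(5_000):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + noise
    y = rng.normal(size=n)                                      # unrelated noise
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)                # OLS fit
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    r2s.append(r2)
    adj_r2s.append(1 - (1 - r2) * (n - 1) / (n - p - 1))

print(f"mean R^2:          {np.mean(r2s):.3f}")      # ~0.17, despite no real signal
print(f"mean adjusted R^2: {np.mean(adj_r2s):.3f}")  # ~0.00
```

Even though the true population R2 is exactly zero here, the raw sample R2 averages around p/(n − 1); the adjustment pulls it back to roughly zero, which is the population value it is trying to estimate.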