Which approach to evaluate small spatial variations in temperature?

#1
I am evaluating whether utility-scale solar energy plants affect the surrounding climate (initially temperature).

An effect has been found in one paper using the approach described below/attached but when repeating this approach I find no effect for the same site.

I want to be sure that there isn't a test or approach that I am unaware of that might be more appropriate than the existing methodology and which might do a better job of identifying or quantifying any effect.

Please assume good background knowledge of stats at degree level (though I am not a statistician).

Existing approach
  • A site is identified and Landsat 8 imagery for 12 consecutive months pre- and post-construction is identified.
  • The panels, site boundary, and any potentially problematic areas (e.g. disturbed ground, infrastructure) within 2km of the site boundary are masked using geometry. (The working assumption is that these areas would increase the RMSE of the data though this is on a theoretical basis and has yet to be quantified).
  • A series of 100m-wide buffers is created from the site boundary outwards to 2km.
  • A series of scripts in Google Earth Engine is used to extract and process data for each site, eventually calculating the land surface temperature (LST) for each pixel.
  • Pixel values in each 100m buffer are averaged.
  • To normalise for the effect of changing temperatures on different days, the percentage deviation from the average LST of all buffers is calculated for each buffer.
  • The difference in temperature deviation between adjacent buffers is calculated (e.g. Buffer 1 LST (0-100m) - Buffer 2 LST (100-200m)).
  • Finally, a paired T-test is used to test the null hypothesis that there is no difference between pre- / post-construction LST around the installation.
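One plausible reading of the steps above, sketched with made-up per-buffer values (the real values come from Landsat 8 pixels averaged within each 100m buffer; the exact pairing used in the paper isn't spelled out here, so this is an assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_buffers = 20                      # 100 m rings out to 2 km

# Hypothetical mean LST per buffer (deg C), pre- and post-construction
lst_pre = 30 + 0.05 * np.arange(n_buffers) + rng.normal(0, 0.2, n_buffers)
lst_post = 29 + 0.05 * np.arange(n_buffers) + rng.normal(0, 0.2, n_buffers)

def pct_deviation(lst):
    """Percentage deviation of each buffer from the mean of all buffers,
    normalising for different overall temperatures on different days."""
    return 100 * (lst - lst.mean()) / lst.mean()

# Differences between adjacent buffers (Buffer 1 - Buffer 2, etc.)
diff_pre = -np.diff(pct_deviation(lst_pre))
diff_post = -np.diff(pct_deviation(lst_post))

# Paired t-test: 19 adjacent-buffer differences pre vs post -> 18 df,
# matching the degrees of freedom quoted later in the thread
t_stat, p_value = stats.ttest_rel(diff_pre, diff_post)
print(t_stat, p_value)
```

With 20 buffers there are 19 adjacent differences, hence 18 degrees of freedom, which is consistent with the replication figures quoted below.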

The paper actually describes using ANOVA with month as a repeated measure and presence of the solar power station as the explanatory variable, though the lead author, upon checking, tells me a paired t-test was actually used.

Replicating this process, I have calculated a t-statistic of 0.312 (3 d.p., 18 df) for the sample site, with critical values of 1.73 (one-tailed) and 2.10 (two-tailed), which suggests there is no effect.
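For anyone wanting to check those critical values, they match Student's t with 18 df at alpha = 0.05:

```python
# Sanity check of the quoted critical values for 18 df at alpha = 0.05
from scipy import stats

df = 18
one_tail = stats.t.ppf(0.95, df)    # one-tailed critical value
two_tail = stats.t.ppf(0.975, df)   # two-tailed critical value
print(round(one_tail, 2), round(two_tail, 2))  # 1.73 2.1

# The observed t of 0.312 is far below both, so the null is not rejected.
```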

This approach seems a bit reductive. I have a huge amount of detailed spatial data that seems to have been reduced because it is difficult to analyse.
Can anyone suggest an approach to investigate that might be able to better process spatial data or better use the available data statistically?

I have about 30,000 pixels per site - each of which holds around 200 data values (including the calculated LST, its precursors, and significant additional data from the original Landsat images). I can process these as GeoTIFFs in GIS (QGIS or ArcGIS Pro), or I could extract the data (using Earth Engine/GIS) to process in R/SPSS/whatever.
 
#4
The effect found, though, is of very small reductions in LST from 0-700m from the site boundary. Hence my question about approaches that might do a better job of detecting small changes.
 

katxt

Active Member
#5
This approach seems a bit reductive. I have a huge amount of detailed spatial data that seems to have been reduced because it is difficult to analyse.
Can anyone suggest an approach to investigate that might be able to better process spatial data or better use the available data statistically?
You probably can't do better than using the average per site. As I read it, your main concern is the before/after difference. The experimental units are the sites, and the data are the site averages. The detailed pixel data are nested within site, so although they could be used to investigate site-to-site differences, for the before/after comparison the data are averaged at the highest level of nesting, which is the site. The way to get more power is to have more sites: width rather than depth.
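A toy illustration of this nesting point, with assumed variance components: when pixels within a site share a common site-level disturbance (weather on the acquisition date, sensor calibration, etc.), averaging more pixels cannot shrink the variance of the site mean below that shared component.

```python
# Variance of a site mean with a shared site-level component:
#   Var(mean) = sigma_site**2 + sigma_pixel**2 / n_pixels
# Both sigmas below are assumed, purely for illustration.

sigma_site = 0.5     # site-level (shared) sd in deg C, assumed
sigma_pixel = 2.0    # pixel-level sd within a site, assumed

for n_pixels in (100, 10_000, 30_000):
    var_site_mean = sigma_site**2 + sigma_pixel**2 / n_pixels
    print(n_pixels, round(var_site_mean, 4))

# The floor is sigma_site**2 = 0.25: going from 10,000 to 30,000 pixels
# barely moves the variance, whereas each extra site divides the
# site-level component as well.
```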
An effect has been found in one paper using the approach described below/attached but when repeating this approach I find no effect for the same site.
Replicating this process I have calculated a t-statistic of 0.312 (3dp, 18 df) for the sample site, with critical values of 1.73 (1 tail) and 2.10 (2 tail) which suggests there is no effect.
This t value indicates to me that the differences (if any) are so small that an experiment of this size has virtually no chance of reliably detecting them. (I assume that your investigation is much the same size.) I think you may have a good case for suspecting a false positive in the original paper. Was their p value close to 0.05? Did the original workers investigate several climate variables and choose temperature to report? Which conclusion is more surprising - a difference or no difference? It's hard to comment without access to the source, but studies have shown that a disconcerting fraction of replications fail to substantiate the first finding, for all sorts of reasons.
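A rough power calculation backs this up. This sketch uses only scipy's noncentral t; the effect size is an assumption chosen to represent a "small" effect, and the 19 pairs match the 18 df quoted above:

```python
# Approximate power of a two-tailed paired t-test via the noncentral t
import numpy as np
from scipy.stats import nct, t

n = 19            # pairs (18 df, as in the replication)
df = n - 1
d = 0.2           # assumed standardised effect size (small, Cohen's d)
alpha = 0.05

t_crit = t.ppf(1 - alpha / 2, df)          # two-tailed critical value
ncp = d * np.sqrt(n)                       # noncentrality parameter
power = (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)
print(round(power, 2))   # well below the conventional 0.8 target
```

With a small assumed effect and only 19 pairs, the chance of detecting anything is slim, which is consistent with katxt's reading of the t value.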
 
#7
Although you're correct in assuming the important difference is the pre-/post-construction change, the average is not for the site in its entirety.

With reference to the attached screenshots of Earth Engine:
  1. ID site
  2. Create 100m wide buffers around the boundary of the site (out to 2km from the boundary)
  3. ID problematic features with Geometry (colours represent different objects to allow inclusion/exclusion)
  4. Amalgamate geometry to create a single mask
  5. Use mask to exclude areas of the buffers drawn in step 2 that might distort the analysis.
It is for the remnants of each buffer that average LST is calculated (see next comment).
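Steps 2-5 can also be done entirely in raster space, which avoids explicit buffer polygons. This is a synthetic sketch, not the Earth Engine workflow: a distance transform stands in for the buffer rings, and a boolean mask stands in for the problematic-feature geometry.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(1)
grid = np.zeros((200, 200), dtype=bool)   # 30 m pixels -> 6 km x 6 km
grid[80:120, 80:120] = True               # hypothetical site footprint

# Distance (m) from each pixel to the nearest site pixel
dist_m = distance_transform_edt(~grid, sampling=30)

lst = 30 + rng.normal(0, 0.5, grid.shape)     # synthetic LST raster
problem = rng.random(grid.shape) < 0.05       # synthetic problem mask

band_means = []
for start in range(0, 2000, 100):             # 100 m bands out to 2 km
    in_band = (dist_m > start) & (dist_m <= start + 100) & ~problem
    band_means.append(lst[in_band].mean())    # mean over band remnants
print(len(band_means))  # 20 band averages, one per 100 m buffer
```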
 

Attachments

#8
The process of calculating LST is described in the attached diagram.

The Landsat 8 data used is at a 30m resolution, so I have LST (and many other data) every 30m across the area of interest. The granularity of these data is not used in the existing approach as they are averaged for each of the buffers.

It is this granularity that I would like to explore to see if there is a pattern in the noise.

I have used ArcGIS Pro to process the raster data - each pixel contains over a hundred values - but visualising variance, standard deviation, clustering, or even just slicing the data to show small ranges doesn't show anything. (I am just exploring the data, which many would argue isn't the right way to go about it.) The same applies if I take a few steps back in the process and look at emissivity. There is lots of variation (as you'd expect), but no obvious pattern.

This makes me think that there is nothing to find, or if there is something the effect is small. If the latter is the case then statistical analysis is the obvious approach.
 

Attachments

#9
In the existing approach, the degrees of freedom relate to the buffers around the site. I could reduce the width of these buffers, but given that a pixel is only 30m, I'm not sure this would make much difference.

The existing approach only explored temperature. I've attached a screenshot of a relevant section of the results.

The effect found was small - generally less than a degree (possibly within the RMSE of the approach) - and quickly tailed off. This, and the conceptual basis on which an effect on LST is expected, lead me to believe there is nothing to find, and that if there is an effect it is of no consequence.

There are loads of confounding variables that are very difficult to control for - wind being the most significant. Furthermore, this effect has only been detected in arid areas - other sites were explored but rejected as they had further confounding issues.

I am replicating the approach on one of the initial sites, but also on two additional sites - one with as few issues as possible, and another with a few more than the existing site. I expect to find nothing!
 

Attachments

katxt

#11
All well explained. The results screenshot is less than convincing - the multiple p values are an issue. If you take enough p values, there is a good chance something will be <0.05.
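To make the multiple-comparisons point concrete, here is a small Holm-Bonferroni sketch; the p values are made up for illustration, and note that a raw p just under 0.05 often fails to survive correction:

```python
def holm(p_values, alpha=0.05):
    """Return a reject flag for each p value under Holm's step-down
    correction for multiple comparisons."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                      # step-down: stop at first failure
    return reject

raw_p = [0.003, 0.04, 0.046, 0.2, 0.7]  # hypothetical per-buffer tests
print(holm(raw_p))   # only the smallest p survives correction
```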
I think we've gone as far as general advice in small bites over several days can go. In my opinion it would be money well spent to buy an hour of a spatial statistician's time and sort the whole thing out face to face (assuming you're not locked down as we are). kat
 
#14
I'll have a look tomorrow! (In the UK it's still yesterday.) I'm speaking with a temporospatial statistician from another department on Thursday and will update this thread afterwards.