I am hoping for some assistance with understanding the correct standard deviation (SD) to use when calculating minimal detectable change (MDC) scores.

I have the calculation for MDC as: MDC90 = SEM × 1.65 × √2, where the standard error of measurement (SEM) is calculated as: SEM = SD × √(1 − ICC).
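For reference, this is how I have set the calculation up (a quick Python sketch; the ICC of 0.95 and the SD used in the example call are just placeholders, not my actual values):

    import math

    def mdc90(sd, icc):
        sem = sd * math.sqrt(1 - icc)      # SEM = SD * sqrt(1 - ICC)
        return sem * 1.65 * math.sqrt(2)   # MDC90 = SEM * 1.65 * sqrt(2)

    # placeholder ICC of 0.95 with the SD of the between-day differences
    print(mdc90(sd=358.8, icc=0.95))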

I am unsure which SD I am supposed to use. Most papers describe it as the SD of the measure. Given that what I am interested in is the difference between days, I assumed that 'the measure' should be the between-day difference, so the SD would be the SD of those differences rather than the SD of the variable itself.
For example: resting metabolic rate (RMR) 1: 7382 ± 1447 kJ, RMR 2: 7419 ± 1527 kJ. Between-day difference: -36.4 ± 358.8 kJ.
So I thought I would use the SD of 358.8 kJ.

However, I have found a couple of articles that provide enough of their data that I can essentially recalculate their MDC, and they appear to be using the SD of the actual measures (in my case that would be the average of 1447 and 1527).

I cannot understand why it would be based on the SD of the actual measure rather than the SD of the differences.
It would mean that a more diverse group with a large range of RMRs produces a larger MDC, even if the between-day differences are no bigger.
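To make the difference concrete, here is the same calculation run with both candidate SDs from my example (again with a placeholder ICC of 0.95 rather than my real value):

    import math

    icc = 0.95  # placeholder only, so the numbers are concrete
    for label, sd in [("mean SD of RMR 1 and RMR 2", (1447 + 1527) / 2),
                      ("SD of between-day differences", 358.8)]:
        sem = sd * math.sqrt(1 - icc)
        mdc = sem * 1.65 * math.sqrt(2)
        print(f"{label}: SEM = {sem:.1f} kJ, MDC90 = {mdc:.1f} kJ")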

Can somebody clarify whether the SD should be the mean SD of RMR 1 and RMR 2, or the SD of the between-day differences? And if it is the SD of the actual measure, can you help me understand why?
Thanks heaps,

Lee