Regular Sampling


TS Contributor
I had the following question and I really wonder if I gave the right answer:

Assume that you can measure a process once a day (for the sake of example, assume sampling the quality of water from a stream). Does it make sense to sample at random time points, or is it enough to always sample at the same time?

I think the answer depends on the type of variation we are interested in. If only longer-term trends were of interest, regular sampling would be sufficient. Random intervals would give a chance to pick up smaller variations (maybe on the scale of a few hours), but I am not sure. It feels like the Nyquist-Shannon sampling theorem has some relevance here as well.
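To make the Nyquist point concrete, here is a toy simulation (the process, its daily cycle, and the noise level are all made-up assumptions, not a real stream): if the process has a 24-hour cycle and you always sample at the same hour, the cycle is aliased away and you see only a biased constant, while random sampling times recover both the true mean and the within-day variation.

```python
import math
import random

random.seed(42)

def water_quality(t_hours):
    """Hypothetical process: baseline 10, a diurnal cycle of amplitude 3,
    and a little measurement noise."""
    return 10.0 + 3.0 * math.sin(2 * math.pi * t_hours / 24.0) + random.gauss(0, 0.2)

days = 365

# Fixed-time sampling: always at 06:00. One sample per day at the same
# phase of the cycle, so the cycle contributes a constant offset only.
fixed = [water_quality(24 * d + 6.0) for d in range(days)]

# Random-time sampling: one sample per day at a uniform random hour.
rand = [water_quality(24 * d + random.uniform(0, 24)) for d in range(days)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(f"fixed-time  mean={mean(fixed):.2f}  variance={var(fixed):.3f}")
print(f"random-time mean={mean(rand):.2f}  variance={var(rand):.3f}")
```

The fixed-time series sits near 13 (baseline plus the full cycle amplitude at 06:00) with only the noise variance, while the random-time series centres on the true baseline of 10 and exposes the much larger within-day variation.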

What do you think?



Less is more. Stay pure. Stay poor.
Is there any urgency in solving this problem? Can you sample at the same time for a while and then follow up with random sampling?


Super Moderator
Excuse my laziness here, as this section is cut from McBride (2005), which discusses this briefly in terms of water quality (page 33, section 1.4).

(McBride, G.B. (2005) Using Statistical Methods for Water Quality Management: Issues, Problems and Solutions, John Wiley & Sons, Inc., Hoboken, New Jersey.)

The water quality measurements we make reflect a myriad of processes in the environment.
It is well known how some of these processes operate (e.g., dilution, in-stream
reaeration), and they can be quantified. Other processes are not so well understood.
So whenever measurements are made, say of an effluent BOD5 or a stream pH, some
part of their variation has to be attributed to chance (i.e., variation we cannot explain).
Statistical methods offer a means of doing this in an objective way, rather than the
all-too-common confusion of conflicting subjective opinions. They are capable of
distinguishing between randomness (noise, random variability) and pattern (e.g.,
seasonality, trend) by repeatable procedures.
Because of the role of chance in measurements, statistical methods call for a water
quality variable to be viewed as a random variable, or as a stochastic process. The
distinction is that whereas a random variable does not imply some natural ordering
of the results, a stochastic variable does (e.g., a time series of data at a particular site,
or a set of samples down a river at the same time). In either case the value that a
water quality variable may take has at least some element of randomness in it, and
this needs to be recognized (Ward & Loftis 1983 [335]).
It would be tidier perhaps to use just one term, but we’re stuck with the fact that
the common literature uses both, with “stochastic” being much used in hydrological
science. For practical purposes it is acceptable to use either, so we’ll use the term
random variable.
The major point is the idea of randomness. If you want to make an estimate of
some water quality variable, or compare one water quality “population” with another,
random samples may be needed at some stage (in space and/or in time). Indeed,
some advocate that sampling should always be random (e.g., Cotter 1985 [60]). At
first sight this is alarming: How do you fit random temporal sampling into a normal
workday? It would make work scheduling much more difficult than it is now, and
the reaction of the laboratory manager can be imagined. Random spatial sampling
may also invoke difficulties of site access.
Fortunately, some form of systematic sampling is often an acceptable substitute
for random sampling, as we shall see, especially for trend assessments. It is
even possible, though it sounds like a contradiction in terms, to adopt systematic
random sampling.

I guess the crux of it is that in the WQ industry we tend to use systematic sampling for day-to-day measurements,
but more often than not, continuous data are logged at 15-minute intervals.

In some respects WQ data are special cases because many of the parameters measured fluctuate widely over the
course of a day.

Storm event sampling is of course random, but the water stage is usually of more interest than the time of day.

I realise that this is very specific to stream water quality but I hope it is of some use.
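For what the "systematic random sampling" McBride mentions might look like in practice, one common reading (an assumption on my part, not taken from the book) is: choose a random start within the first interval, then sample at a fixed spacing from there. A minimal sketch:

```python
import random

random.seed(1)

def systematic_random_sample(n_items, k):
    """Systematic random sampling: pick a random start in the first
    interval of length k, then take every k-th item after it."""
    start = random.randrange(k)
    return list(range(start, n_items, k))

# e.g. roughly one sampling day per week over a 28-day month
days = systematic_random_sample(28, 7)
print(days)
```

The start is random, so over repeated designs every day has the same chance of selection, yet each realised schedule is evenly spaced and easy to fit into a workday.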


Global Moderator
I agree strongly with bugman. Periodic regular sampling will inevitably lead to bias in many situations.

Some easy examples are:

Sap flow in trees is strongly time dependent; taking samples at a fixed time (e.g., early morning) would give you a very limited picture of what is going on.
Traffic density: measure it only at rush hour and you will grossly overestimate it; measure it at midnight and you will grossly underestimate it.

All these examples become less biased with random sampling. Stream flow can be influenced by many things, like when a tapir takes a bath, for example (link to a book about the field site I work at, where stream measurements came out wrong).
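To put some numbers on the traffic example, here is a sketch with a made-up diurnal profile (the peak shapes and magnitudes are arbitrary assumptions): fixed rush-hour sampling overestimates the daily mean, fixed midnight sampling underestimates it, and random sampling times land near the truth.

```python
import math
import random

random.seed(0)

def traffic_density(hour):
    """Hypothetical diurnal traffic profile: a baseline of 200 plus
    Gaussian-shaped rush-hour peaks at 08:00 and 17:00."""
    return (200
            + 400 * math.exp(-((hour - 8) ** 2) / 4)    # morning peak
            + 500 * math.exp(-((hour - 17) ** 2) / 4))  # evening peak

# True daily mean, approximated on a quarter-hour grid
hours = [h / 4 for h in range(24 * 4)]
true_mean = sum(traffic_density(h) for h in hours) / len(hours)

rush_hour = traffic_density(17.0)   # always measure at 17:00
midnight = traffic_density(0.0)     # always measure at 00:00
random_mean = sum(traffic_density(random.uniform(0, 24))
                  for _ in range(10_000)) / 10_000

print(f"true daily mean  ~ {true_mean:.0f}")
print(f"fixed at 17:00   = {rush_hour:.0f}  (overestimate)")
print(f"fixed at 00:00   = {midnight:.0f}  (underestimate)")
print(f"random times     ~ {random_mean:.0f}")
```

The fixed-time estimates are off by hundreds of percent in either direction, while the random-time average is an unbiased estimate of the true daily mean.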



TS Contributor
Is there any urgency in solving this problem? Can you sample at the same time for a while and then follow up with random sampling?
Hi hlsmith,
it is a theoretical question, just assuming we have the budget (or technical possibility) for one measurement per day. As I see from the responses, randomised measurement would be preferable in any case, to avoid the tapirs, for instance.



Less is more. Stay pure. Stay poor.
I thought tapirs were Southern hemisphere hog beasts. Are they coming around and messing up your data?

Side note: I just read a biography of von Neumann, which described Hungary/Budapest in detail around the fin de siècle. I did not know Hungary was on such a rise in power around that time period (pre-wars).


TS Contributor
Yeah, the First World War was the great catastrophe for Hungary; the country never really recovered from it.