Are two sources of variation ever completely independent?

#1
Pick three values: a true value and the means of two error sources, and put each in its own column with that value as the header. Remember to include a positive or negative sign on each error mean to represent the direction of the error. The experimentally determined value will be the sum of all three. Now use a random number generator to create some variance in your two error sources: in each error column, generate error values that distribute around the mean you chose. In a fourth column, let the experimentally determined value be the true value plus the two randomly distributed errors in that row. Calculate the variance of each column, and you will see why Var(X+Y) = Var(X) + Var(Y) + 2*Cov(X,Y). When Cov(X,Y) = 0 the sum reduces to Var(X) + Var(Y); that is a test for independence, but not a guarantee, since uncorrelated variables need not be independent. When Cov(X,Y) is not zero, the sources of variation cannot be independent.
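If it helps, here is a minimal sketch of that exercise in Python (NumPy); the true value, error means, and standard deviations are just numbers I made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

true_value = 10.0              # hypothetical true value
x = rng.normal(+0.5, 1.0, n)   # error source X: mean +0.5, sd 1
y = rng.normal(-0.3, 2.0, n)   # error source Y: mean -0.3, sd 2

measured = true_value + x + y  # experimentally determined values

# Adding the constant true value does not change the variance,
# so Var(measured) = Var(X+Y).
var_sum = measured.var(ddof=1)
var_x = x.var(ddof=1)
var_y = y.var(ddof=1)
cov_xy = np.cov(x, y, ddof=1)[0, 1]

print(f"Var(X+Y)                 = {var_sum:.4f}")
print(f"Var(X)+Var(Y)+2*Cov(X,Y) = {var_x + var_y + 2 * cov_xy:.4f}")
```

The two printed numbers agree exactly (up to floating point), because the identity is algebraic and holds for sample variances and covariances too, not just for the population quantities.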

Statistics students are usually taught to test for independence before making this assumption, and checking whether the covariance is zero is one of those tests.

Unfortunately, I can't find an example where Cov(X,Y) must be zero yet the total variance isn't just the sum of the source variances. Is complete independence possible in experimentally determined results?

noetsi

Fortran must die
#2
In theory it is, and in fact it is normally assumed. Whether it ever holds with real data is doubtful to me, but I know of no analysis that settles that....

I think you would rarely find a covariance of exactly zero with actual data, simply because of spurious correlation; you would probably have to construct special data sets to get it.
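A quick way to see this: draw two samples that are independent by construction and check the sample covariance; it hovers near zero but essentially never hits it exactly. A minimal sketch (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# X and Y are generated independently, so their true covariance is 0,
# but the sample covariance almost never comes out exactly zero.
for n in (10, 100, 10_000):
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    print(f"n={n:>6}: sample Cov(X,Y) = {np.cov(x, y)[0, 1]:+.5f}")
```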