Tukey-like pairwise comparisons of variances

#21
I don't know that I fully agree yet.
The only question is why there is such a big difference between the result of Levene's test over two groups (right-tailed, center=mean) and the F test for two variances (two-tailed); I would expect them to have similar p-values.
There's something strange here, all right. If we look at just two groups, then we have two different ways of comparing the variances: we can do an F test on the raw data, or we can do a t test (ANOVA) on the absolute differences (Levene). I hope I've got that right?
I set up two random normal samples, one with a variance 3 times the other, and found a p-value from each method. Like obh, I would have expected a reasonably close match. Here is a graph plotting the pairs of p-values for 100 such trials.
[attached image: 1585779125010.png, scatter plot of the paired p-values from the two methods, 100 trials]
 

obh

Active Member
#24
So, can you agree to the following flow?
1. If Levene's test is a valid test for variances, we may use it to compare two sample variances.
2. Levene's test over two samples is equivalent to a two-sample t-test (pooled variance) over the differences (a quick R check of this step follows the list).
3. So you may use a two-sample t-test over the differences to compare two sample variances.
4. When making multiple comparisons, you may use Tukey HSD instead of the t-test.
5. So you may use Tukey HSD over the differences for multiple variance comparisons.
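A minimal sketch of step 2, assuming the car package: with two groups, Levene's F statistic is the square of the pooled t statistic computed on the absolute differences, so the two p-values match.

Code:
library(car)
set.seed(42)
x1 <- rnorm(20, sd = 2)
x2 <- rnorm(20, sd = 6)
value <- c(x1, x2)
group1 <- factor(rep(c("Group1", "Group2"), each = 20))
leveneTest(value ~ group1, center = mean)
# pooled two-sample t-test on the absolute differences from each group's mean
dev <- c(abs(x1 - mean(x1)), abs(x2 - mean(x2)))
t.test(dev ~ group1, var.equal = TRUE)  # same p-value; F = t^2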

But we know that Levene's test is a valid test for variances (I don't know of any limitation for two samples).

Now the only question is why two valid tests for comparing variances give such different results.
 
#25
The logic is good.
I hadn't studied the internal workings of Levene's test before. All I can think of at the moment is that the absolute differences are far from normal, so the t test may be unreliable.
Unsettling.
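A quick way to see the problem (a minimal sketch): the absolute differences from the mean of a normal sample follow a folded-normal (half-normal) shape, which is strongly right-skewed.

Code:
# absolute deviations from the mean of a normal sample are not normal
set.seed(7)
x <- rnorm(1e5)
dev <- abs(x - mean(x))
hist(dev, breaks = 100)          # heavily right-skewed, mode near zero
shapiro.test(sample(dev, 5000))  # rejects normality decisively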
 

obh

Active Member
#26
Hi Katxt,

How did you make the chart?
What sample size did you use for each normal distribution?

I used sample size = 20 with 50 repeats and got reasonable results (CLT):

[attached image: 1585797755158.png, scatter plot of the paired p-values, n = 20, 50 repeats]

Even with sample size = 10 it is still reasonable (80 repeats):

[attached image: 1585798536276.png, scatter plot of the paired p-values, n = 10, 80 repeats]


=====================
#1:

Code:
library(car)  # for leveneTest

reps <- 50
res1 <- numeric(reps)  # F-test p-values
res2 <- numeric(reps)  # Levene's test p-values
set.seed(1)

for (i in 1:reps)
{
  x1 <- rnorm(20, mean = 10, sd = 2)
  x2 <- rnorm(20, mean = 5, sd = 6)

  result1 <- var.test(x1, x2, alternative = "two.sided")
  res1[i] <- result1$p.value  # note the [i]; without it each iteration overwrites the last

  value <- c(x1, x2)
  group1 <- factor(c(rep("Group1", length(x1)), rep("Group2", length(x2))))
  my.dataframe <- data.frame(value, group1)
  result2 <- leveneTest(value ~ group1, data = my.dataframe, center = median)
  res2[i] <- result2$Pr[1]  # "Pr" partially matches the "Pr(>F)" column
}
paste(as.character(res1), collapse = ", ")
paste(as.character(res2), collapse = ", ")

#2:

Code:
reps <- 80
res1 <- numeric(reps)
res2 <- numeric(reps)
set.seed(1)

for (i in 1:reps)
{
  x1 <- rnorm(10, mean = 10, sd = 2)
  x2 <- rnorm(10, mean = 5, sd = 6)

  result1 <- var.test(x1, x2, alternative = "two.sided")
  res1[i] <- result1$p.value

  value <- c(x1, x2)
  group1 <- factor(c(rep("Group1", length(x1)), rep("Group2", length(x2))))
  my.dataframe <- data.frame(value, group1)
  result2 <- leveneTest(value ~ group1, data = my.dataframe, center = median)
  res2[i] <- result2$Pr[1]
}
paste(as.character(res1), collapse = ", ")
paste(as.character(res2), collapse = ", ")
 
#27
I used n = 16 in an Excel Monte Carlo workbook. I like Excel for this because you can see the whole process working.
You say your graphs are reasonable. Weren't you expecting some pattern of points at 45 degrees, or at least the semblance of a line somewhere?
 

obh

Active Member
#28
Hi Katxt,

It is not exactly the same test, so I don't expect to get exactly the same result.
You can see that most of the dots are near the origin, i.e., a significant result, as expected.

I tried the same with a smaller difference between the SDs: SD1 = 2, SD2 = 2.2 (n = 10, 80 repeats).
With n = 10 the test power is small ... 0.0576183.
Is this what you expected to get?


[attached image: 1585800759765.png, scatter plot of the paired p-values, SD1 = 2, SD2 = 2.2, n = 10, 80 repeats]
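(As a side check, a minimal sketch, assuming the 0.0576183 above is the power of the two-sided F test: a number close to it can be reproduced directly from the F distribution.)

Code:
# power of the two-sided F test for sd1 = 2, sd2 = 2.2, n1 = n2 = 10
r  <- (2 / 2.2)^2  # true variance ratio sigma1^2 / sigma2^2
lo <- qf(0.025, 9, 9)
hi <- qf(0.975, 9, 9)
pf(lo / r, 9, 9) + pf(hi / r, 9, 9, lower.tail = FALSE)  # about 0.058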

Code:
library(car)  # for leveneTest

reps <- 80
res1 <- numeric(reps)
res2 <- numeric(reps)
set.seed(1)

for (i in 1:reps)
{
  x1 <- rnorm(10, mean = 10, sd = 2)
  x2 <- rnorm(10, mean = 5, sd = 2.2)

  result1 <- var.test(x1, x2, alternative = "two.sided")
  res1[i] <- result1$p.value

  value <- c(x1, x2)
  group1 <- factor(c(rep("Group1", length(x1)), rep("Group2", length(x2))))
  my.dataframe <- data.frame(value, group1)
  result2 <- leveneTest(value ~ group1, data = my.dataframe, center = median)
  res2[i] <- result2$Pr[1]
}
paste(as.character(res1), collapse = ", ")
paste(as.character(res2), collapse = ", ")
 
#29
OBH and Katxt -- Thanks for continuing the conversation. I don't have your level of statistics expertise, so I must confess that I don't entirely understand the exchange. Do you think you've reached agreement on OBH's method? OBH -- did I summarize it correctly in my previous post (copied below)? Let me know if it might be helpful for me to provide additional information. Thanks!


OBH and Katxt -- Thanks so much for these extremely informative and helpful responses. I'm replying to OBH's above message because I understand it to be a summary of the approach I should use to test both hypotheses. The process for testing the first hypothesis (comparison of the means) is now clear to me. From what I understand, the tests for H1 should be conducted separately from the Levene's test using the ANOVA/Tukey operations described above. This all seems very straightforward, but please let me know if I'm mistaken.

Regarding the tests for H2 (comparison of the variances), I'll summarize your advice here, and you can let me know if this is correct:

First, I'll address a point that we haven't yet discussed explicitly: I assume the standard deviations calculated as part of the Levene's test (or Brown-Forsythe if using the median) can be reported as measures of internal variance for each group. I'm asking this because the Tukey output for the Levene's calculator appears to report differences between group means, not between group variances. When I hover the cursor above the "differences" column in the Tukey table, it tells me that "differences" actually refers to group means. In the event that the differences in variance aren't automatically calculated in Levene's, OBH notes that "calculating the differences is very easy so you may also try to run on R," but I'm not sure how exactly this should be done. Can you clarify?

Second, when comparing the group variances (pairwise), the p-values in the Levene's test Tukey table should indicate whether the differences in variance are statistically significant. If my data are reasonably normal and the sample size is above 30 (as it is), I should select the mean as the center. In this case, the test I'm conducting is Levene's. Otherwise, if the groups have non-normal distributions, I should select the median. If I select the median, the test is called Brown-Forsythe, even though I'll be using the Levene's calculator to get the results. Notably, Katxt originally wrote: "What Zach is looking for (I think) is which particular groups have variances that are probably different from other particular groups." This is absolutely correct, and it led Katxt to recommend different tests, but from what I understand, Katxt now agrees with the above advice from OBH.

Is all of this correct? Are OBH and Katxt in agreement? Also, if possible, could you clarify how the differences in variance might be calculated? Again, THANK YOU for this extraordinary advice!

Zach
 
#30
Yes. I agree with obh's suggestion of doing an ANOVA on the deviations (absolute differences from the group mean).
Transform the data into deviations, then do your ANOVA with Tukey HSD post hoc tests. Forget Levene's test. The idea is an ingenious one and I like it.
The mean of a group's deviations is an indirect measure of the group's SD, so a difference in mean deviations implies a difference in SDs. You can now apply Tukey's post hoc procedure to those means to get a Tukey-type HSD for the SDs.
Just one reservation: the deviations data are very non-normal, and it isn't clear what effect that will have on the results.
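(In fact, for normal data the mean deviation is directly proportional to the SD, E|X - mu| = sigma * sqrt(2/pi), about 0.798 * sigma, which is what makes it a usable proxy. A quick check:)

Code:
# for normal data, the mean absolute deviation estimates sigma * sqrt(2/pi)
set.seed(3)
x <- rnorm(1e6, sd = 5)
mean(abs(x - mean(x)))  # roughly 3.99
5 * sqrt(2 / pi)        # 3.989423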
 

obh

Active Member
#31
The Tukey output for the Levene's calculator appears to report differences between group means, not between group variances. When I hover the cursor above the "differences" column in the Tukey table, it tells me that "differences" actually refers to group means. In the event that the differences in variance aren't automatically calculated in Levene's, OBH notes that "calculating the differences is very easy so you may also try to run on R," but I'm not sure how exactly this should be done. Can you clarify?
Zach
The Tukey table in the Levene's calculator compares the variances; nothing there is related to a comparison of the group means.
"it tells me that "differences" actually refers to group means" -
It is the means of the differences ... confusing ... it should be worded differently! But when you run the Levene's test, the focus is on the variances.

Example of how to calculate the differences:
Using the average as the center
Group1: [1,2,3,2,3,4,4]; the average is 2.71429
The differences from the center are: [abs(1-2.71429), abs(2-2.71429), abs(3-2.71429), abs(2-2.71429), ...]

If you use the median, just use 3 instead of 2.71429.
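In R, for example (a minimal sketch):

Code:
g1 <- c(1, 2, 3, 2, 3, 4, 4)
abs(g1 - mean(g1))    # differences from the mean: 1.714 0.714 0.286 ...
abs(g1 - median(g1))  # differences from the median (Brown-Forsythe)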

Second, when comparing the group variances (pairwise), the p-values in the Levene's test Tukey table should indicate whether the differences in variance are statistically significant. If my data are reasonably normal and the sample size is above 30 (as it is), I should select the mean as the center. In this case, the test I'm conducting is Levene's. Otherwise, if the groups have non-normal distributions, I should select the median. If I select the median, the test is called Brown-Forsythe, even though I'll be using the Levene's calculator to get the results. Notably, Katxt originally wrote: "What Zach is looking for (I think) is which particular groups have variances that are probably different from other particular groups." This is absolutely correct, and it led Katxt to recommend different tests, but from what I understand, Katxt now agrees with the above advice from OBH.
Zach
There are better tests for non-normal data, but generally you are correct, and you can use the mean as the center in your case.

Per my understanding, both of the following options are correct:
1. If you expect equal variances, run Levene's test first; if you find the variances aren't equal, you can then run the Tukey HSD "for variances" over the differences from the means.

2. If you expect unequal variances, you may run the Tukey HSD "for variances" over the differences from the means directly.

I haven't seen the Tukey HSD "for variances" over the differences written up officially as an option, but as I wrote at length above, I can't think why it wouldn't work ...
 

obh

Active Member
#32
Hi Zach,

Probably not elegant:

The following does in R the equivalent of the Levene's test, plus a Tukey HSD for variances on the differences, using an ANOVA over the differences in place of the Levene's test:

Code:
g1 <- c(1, 2, 2, 3, 3, 4, 4)
g2 <- c(3, 4, 5, 6, 6, 7, 7, 8, 78)
g3 <- c(3, 4, 4, 4, 5, 34)
len1 <- length(g1)
len2 <- length(g2)
len3 <- length(g3)

# absolute differences of each observation from its own group mean
data1 <- c(g1, g2, g3)
mean1 <- c(rep(mean(g1), len1), rep(mean(g2), len2), rep(mean(g3), len3))
value <- abs(data1 - mean1)
group1 <- c(rep("Group1", len1), rep("Group2", len2), rep("Group3", len3))
my.dataframe <- data.frame(value, group1)

# ANOVA over the differences (its F test is Levene's test), then Tukey HSD
res.aov <- aov(value ~ group1, data = my.dataframe)
summary(res.aov)
TukeyHSD(res.aov)
It gives the same results as the Tukey HSD table in the Levene's calculator.
 
#33
Probably not elegant: The following does in R the equivalent of the Levene's test, plus a Tukey HSD for variances on the differences ... It gives the same results as the Tukey HSD table in the Levene's calculator.
obh

Hi, coming in a bit late to the action. I am a friend of @zachros and have been consulting with him on this research. I like @obh's idea of using the deviations. After all, the variance is the average squared deviation, so it makes sense to me, but I'm always wary of manipulating statistical tests in a non-standard way. My question is: shouldn't your input variable be the squared deviation rather than the absolute deviation?

Below, I used the chickwts data to create some artificial data. I use "feed" as the sub-group variable and create a new higher-level grouping, feed_group, with two feeds in each of three groups (six feeds total). We want to examine whether the variance of a "typical feed" chick weight (that is the variable of interest) differs between the feed_groups I created; that is, whether a given feed_group's feeds tend to have more variability in their weights than the others.

Note that I use a nested model where feed is nested in feed_group (and feed ideally should be a random rather than a fixed effect, since that matches the application @zachros is going for). The feed_groups themselves are fixed, so we have random effects (feed) nested within fixed effects (feed_group), since each feed belongs to only one feed_group.

Code:
data(chickwts)

#there are six types of feed. This will be our sub-group, so now create three
#artificial larger groupings 1, 2, 3

chickwts$feed_group <- 1
chickwts$feed_group[ chickwts$feed %in% c("soybean", "sunflower")] <- 2
chickwts$feed_group[ chickwts$feed %in% c("meatmeal", "casein")] <- 3
chickwts$feed_group <- as.factor(chickwts$feed_group)


#subtract each feed's mean and square; the ANOVA then compares the average variance in each sub-group
chickwts$squared_wt_dev <- 0
feeds <- unique(chickwts$feed)
for (ff in feeds) {
    ss <- chickwts$feed == ff
    mu <- mean(chickwts$weight[ ss ])
    chickwts$squared_wt_dev[ ss ] <- (chickwts$weight[ ss ] - mu)^2
}  

#I am not sure how to specify this model correctly.  Is it?
mod_variance2 <- aov(squared_wt_dev ~ feed_group + (feed_group/feed), data=chickwts)
summary(mod_variance2)
#                 Df    Sum Sq  Mean Sq F value Pr(>F)
#feed_group        2  41388193 20694097   1.750  0.182
#feed_group:feed   3   9168419  3056140   0.258  0.855
#Residuals        65 768562648 11824041

summary.lm(mod_variance2)


Call:
aov(formula = squared_wt_dev ~ feed_group + (feed_group/feed), 
    data = chickwts)

Residuals:
   Min     1Q Median     3Q    Max 
 -3775  -2275  -1047   1588  11524 

Coefficients: (12 not defined because of singularities)
                          Estimate Std. Error t value Pr(>|t|)  
(Intercept)                2501.19     992.64   2.520   0.0142 *
feed_group2                -314.94    1403.81  -0.224   0.8232  
feed_group3                1304.56    1403.81   0.929   0.3562  
feed_group1:feedhorsebean -1158.43    1472.32  -0.787   0.4343  
feed_group2:feedhorsebean       NA         NA      NA       NA  
feed_group3:feedhorsebean       NA         NA      NA       NA  
feed_group1:feedlinseed         NA         NA      NA       NA  
feed_group2:feedlinseed         NA         NA      NA       NA  
feed_group3:feedlinseed         NA         NA      NA       NA  
feed_group1:feedmeatmeal        NA         NA      NA       NA  
feed_group2:feedmeatmeal        NA         NA      NA       NA  
feed_group3:feedmeatmeal     23.43    1435.36   0.016   0.9870  
feed_group1:feedsoybean         NA         NA      NA       NA  
feed_group2:feedsoybean     534.43    1352.74   0.395   0.6941  
feed_group3:feedsoybean         NA         NA      NA       NA  
feed_group1:feedsunflower       NA         NA      NA       NA  
feed_group2:feedsunflower       NA         NA      NA       NA  
feed_group3:feedsunflower       NA         NA      NA       NA  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3439 on 65 degrees of freedom
Multiple R-squared:  0.06172,   Adjusted R-squared:  -0.01045 
F-statistic: 0.8551 on 5 and 65 DF,  p-value: 0.5161

###### 

library(nlme)
# I think this is correct
mod_variance2 <- lme(fixed=squared_wt_dev ~ feed_group, random=~1|feed/feed_group, data=chickwts)
anova(mod_variance2)

            numDF denDF  F-value p-value
(Intercept)     1    65 47.09377  <.0001
feed_group      2     3  1.80936  0.3052

Linear mixed-effects model fit by REML
 Data: chickwts 
       AIC      BIC    logLik
  1319.622 1332.939 -653.8112

Random effects:
 Formula: ~1 | feed
        (Intercept)
StdDev:   0.1890072

 Formula: ~1 | feed_group %in% feed
        (Intercept) Residual
StdDev:   0.1890256 3381.896

Fixed effects: squared_wt_dev ~ feed_group 
                Value Std.Error DF   t-value p-value
(Intercept) 1974.6295  721.0227 65 2.7386510  0.0080
feed_group2  499.3837  979.6769  3 0.5097433  0.6454
feed_group3 1842.3194 1008.5357  3 1.8267270  0.1652
 Correlation: 
            (Intr) fd_gr2
feed_group2 -0.736       
feed_group3 -0.715  0.526

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-1.1194239 -0.6979622 -0.4303324  0.3641095  3.4112561 

Number of Observations: 71
Number of Groups: 
                feed feed_group %in% feed 
                   6                    6
However, I'm not sure how to analyze this further, or whether I have specified the model correctly.
 

obh

Active Member
#34
I like @obh 's idea of using the deviations. After all, the variance is the average squared deviation, it seems to make sense to me but I'm always wary of trying to manipulate statistical tests in a non-standard way.
I also don't like to use non-standard methods.
I believe it should be okay, and I wrote out the logic behind it above.
You may open a new question to get another opinion.

My question is shouldn't your input variable be the squared deviation and not the absolute deviation?
Levene's test runs the ANOVA over the absolute differences, and Levene's test with two groups is equivalent to a t-test over the differences.
Tukey HSD is a replacement for the t-test when making multiple comparisons, so it should run over the same absolute differences.
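(A minimal check of that claim: with only two groups, the Tukey HSD adjusted p-value reduces exactly to the pooled t-test's p-value.)

Code:
# with two groups, Tukey HSD and the pooled t-test agree exactly
set.seed(1)
dev <- c(abs(rnorm(10)), abs(rnorm(10, sd = 3)))
grp <- factor(rep(c("Group1", "Group2"), each = 10))
TukeyHSD(aov(dev ~ grp))$grp                 # see the "p adj" column
t.test(dev ~ grp, var.equal = TRUE)$p.value  # the same value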
 
#35
Levene's test runs the ANOVA over the absolute differences, and Levene's test with two groups is equivalent to a t-test over the differences.
Tukey HSD is a replacement for the t-test when making multiple comparisons, so it should run over the same absolute differences.
Okay, I have verified that you are correct about using the absolute rather than the squared deviations, assuming the online Levene's test calculator is correct. I don't fully understand your explanation for it, but sure. Here is code that confirms it.

Code:
df <- data.frame(grp = rep(1:3, each = 6), x = c(1:6, 2:7, 3:8))
df$grp <- factor(df$grp)

# attach each group's mean to every row of that group
df$grp_mean <- 0
for (gg in 1:3) {
    ss <- df$grp == gg
    df$grp_mean[ ss ] <- mean(df$x[ ss ])
}

df$abs_dev <- abs(df$x - df$grp_mean)
df$sq_dev <- df$abs_dev^2

mod1 <- aov(abs_dev ~ grp, data=df)
TukeyHSD(mod1)

  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = abs_dev ~ grp, data = df)

$grp
             diff       lwr      upr p adj
2-1 -2.220446e-16 -1.341328 1.341328     1
3-1 -2.220446e-16 -1.341328 1.341328     1
3-2  0.000000e+00 -1.341328 1.341328     1
[attached image: 1586106463883.png, the online calculator's Tukey HSD table]

You get the same results from the online calculator (see the lower CI and upper CI columns of the multiple comparisons table).

Now, back to my additional question. What @zachros is actually interested in testing (like I said, I've been working with him) is this: for several items in each group, each with multiple measurements, does the within-item variance (calculated separately for each item) differ between groups? The multiple comparisons test is to test these differences in variances between pairs of groups. In my chickwts example above, the top groups are the 3 "feed_groups", each of which contains two feeds (items), and for each feed we have a set of weight measurements. He wants to calculate the variance for each feed (giving two variances per feed_group), then compare pairs of feed_groups (multiple comparisons) to learn whether one feed_group has substantially higher internal feed-weight variance. Again, this is just an artificial example to illustrate.

As I understand it, the feed_groups are fixed effects, and the items (feeds) should be treated as independent draws from some relevant population. Since the feeds are not shared across feed_groups, this should be treated as random effects nested within fixed effects.
Here is how I propose to do this (note the fix so we use the absolute deviation):

Code:
chickwts$feed_group <- 1
chickwts$feed_group[ chickwts$feed %in% c("soybean", "sunflower")] <- 2
chickwts$feed_group[ chickwts$feed %in% c("meatmeal", "casein")] <- 3
chickwts$feed_group <- as.factor(chickwts$feed_group)

#subtract the group mean and take the absolute value; the ANOVA then
#compares the average absolute deviation in each sub-group
chickwts$abs_wt_dev <- 0
feeds <- unique(chickwts$feed)
for (ff in feeds) {
    ss <- chickwts$feed == ff
    mu <- mean(chickwts$weight[ ss ])
    chickwts$abs_wt_dev[ ss ] <- abs(chickwts$weight[ ss ] - mu)
}

#now conduct ANOVA with random effects nested in fixed effects

library(lme4)  # lmer() comes from lme4, not nlme
# estimate the mean deviation within each feed_group, with feeds nested within it as random effects
nested_model <- lmer(abs_wt_dev ~ feed_group + (1 | feed_group:feed), data=chickwts)
summary(nested_model)

Linear mixed model fit by REML ['lmerMod']
Formula: abs_wt_dev ~ feed_group + (1 | feed_group:feed)
   Data: chickwts

REML criterion at convergence: 671.2

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.4883 -0.8318 -0.2510  0.6110  2.2818 

Random effects:
 Groups          Name        Variance Std.Dev.
 feed_group:feed (Intercept)   0       0.00   
 Residual                    985      31.39   
Number of obs: 71, groups:  feed_group:feed, 6

Fixed effects:
            Estimate Std. Error t value
(Intercept)  37.0636     6.6914   5.539
feed_group2  -0.1351     9.0918  -0.015
feed_group3  15.2295     9.3596   1.627

Correlation of Fixed Effects:
            (Intr) fd_gr2
feed_group2 -0.736       
feed_group3 -0.715  0.526
convergence code: 0
boundary (singular) fit: see ?isSingular


# now that the model is fitted, we can conduct Tukey multiple comparisons
library(multcomp)
mc <- glht(nested_model, mcp(feed_group="Tukey"))
#calculate confidence intervals, which is another way of doing the test
confint(mc)

Linear Hypotheses:
           Estimate lwr      upr     
2 - 1 == 0  -0.1351 -21.4420  21.1719
3 - 1 == 0  15.2295  -6.7051  37.1641
3 - 2 == 0  15.3646  -5.6900  36.4192
Now, the fact that all of the pairwise confidence intervals include zero means that the feed_groups do not differ significantly in this respect.

Can someone confirm that my specification of the model is correct? I tried it with aov as well (below) and got similar, but not identical, results for the multiple comparison intervals.

Code:
mod_variance2 <- aov(abs_wt_dev ~ feed_group + (feed_group/feed), data=chickwts)
TukeyHSD(mod_variance2)

$feed_group
          diff        lwr      upr     p adj
2-1 -0.1350649 -22.214840 21.94471 0.9998813
3-1 15.2295125  -7.500676 37.95970 0.2499582
3-2 15.3645775  -6.453651 37.18281 0.2170815
 
#36
Assuming what I did above is correct, I am having some trouble interpreting the estimates. The estimates of the pairwise differences should be ones I can calculate empirically, correct? I assume the Tukey procedure only affects the standard errors of the pairwise differences, not the estimates. For instance, here I calculate the variance of each feed's weight.

Code:
df_variance <- aggregate(chickwts$weight, by=list(chickwts$feed_group, chickwts$feed), FUN=var)
colnames(df_variance) <- c("feed_group", "feed","weight_variance")

df_obs <- aggregate(chickwts$weight, by=list(chickwts$feed, chickwts$feed_group), FUN=length)
nobs <- aggregate(df_obs$x, by=list(df_obs$Group.2), FUN=sum)
df_variance$wts <- (df_obs$x / rep(nobs$x, each=2))^2

# df_variance
#  feed_group      feed weight_variance       wts
#1          3    casein        4151.720 0.2066116
#2          1 horsebean        1491.956 0.2975207
#3          1   linseed        2728.568 0.2899408
#4          3  meatmeal        4212.091 0.2130178
#5          2   soybean        2929.956 0.2722117
#6          2 sunflower        2384.992 0.2287335
Here, weight_variance is the variance of "weight" for each feed. I want to calculate a composite variance within each feed_group by taking a weighted average of the variances of the feeds in that feed_group; "wts" is a multiplier reflecting the number of observations in each. Shouldn't we be able to recover the values of the Tukey pairwise differences from these? For instance, how do I get the value -0.1350649 for feed_group 2 vs 1? I would think the estimates are interpreted as the increase in the variance for an item within each feed_group.
 

obh

Active Member
#37
Code:
mod_variance2 <- aov(abs_wt_dev ~ feed_group + (feed_group/feed), data=chickwts)
TukeyHSD(mod_variance2)

$feed_group
          diff        lwr      upr     p adj
2-1 -0.1350649 -22.214840 21.94471 0.9998813
3-1 15.2295125  -7.500676 37.95970 0.2499582
3-2 15.3645775  -6.453651 37.18281 0.2170815

I am having some trouble interpreting the estimates correctly
The absolute differences from the group's average represent the group's variability.

The difference between the "average of the differences" of Group1 and the "average of the differences" of Group2 equals -0.1350649.
The difference is not significant, since the p-value is bigger than 0.05 (assuming you use a significance level of 0.05).
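(A minimal check of this against the chickwts data and the abs_wt_dev column defined earlier in the thread:)

Code:
# the Tukey "diff" column is the difference of the group means of abs_wt_dev
m <- with(chickwts, tapply(abs_wt_dev, feed_group, mean))
m[2] - m[1]  # -0.1350649, matching the 2-1 row of the Tukey table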
 
#38
The absolute differences from the group's average represent the group's variability.

The difference between the "average of the differences" of Group1 and the "average of the differences" of Group2 equals -0.1350649.

The difference is not significant, since the p-value is bigger than 0.05 (assuming you use a significance level of 0.05).

Okay, that helps; I'm just trying to verify it. Below, I calculate the standard deviation for each feed (=Group.2), nested within feed_group (=Group.1). The "x" column is the calculated standard deviation.

Code:
feed_variance <- aggregate(chickwts$weight, by=list(chickwts$feed_group, chickwts$feed), FUN=sd)  # note: FUN=sd, so these are standard deviations
feed_variance
#  Group.1   Group.2        x
#1       3    casein 64.43384
#2       1 horsebean 38.62584
#3       1   linseed 52.23570
#4       3  meatmeal 64.90062
#5       2   soybean 54.12907
#6       2 sunflower 48.83638

Now let's look at the observed Tukey difference for feed_group 3-1, which is 15.2295125. How do we get that? You're saying it should be the observed mean of the differences between the groups. If I take the SDs of each feed in Group 3 and subtract the SDs of each feed in Group 1, I get 4 differences. Is that what you're supposed to do? Or is it some weighted version of this (according to the sample sizes of each feed)?

Code:
#Variances of feeds in groups 3 and 1
group3 <- feed_variance$x[ feed_variance$Group.1 == 3]
#64.43384 64.90062
group1 <- feed_variance$x[ feed_variance$Group.1 == 1]
#38.62584 52.23570

#calculate difference of each pair
# i.e., 64.4 - 38.6, 64.4 - 52.2, 64.9 - 38.6, 64.9 - 52.2
pair_diff <- rep(group3, each=2) - rep(group1, 2)
#25.80800 12.19814 26.27478 12.66492
mean(pair_diff)
# 19.23646
So I get 19.23646 instead of 15.2295125. I'm not sure how to calculate it; I just want to verify this is doing the right thing. Thanks for your guidance on this!
 

obh

Active Member
#39
Hi,

I already gave an example of how to calculate the average of the differences:

Example of how to calculate the differences:
Using the average as the center
Group1: [1,2,3,2,3,4,4]; the average is 2.71429
The differences from the center are: [abs(1-2.71429), abs(2-2.71429), abs(3-2.71429), abs(2-2.71429), ...]
Now take the average of that list.

The average of the differences is not equal to the variance of the group. Was that your question?
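(To connect this to the numbers above, a minimal sketch using the chickwts data and the abs_wt_dev column from earlier posts: the Tukey 3-1 estimate is the difference of the group means of the absolute differences, not an average of pairwise SD differences.)

Code:
# reproduce the Tukey 3-1 estimate from the mean absolute deviations
m <- with(chickwts, tapply(abs_wt_dev, feed_group, mean))
m[3] - m[1]  # 15.2295125, the Tukey "diff" for 3-1
# for normal data the mean absolute deviation is roughly 0.8 * SD,
# which is why 15.23 is close to 0.8 * 19.24 from the SD-based calculation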