Code:

```
library(dplyr)
options(scipen = 999)

# Simulate data: each subject gets `reps` measurements drawn from a normal
# distribution with the given mean and a per-subject SD drawn from Uniform(0, within_sd)
make_y_values <- function(N, mean = 140, within_sd = 1, reps = 1) {
  subs <- c()
  ys <- c()
  repeats <- c()
  for (s in 1:N) {
    subs <- c(subs, rep(s, reps))
    y <- rnorm(reps, mean, sd = runif(1, 0, within_sd))
    repeats <- c(repeats, 1:reps)
    ys <- c(ys, y)
  }
  data.frame(subject = subs, repeats = repeats, y = ys)
}

pre  <- make_y_values(N = 20, mean = 140, within_sd = 10, reps = 6)
post <- make_y_values(N = 20, mean = 140, within_sd = 1,  reps = 6)
pre$session  <- 0
post$session <- 1
df <- rbind(pre, post)

# Per-subject, per-session means and SDs
grp <- df %>%
  group_by(subject, session) %>%
  summarise(y_mean = mean(y), y_sd = sd(y), .groups = "drop")
grp
boxplot(y_mean ~ session, data = grp)

# Repeated-measures ANOVA
library(rstatix)
ANOVA <- anova_test(
  data = df,
  dv = "y",
  wid = "subject",
  within = c("repeats", "session")
)
print(ANOVA)
```

The example experiment might be something like an intervention for blood pressure treatment. We have 20 patients come into our clinic at time 1 (before the drug). Their blood pressure is measured 6 times, giving a spread of values with a mean and SD. Patients go home and take our magic drug for some time before coming back at time 2 for another 6 measurements. So we now have pre-drug and post-drug means and SDs of blood pressure. And surprise: the means did not differ, so we say the treatment failed. But then we spot that the variances (or SDs) look very different, much smaller at time 2. This seems useful, but how would we analyse something like this? ANOVA does not capture it.

I think we can use an F test to compare the variances, but is it useful? Also, what if we have a more complicated design (one that I am not able to code)? What is the approach then?
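For the simple two-group case, the F test you mention is base R's `var.test`. A minimal sketch on simulated vectors (the vector names and numbers here are made up for illustration); note that `var.test` assumes two independent, normally distributed samples, so with repeated measures on the same patients it is only a rough first pass:

```r
set.seed(42)
# Hypothetical flattened readings: 20 patients x 6 measures per session
pre_vals  <- rnorm(120, mean = 140, sd = 10)  # pre-drug, large spread
post_vals <- rnorm(120, mean = 140, sd = 1)   # post-drug, small spread

# F test for equality of two variances
res <- var.test(pre_vals, post_vals)
res$p.value   # tiny: the variances clearly differ
res$estimate  # ratio of variances, far from 1
```

Because the 6 readings per patient are not independent, a more defensible simple option is to compute one SD per patient per session (as in `grp` above) and compare those per-patient SDs between sessions with a paired test.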