ok, so a few comments (while this is still relevant, lol) that I couldn't bring up before because they were going to take some time.

I didn't think that Raykov's composite reliability was the same as coefficient omega. But maybe they're connected in a way that I didn't know?

lemme quote from one of the most comprehensive bibles of SEM out there: the EQS Manual (we're going old school, yaaay!). On page 119, where it talks about reliability, it says:

*Cronbach’s alpha (1951) is well-known. Developed for EQS, rho is based on the latent variable model being run. This could be any type of model with additive error variances. When this is a factor model, this gives Bentler’s (1968, eq. 12) or, equivalently, Heise and Bohrnstedt’s (1970, eq. 32) omega for an equally-weighted composite. With the /RELIABILITY command, it is based on a one-factor model (Raykov, 1997ab), and then rho is the same as McDonald’s (1999, eq. 6.20b) coefficient omega. Note that McDonald’s omega is a special case of Heise and Bohrnstedt’s omega.*
this is why i said that, for practical purposes, they're all sorta doing the same thing, particularly because reliability is usually thought about within the (hopefully) one-factor, tau-equivalent model.
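just to make that concrete, here's a quick sketch (values chosen to match the simulation below, i.e. loadings of .7 and error variances of .51, so this isn't anything official, just an illustration) showing that, in the population, McDonald's omega and Cronbach's alpha coincide under a one-factor, tau-equivalent model:

```r
# Population illustration: under a one-factor, tau-equivalent model
# (equal loadings), coefficient omega equals Cronbach's alpha.
# lambda = .7 and theta = .51 are assumed values matching the sim below.
lambda <- rep(.7, 4)   # equal loadings (tau-equivalence)
theta  <- rep(.51, 4)  # error variances (so each item variance = 1)

# McDonald's omega: (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
omega <- sum(lambda)^2 / (sum(lambda)^2 + sum(theta))

# Cronbach's alpha computed from the model-implied covariance matrix
Sigma <- tcrossprod(lambda) + diag(theta)  # implied item covariance matrix
k <- nrow(Sigma)
alpha <- k / (k - 1) * (1 - sum(diag(Sigma)) / sum(Sigma))

round(c(omega = omega, alpha = alpha), 4)  # both come out ~ .7935
```

in the population they're identical here; the whole question is what happens in *samples*, which is where the simulation comes in.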

Interesting, I didn't realize that the lower bound thing only held in the limit with population values.

Jake, let me show you a quick simulation i cooked up in lavaan to sort of make my point:

Code:

```
library(lavaan)
library(psych)  # for alpha()

set.seed(123)

# population model: one factor, tau-equivalent (equal loadings),
# with error variances chosen so each item has unit variance
model <- '
f1 =~ .7*V1 + .7*V2 + .7*V3 + .7*V4
f1 ~~ 1*f1
V1 ~~ .51*V1
V2 ~~ .51*V2
V3 ~~ .51*V3
V4 ~~ .51*V4
'

# fitted model: plain one-factor CFA, factor variance fixed to 1
fitted.mod <- '
f1 =~ NA*V1 + V2 + V3 + V4
f1 ~~ 1*f1
'

reps <- 1000
rho  <- double(reps)
alfa <- double(reps)

for (i in 1:reps) {
  datum <- simulateData(model, sample.nobs = 100)
  run1  <- cfa(fitted.mod, datum)
  Sigma.hat   <- fitted.values(run1)$cov             # model-implied covariance matrix
  Sigma.error <- inspect(run1, "coef")$theta         # error (co)variance matrix
  rho[i]  <- 1 - sum(Sigma.error) / sum(Sigma.hat)   # composite reliability (omega/rho)
  alfa[i] <- alpha(datum)$total$raw_alpha            # Cronbach's alpha
}

# percentage of replications where alpha exceeds rho
(sum(alfa > rho) / reps) * 100
```

notice some VERY nice things here. a one-factor, tau-equivalent model HOLDS in the population (that's the 'model' object). a one-factor model is then *fitted* to samples of size 100, one thousand times, and both rho (or omega, or composite reliability, or whatever you wanna call it) and Cronbach's alpha are calculated, so i end up with 1000 rhos and 1000 alphas.

then notice that i calculate the proportion of times that alpha is greater than rho. my R gave me somewhere around **6.5%**. so **6.5%** of the time in this simulation, alpha is greater than rho. sure, it isn't THAT much, particularly because it's only 6.5% out of 1000 replications... but please keep in mind that we're talking about the most ideal of ideal cases, the case where ALL the assumptions for alpha to be equal to or less than rho hold! so the point i'm trying to make is this: if, even for data where all the conditions are met for alpha to be an accurate estimate of rho (or at least a lower bound on rho), you can still find instances where alpha is *greater* than rho, then it really is to be expected that, every now and then, alpha will be greater than rho for no other reason than sampling variability.

now consider the case the OP presents... with TEN factors! and i'm willing to bet my brownies it doesn't even fit the data (by the chi-square test)! that's why i think CowboyBear hits the nail on the head by saying that, for practical purposes, we don't really know anything about the relationship between alpha and rho when working with real data.
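and checking that fit claim is a one-liner, by the way. here's a quick sketch (on a simulated dataset, since obviously i don't have the OP's data; model and sample size are just the ones from my sim above) of how you'd eyeball the chi-square test in lavaan:

```r
# Illustrative fit check on ONE simulated dataset -- not the OP's data.
# Population model and n = 100 are taken from the simulation above.
library(lavaan)

set.seed(123)
pop <- 'f1 =~ .7*V1 + .7*V2 + .7*V3 + .7*V4'
dat <- simulateData(pop, sample.nobs = 100)

# fit a one-factor CFA and pull the chi-square test of exact fit
fit <- cfa('f1 =~ V1 + V2 + V3 + V4', dat)
fitMeasures(fit, c("chisq", "df", "pvalue"))
```

if that p-value is tiny for your real data, the one-factor model is rejected, and then whatever "reliability" number you compute from it (alpha OR omega) is describing a model that doesn't hold.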