Composite reliability vs Cronbach's alpha

So, yeah ... in conclusion, only the devil knows what SmartPLS is doing here. There is no basis in the mathematical theory of reliability for these least-squares estimates. To begin with, they do not even partition error variance! OK, I'll stop now, because I could go on all day about the ways PLS-SEM is a mistake to use as a replacement for traditional ML-SEM.
Hi, when I saw your question I searched online to clarify it. This explanation could be interesting for you. I hope it helps.

Here is what I found:

Cronbach's alpha is one of the most misused and misunderstood tests. Some practitioners do not differentiate between RELIABILITY and CONSISTENCY. Reliability refers to the quality of the instrument (survey) to produce trustworthy data; consistency refers to the predictability of the data. Cronbach's alpha speaks to the consistency of the responses in the survey, i.e. a measure of consistency for a particular tagged question. For example, suppose a survey contains 50 questions, a researcher wants to test the consistency of question #10, and 200 surveys are returned. Here n = 200, and the answer scores for question #10 are tabulated for Cronbach's alpha testing. The test uses a scale between 0 and 1.00, hence a ratio test. Assume that the answers to #10 among the 200 samples score high (say 0.80) on the Cronbach's alpha test. What does that mean? It means that the answers to #10 are 0.80 consistent on a 1.00 scale. Does it tell us anything more? No, it does not. In order to find a relationship, for instance, at least two variables are required. Nevertheless, practitioners mistake it for a test of reliability.
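To make the mechanics concrete, here is a minimal sketch of the standard Cronbach's alpha computation (the textbook item-covariance form, applied across k items rather than the single-question tabulation described above; the data are made up for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Standard Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering 3 Likert items (1-5)
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
    [3, 2, 3],
])
print(round(cronbach_alpha(scores), 3))  # -> 0.929
```

Note that a high alpha here still only says the items covary consistently; it says nothing about whether the instrument is measuring the right thing, which is the reliability point made above.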
As for the issue of reliability, Cronbach's alpha does not help. In fact, it has nothing to do with reliability. Take for example a researcher who uses a Likert scale, i.e. 1 = lowest, ..., 5 = highest. How can the issue of reliability be addressed? We must not ask "is the SURVEY reliable?" We must ask "is the INDIVIDUAL QUESTION reliable?" In this case, if a conventional 95% confidence interval is used, a Likert scale of 1 - 5 fails because it can achieve only 80%, i.e. the expected error is 0.20: E = [n - n(1 - df*a)]/n, where n = number of answer choices in the question, df = n - 1, and a = 0.05 (the precision level). Reliability must come from instrument calibration.
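The expected-error formula above simplifies algebraically to E = df*a, which is where the 80% figure for a 5-point scale comes from. A quick numeric check of that claim (the function name is mine, for illustration only):

```python
def expected_error(n, a=0.05):
    """Poster's expected-error formula for an n-choice Likert item:
    E = [n - n(1 - df*a)]/n with df = n - 1, which reduces to df*a."""
    df = n - 1
    return (n - n * (1 - df * a)) / n  # algebraically equal to df * a

e = expected_error(5)  # 5-point Likert scale at a = 0.05
print(e)               # about 0.20, i.e. only 1 - 0.20 = 80% achievable
```

So under this argument, the only way to drive E below 0.05 at a = 0.05 is to use an item with two answer choices (df = 1), which is the basis of the calibration point.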