The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach’s alpha and the route towards more comprehensive assessment of scale quality

Authors

  • Gjalt-Jorn Y. Peters, Faculty of Psychology and Educational Science, Open University, the Netherlands

Abstract

Health psychologists using questionnaires rely heavily on Cronbach’s alpha as an indicator of scale reliability and internal consistency. Cronbach’s alpha is often viewed as some kind of quality label: high values certify scale quality, low values prompt removal of one or several items. Unfortunately, this approach suffers from two fundamental problems. First, Cronbach’s alpha is both unrelated to a scale’s internal consistency and a fatally flawed estimate of its reliability. Second, the approach itself assumes that scale items are repeated measurements, an assumption that is often violated and rarely desirable. The problems with Cronbach’s alpha are easily solved by computing readily available alternatives, such as the Greatest Lower Bound or Omega. Solving the second problem, however, is less straightforward. This requires forgoing the appealing comfort of a quantitative, seemingly objective indicator of scale quality altogether, and instead acknowledging the dynamics of reliability and validity and the distinction between scales and indices. In this contribution, I will explore these issues and provide recommendations for scale inspection that takes these dynamics and this distinction into account.
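
As a minimal illustration of how readily available these alternatives are, the sketch below computes them in R (R Development Core Team, 2014) using the psych package (cf. Revelle & Zinbarg, 2009). The data frame name dat is a placeholder for a researcher’s own item data; this is only a sketch of existing estimation functions, not the full scale inspection procedure recommended in the article.

  library(psych)                                 # install.packages("psych") if needed

  # dat: a data frame containing only the items of the scale under inspection
  alpha(dat)                                     # Cronbach's alpha, for comparison
  omega(dat)                                     # omega, based on a factor-analytic (congeneric) model
  glb.algebraic(cov(dat, use = "complete.obs"))  # Greatest Lower Bound, from the item covariance matrix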

References

Bartholomew, L. K., Parcel, G. S., Kok, G., Gottlieb, N. H., & Fernández, M. E. (2011). Planning health promotion programs: An Intervention Mapping approach (3rd ed.). San Francisco, CA: Jossey-Bass.

Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and application. Journal of Applied Psychology, 78(1), 98–104.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.

Dunn, T. J., Baguley, T., & Brunsden, V. (2013). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology. Advance online publication. doi:10.1111/bjop.12046

Field, A., Miles, J., & Field, Z. (2012). Discovering Statistics Using R. London: Sage Publications Ltd.

Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York: Psychology Press.

Graham, J. M. (2006). Congeneric and (essentially) tau-equivalent estimates of score reliability: What they are and how to use them. Educational and Psychological Measurement, 66(6), 930–944.

Luszczynska, A., Scholz, U., & Schwarzer, R. (2005). The general self-efficacy scale: multicultural validation studies. The Journal of Psychology, 139(5), 439–457. doi:10.3200/JRLP.139.5.439-457

Peters, G.-J. Y., Abraham, C. S., & Crutzen, R. (2012). Full disclosure: doing behavioural science necessitates sharing. The European Health Psychologist, 14(4), 77–84.

R Development Core Team. (2014). R: A Language and Environment for Statistical Computing. Vienna, Austria. Retrieved from http://www.r-project.org/

Revelle, W., & Zinbarg, R. E. (2009). Coefficients alpha, beta, omega, and the glb: Comments on Sijtsma. Psychometrika, 74(1), 145–154. doi:10.1007/s11336-008-9102-z

Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. doi:10.1007/s11336-008-9101-0

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632

Published

2014-04-01

Section

Original Articles