Science knowledge has often been measured with a set of questions that include these items about evolution and the big bang:
1. Human beings, as we know them today, developed from earlier species of animals. Is that true or false?
2. The universe began with a huge explosion. Is that true or false?
The National Science Board removed a discussion of public responses to these items about evolution and the big bang from the National Science Foundation’s Science and Engineering Indicators 2010, a removal that drew strong criticism from some scientists (see here and here). National Science Board member Louis Lanzerotti defended the decision, claiming that the evolution and big bang items were “flawed indicators of scientific knowledge because the responses conflated knowledge and beliefs” (Bhattacharjee 2010).
Discussion of responses to the evolution and big bang items was reinstated in the 2012 edition of the Indicators, but disagreement remains about whether questions about evolution and the big bang belong on a science knowledge exam alongside questions about undisputed facts, such as whether electrons are smaller than atoms.
To provide more data on this issue, I conducted a factor analysis of sixteen science exam questions that were asked on National Science Foundation surveys from 1988 to 2001 for which full data were available.
Factor analysis, as Rummel describes it, “… takes thousands and potentially millions of measurements and qualitative observations and resolves them into distinct patterns of occurrence” (Rummel 1967, p. 445).
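As a rough sketch of the procedure (in Python rather than the Stata used for the reported analysis), a principal components factor analysis amounts to an eigendecomposition of the item correlation matrix. The data below are simulated stand-ins for the survey responses, and the eigenvalue-greater-than-one retention rule (the Kaiser criterion) is one common choice, not necessarily the one used in the reported analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated stand-in data: 1,000 respondents by 16 binary items
# (the real analysis uses the NSF survey responses).
X = rng.integers(0, 2, size=(1000, 16)).astype(float)

# Principal components factor analysis: eigendecomposition of the
# item correlation matrix, keeping factors with eigenvalue > 1.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = int((eigvals > 1.0).sum())             # Kaiser criterion
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print("factors retained:", k)
print("loadings shape:", loadings.shape)
```

With real survey data, the retained factors would then typically be rotated (e.g., varimax) before interpretation.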
The sixteen questions used in the science exam are listed in the table below.
Blue cells indicate questions measuring an understanding of probability, red cells indicate religiously neutral factual questions, and white cells indicate factual questions that are, have been, or might be perceived to be in conflict with religious teaching.
The table below presents results of a principal components factor analysis that retained four factors:
Factor analysis reports patterns but does not name or otherwise identify them, so the patterns must be interpreted. The four probability questions unsurprisingly loaded on the same factor, so an understanding of probability appears to be a separate dimension of science knowledge. The factual recall questions were spread among three factors, but factor 3 contained the questions about evolution, the big bang, and continental drift, each of which is perceived by some to be in conflict with biblical creation accounts.
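One simple way to read a loadings matrix is to assign each item to the factor on which it loads most heavily in absolute value; items that end up grouped together suggest a shared underlying dimension. The loadings below are randomly generated placeholders, not the reported results:

```python
import numpy as np

# Placeholder loadings matrix (16 items by 4 factors); in the real
# analysis these values come from the rotated factor solution.
rng = np.random.default_rng(1)
loadings = rng.uniform(-1, 1, size=(16, 4))

# Assign each item to the factor on which it loads most strongly
# in absolute value.
assignment = np.abs(loadings).argmax(axis=1)
for factor in range(loadings.shape[1]):
    items = np.flatnonzero(assignment == factor).tolist()
    print(f"factor {factor}: items {items}")
```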
This third factor therefore might tap a religious dimension of science knowledge, so I repeated the analysis after adding responses from a question that measured religion-science conflict more directly:
We depend too much on science and not enough on faith. Do you strongly agree, agree, disagree, or strongly disagree?
The two agree responses were collapsed into one category and the two disagree responses into another, forming a binary variable.
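That recoding step might look like the following; the response labels are assumptions about how the survey coded the answers, and non-responses are left as missing rather than recoded:

```python
# Collapse the four-point faith-versus-science item into a binary
# variable. The response labels here are assumptions about the
# survey's coding.
def collapse(response):
    if response in {"strongly agree", "agree"}:
        return 1          # leans toward faith over science
    if response in {"strongly disagree", "disagree"}:
        return 0          # leans toward science over faith
    return None           # non-response / don't know (left missing)

print(collapse("agree"))              # 1
print(collapse("strongly disagree"))  # 0
```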
Results shown below indicate that the religion-science conflict question loaded onto factor 3. Perhaps this third factor taps not science knowledge so much as trust in science or scientists; after all, the facts underlying the factual questions in this factor, about evolution, the big bang, and continental drift, are disputed between scientists and religious authorities. The big bang item, for instance, might be measuring whether a person trusts science or religion to answer questions about the origin of the universe.
This analysis does not resolve the definitional question of whether science literacy requires acceptance of a scientific consensus, but it does suggest that science literacy is not a unidimensional concept and that there might be value in separating the dimensions of science knowledge, much as the SAT separates verbal and math scores.
- Non-responses and don’t know responses for the science knowledge questions were coded as incorrect responses.
- There were 13,503 observations across the survey years of 1988, 1990, 1992, 1995, 1997, 1999, and 2001, and all of these observations were used for the first factor analysis. There were only 12,796 responses to the question about faith and science, so the second factor analysis had this lower number of observations.
- Three questions on genes and experiments were not included in the analysis because they were not asked in 1988 and 1990. Questions on radiation and on the center of the Earth being hot were not included because they had fewer observations than the other questions in 1990. Including these five questions does not change the inference that disputed items load onto a separate factor or that the science-religion conflict item loads on the disputed-questions factor, but it does add a fifth and sixth factor in some cases.
- The Stata do file for the analysis is located here, and the dataset can be downloaded here. The citation for the data is: Miller, Jon D., Linda Kimmel, ORC Macro and NORC. National Science Foundation Surveys of Public Attitudes Toward And Understanding of Science And Technology, 1979-2006 [Computer file]. 3rd Roper Center version. Tallahassee, FL: Susan Carol Losh, Florida State University, Department of Educational Psychology & Learning Systems/Arlington, VA: National Science Foundation, Division of Science Resources Statistics/Arlington, VA: American Statistical Association [producers], 2009. Storrs, CT: Roper Center for Public Opinion Research [distributor], 2009.
- The variables in this analysis are binary (correct or incorrect, faith or science), so a factor analysis based on tetrachoric correlations is preferable to the ordinary factor analysis reported above. M-Plus results that account for the binary nature of the variables are presented below, confirming the main finding of a separate factor for the religiously tinged items.
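For readers without access to M-Plus or Stata, a quick way to approximate the tetrachoric correlation between two binary items from their 2x2 cross-tabulation is Pearson's cosine-pi formula; this is only an approximation to the full maximum-likelihood estimate that tetrachoric software computes:

```python
import math

def tetrachoric_approx(a, b, c, d):
    """Pearson's cosine-pi approximation to the tetrachoric
    correlation from a 2x2 table of two binary items:
    a = both correct, d = both incorrect, b and c = discordant cells."""
    if b == 0 or c == 0:
        return 1.0
    odds_ratio = (a * d) / (b * c)
    return math.cos(math.pi / (1 + math.sqrt(odds_ratio)))

# Independent items (odds ratio of 1) yield a correlation near zero.
print(round(tetrachoric_approx(25, 25, 25, 25), 3))  # 0.0
```

A matrix of such pairwise values could then be factor-analyzed in place of the ordinary Pearson correlation matrix.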