According to science literacy experts…

The National Science Board might revise two science knowledge questions that are currently phrased as follows:

1. Human beings, as we know them today, developed from earlier species of animals. Is that true or false?

2. The universe began with a huge explosion. Is that true or false?

 
The Board is considering respectively prefacing the questions with “according to evolutionary theory” and “according to astronomers.” This proposal has generated controversy, as Yudhijit Bhattacharjee reported:

The change infuriates Jon Miller, a science literacy expert at the University of Michigan, Ann Arbor, and architect of the original questionnaire, which is now used by several countries. “If you are altering the questions in that way, you are doing it for religious reasons,” he says. “We don’t make statements like, ‘According to some economists, we had a recession’ or ‘According to the weatherman, we had a tsunami’” (p. 394).

 
Miller’s examples are not well-chosen analogies to questions about evolution and the big bang. Questions about a recession or a tsunami are typically not prefaced with “according to” because they are definitional questions with commonly accepted definitions; a recession, for example, is conventionally defined as two consecutive quarters of decline in gross domestic product.
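That two-consecutive-quarters rule of thumb is simple enough to state as code. The sketch below is purely illustrative (the function name and the GDP figures are made up), but it shows why the definition leaves so little room for interpretation:

```python
def in_recession(gdp_by_quarter):
    """Return True if the series ends in a recession under the common
    rule of thumb: two (or more) consecutive quarters of declining GDP.
    `gdp_by_quarter` is a list of GDP levels, oldest first."""
    declines = 0
    for prev, curr in zip(gdp_by_quarter, gdp_by_quarter[1:]):
        declines = declines + 1 if curr < prev else 0
    return declines >= 2

# Two straight quarterly declines at the end -> recession
print(in_recession([100, 101, 99, 97]))   # True
# A single down quarter does not qualify
print(in_recession([100, 99, 101, 102]))  # False
```

A respondent either knows this convention or does not; no “according to” hedge is needed.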

But questions about evolution and the big bang are inferential questions that involve interpreting a large number of observations to reach an extremely broad summary judgment. A more suitable analogy from economics might therefore be a question about whether deficit spending necessarily improves an economy in recession. Without an “according to John Maynard Keynes” preface, this sort of question would make some Nobel laureates in economics look financially illiterate.

This is not to suggest that the uncertainty inherent in economics is commensurate with the uncertainty inherent in science, but it is to suggest that the type of question suitable for measuring knowledge of a definition might not be suitable for measuring knowledge of a disputed inference.

But the dispute over adding an “according to” caveat is misguided, because the National Science Board should direct its efforts to developing better measures than the current questions, which require mere familiarity with or acceptance of a summary inference.

Perhaps respondents should instead be asked to cite evidence for the big bang, which would provide a deeper measure of knowledge and would avoid the should-we-measure-belief-or-knowledge issue that the National Science Board is stuck on. Researchers could be much more confident in a person’s level of science knowledge if the person mentioned the redshift, the cosmic microwave background radiation, or the expanding universe than if the person merely agreed with the true-false statement that the universe began in a big explosion.
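As a hypothetical illustration of how such open-ended answers might be scored (nothing here is an actual survey instrument; the keyword list is invented for the example), a crude first pass could credit any answer that mentions a canonical line of evidence:

```python
# Hypothetical scorer for an open-ended "cite evidence for the big bang"
# item: credit an answer that mentions any canonical line of evidence.
EVIDENCE_KEYWORDS = {
    "red shift", "redshift",
    "cosmic microwave background",
    "expanding universe", "expansion of the universe",
}

def mentions_evidence(answer: str) -> bool:
    """Return True if the free-text answer names at least one
    recognized line of evidence for the big bang."""
    text = answer.lower()
    return any(keyword in text for keyword in EVIDENCE_KEYWORDS)

print(mentions_evidence("Galaxies show a redshift that grows with distance"))  # True
print(mentions_evidence("The universe began with a huge explosion"))           # False
```

A real instrument would need human coders or a much richer classifier, but even this toy version separates recall of evidence from bare assent to a statement.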

See here and here for other thoughts on measuring science knowledge.

Scientific literacies

Science knowledge has often been measured with a set of questions that include these items about evolution and the big bang:

1. Human beings, as we know them today, developed from earlier species of animals. Is that true or false?

2. The universe began with a huge explosion. Is that true or false?

 
The National Science Board removed from the National Science Foundation’s Science and Engineering Indicators 2010 a discussion of public responses to these items about evolution and the big bang, a removal that drew strong criticism from some scientists (see here and here). National Science Board member Louis Lanzerotti, however, claimed that the evolution and big bang items were “flawed indicators of scientific knowledge because the responses conflated knowledge and beliefs” (Bhattacharjee 2010).

Discussion of responses to the evolution and big bang items was reinstated in the 2012 edition of the Indicators, but disagreement remains about whether questions about evolution and the big bang belong on a science knowledge exam alongside questions about undisputed facts, such as whether electrons are smaller than atoms.

To provide more data on this issue, I conducted a factor analysis of sixteen science exam questions that were asked on National Science Foundation surveys from 1988 to 2001 for which full data were available.

Factor analysis

… takes thousands and potentially millions of measurements and qualitative observations and resolves them into distinct patterns of occurrence (Rummel 1967, p. 445).

 
The sixteen questions used in the science exam are listed in the table below.

Blue cells indicate questions measuring an understanding of probability, red cells indicate religiously-neutral factual questions, and white cells indicate factual questions that are, have been, or might be perceived to be in conflict with religious teaching.

The table below presents results of a principal components factor analysis that retained four factors:

Factor analysis reports but does not name or otherwise identify the patterns, so the patterns must be interpreted. The four probability questions unsurprisingly loaded on the same factor, so it appears that an understanding of probability is a separate dimension of science knowledge. Factual recall questions were spread among three factors, but factor 3 contained the questions about evolution, the big bang, and continental drift — each of which is perceived by some to be in conflict with biblical creation accounts.
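To make the pattern-finding concrete, the following simulation (written only for illustration; the traits, sample size, and parameters are invented, not taken from the NSF data) generates binary exam answers driven by two latent traits. Items sharing a trait correlate with each other but not with items driven by the other trait, which is exactly the raw material a factor analysis resolves into factors:

```python
import math
import random

random.seed(0)

# Simulate respondents whose answers to items 0-2 depend on one latent
# trait (say, probability skill) and items 3-5 on another (factual recall).
N = 4000
answers = []
for _ in range(N):
    t_prob, t_fact = random.gauss(0, 1), random.gauss(0, 1)
    row = [t_prob + random.gauss(0, 0.6) > 0 for _ in range(3)]
    row += [t_fact + random.gauss(0, 0.6) > 0 for _ in range(3)]
    answers.append([int(x) for x in row])

def phi(i, j):
    """Pearson (phi) correlation between two binary items."""
    a = sum(1 for r in answers if r[i] and r[j])
    b = sum(1 for r in answers if r[i] and not r[j])
    c = sum(1 for r in answers if not r[i] and r[j])
    d = N - a - b - c
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

within = (phi(0, 1) + phi(0, 2) + phi(1, 2) +
          phi(3, 4) + phi(3, 5) + phi(4, 5)) / 6
between = sum(phi(i, j) for i in range(3) for j in range(3, 6)) / 9
# Within-trait items correlate strongly; cross-trait items barely at all.
print(round(within, 2), round(between, 2))
```

A factor analysis of such a correlation matrix would retain two factors, one per latent trait, just as the probability items and the disputed items separate in the tables here.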

This third factor therefore might tap a religious dimension of science knowledge, so I repeated the analysis after adding responses from a question that measured religion-science conflict more directly:

We depend too much on science and not enough on faith. Do you strongly agree, agree, disagree, or strongly disagree?

 
Disagree responses were collapsed and agree responses were collapsed to form a binary variable.
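Collapsing the four-point scale to a binary indicator is a one-line recode; the sketch below assumes the response labels given in the question wording:

```python
# Collapse the four-point agree/disagree scale into a binary
# "agree" indicator, as described above; labels are illustrative.
def collapse(response: str) -> int:
    agree = {"strongly agree", "agree"}
    disagree = {"disagree", "strongly disagree"}
    if response in agree:
        return 1
    if response in disagree:
        return 0
    raise ValueError(f"unexpected response: {response!r}")

print([collapse(r) for r in
       ["strongly agree", "disagree", "agree", "strongly disagree"]])  # [1, 0, 1, 0]
```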

Results shown below indicate that the religion-science conflict question loaded onto factor 3. Perhaps this third factor is not tapping science knowledge so much as trust in science or scientists; after all, the facts underlying the factual questions in this factor (evolution, the big bang, and continental drift) are points of dispute between scientists and some religious authorities. The big bang item, for instance, might be measuring whether a person trusts science or religion to answer questions about the origin of the universe.

This analysis does not resolve the definitional question of whether science literacy requires acceptance of a scientific consensus, but it does suggest science literacy is not a unidimensional concept, and it does suggest that there might be value in separating the dimensions of science knowledge, similar to the way that the SAT separates verbal and math scores.

Notes

  1. Non-responses and don’t know responses for the science knowledge questions were coded as incorrect responses.
  2. There were 13,503 observations across the survey years of 1988, 1990, 1992, 1995, 1997, 1999, and 2001, and all of these observations were used for the first factor analysis. There were only 12,796 responses to the question about faith and science, so the second factor analysis had this lower number of observations.
  3. Three questions on genes and experiments were not included in the analysis because those questions were not asked in 1988 and 1990. Questions on radiation and on the center of the Earth being hot were not included because they had fewer observations in 1990 than the other questions. Including these five questions does not change the inference that disputed items load onto a separate factor or that the science-religion controversy item loads onto the disputed-questions factor, but their inclusion does add a fifth and sixth factor in some cases.
  4. The Stata do file for the analysis is located here, and the dataset can be downloaded here. The citation for the data is: Miller, Jon D., Linda Kimmel, ORC Macro and NORC. National Science Foundation Surveys of Public Attitudes Toward And Understanding of Science And Technology, 1979-2006 [Computer file]. 3rd Roper Center version. Tallahassee, FL: Susan Carol Losh, Florida State University, Department of Educational Psychology & Learning Systems/Arlington, VA: National Science Foundation, Division of Science Resources Statistics/Arlington, VA: American Statistical Association [producers], 2009. Storrs, CT: Roper Center for Public Opinion Research [distributor], 2009.
  5. The variables in this analysis are binary (correct or incorrect, faith or science), so a factor analysis based on tetrachoric correlations is preferable to the regular analysis reported above. M-Plus results that account for the binary nature of the variables are presented below, confirming the main finding of a separate factor for the religiously tinged items.
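The point of note 5 can be illustrated with Pearson's classic cosine-pi approximation to the tetrachoric correlation, which estimates the latent continuous correlation from the 2x2 table of two binary items. This is only a quick approximation for intuition, not the maximum-likelihood estimator that programs like M-Plus actually use:

```python
import math

def tetrachoric_cos_pi(a, b, c, d):
    """Pearson's cosine-pi approximation to the tetrachoric correlation
    for a 2x2 table [[a, b], [c, d]] of two binary items. Assumes no
    zero cells; full estimators (as in M-Plus) use maximum likelihood."""
    return math.cos(math.pi / (1 + math.sqrt((a * d) / (b * c))))

# Independent items (odds ratio 1) -> latent correlation near zero
print(round(tetrachoric_cos_pi(25, 25, 25, 25), 3))  # 0.0
# Heavy diagonal concentration -> high latent correlation
print(round(tetrachoric_cos_pi(40, 10, 10, 40), 3))  # 0.809
```

Because phi correlations between binary items are attenuated relative to the latent correlations, factoring the tetrachoric matrix is the safer check that the religiously tinged items really form their own dimension.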

Science knowledge and biblical interpretation

Recent General Social Surveys have included measures of science knowledge. This article reports Darren Sherkat’s analysis of religion and science knowledge items that appeared on the 2006 GSS, and this article reports my analysis of religion and science knowledge items that appeared on the 2008 GSS. There are substantial differences between the two research designs: a different survey year, different control variables, and a different set of items measuring science knowledge. But both studies concurred that biblical literalism correlated with lower scores on an exam of the science knowledge items.

The studies differed, though, on whether a non-literal belief in biblical inspiration correlated with lower exam scores in the presence of control variables: my analysis did not uncover a statistically significant correlation, but Dr. Sherkat’s analysis found a statistically significant difference of 0.41 items on a 13-item exam (about 3 percent) relative to biblical disbelief.

Dr. Sherkat removed an item on evolution from the science exam, because

the purpose is to see if religious factors have a bearing on scientific understandings outside that controversial realm (p.114).

 
But the exam retained items on continental drift and the big bang theory, which might have conflated knowledge of a scientific consensus with acceptance of that consensus. The big bang item asked respondents whether the universe began with a huge explosion, a statement that some Christians perceive to be compatible with biblical accounts of creation and others do not; the continental drift item, moreover, required a respondent to agree with an old-Earth chronology that many biblical literalists reject:

The continents on which we live have been moving their locations for millions of years and will continue to move in the future. Is that true or false?

 
Dr. Sherkat kindly and quickly helped me replicate his analysis. The code and files that he provided were for SPSS, which I do not have, but I was able to come close to replicating the exact numbers presented in Table 2 of his analysis.

The table below presents coefficients from Dr. Sherkat’s model using all 13 science knowledge items (Model 1), coefficients from my attempted replication of the Sherkat model (Model 2), coefficients from a model removing items on continental drift and the big bang (Model 3), and coefficients from a model with items only on continental drift and the big bang (Model 4).

Slight differences appeared between coefficients in Model 1 and Model 2 due to a few different codings; for example, for the measure of sectarian Protestantism, Dr. Sherkat coded individual religions, but I used the GSS fundamentalist religion measure to generate a binary fundamentalist-or-not-fundamentalist variable.

Black cells in the table indicate coefficients that are statistically distinguishable from zero, and the number in a black cell is the point estimate for the effect of that variable on the science exam score. For example, the 0.42 value for biblical disbeliever in Model 1 indicates that biblical disbelievers scored 0.42 items higher on the 13-item science exam than the omitted category of persons who believe the Bible is inspired but not meant to be interpreted literally.
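The dummy-variable arithmetic behind a point estimate like that 0.42 is worth making explicit: with a single 0/1 group indicator and no other covariates, the OLS coefficient is exactly the difference between the two group means (the models in the table add controls, so their coefficients are adjusted versions of this). A minimal sketch with made-up scores, not the GSS data:

```python
# With a single 0/1 group dummy, the OLS slope cov(x, y)/var(x)
# equals the difference between the two group means.
scores = [9, 8, 10, 7, 6, 5, 7, 6]   # made-up exam scores
dummy = [1, 1, 1, 1, 0, 0, 0, 0]     # 1 = biblical disbeliever (illustrative)

n = len(scores)
mx = sum(dummy) / n
my = sum(scores) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(dummy, scores)) /
         sum((x - mx) ** 2 for x in dummy))

mean1 = sum(y for x, y in zip(dummy, scores) if x) / dummy.count(1)
mean0 = sum(y for x, y in zip(dummy, scores) if not x) / dummy.count(0)
print(slope, mean1 - mean0)  # the two quantities coincide
```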

The last column of the table presents results from a model of correct responses to the big bang and continental drift items, with an added control for science exam scores for the other 11 science knowledge items. Biblical disbelievers scored 0.22 items higher on the 2 items than those who think the Bible is inspired but not meant to be interpreted literally.

The main takeaway from this table is the lack of statistical significance for the biblical disbeliever variable in Model 3, which indicates that — when items about the big bang and continental drift were removed from the science exam — the data did not reveal a discernible difference in science exam scores between persons who do not accept the Bible as the word of God and persons who accept the Bible as the word of God but do not interpret the Bible literally (p-value of 0.418).

The data presented in the table indicate that science knowledge is not incompatible with belief that the Bible is the word of God, but they do suggest that certain styles of biblical interpretation might inhibit science knowledge, perhaps because certain religious beliefs foster a perception that science and religion are incompatible.

Science teachers and other science advocates should therefore work to eliminate the perception of conflict between science and religion. Josh Rosenau of the National Center for Science Education struck the right chord in summarizing Dr. Sherkat’s results here:

I’ll just add that this is why it’s so critical to engage those religious communities through trusted avenues like their pastors, or scientists speaking in their churches. If they’re culturally averse to science (at least, to certain forms of scientific knowledge and certain claims of science’s competence), it’s key to find other ways to reach them and bring them closer to the mainstream. They need to be able to engage with science not just on hot button issues, but on uncontentious matters of personal health, workplace safety, and professional advancement.

 
Amen.

Click here for a list of the 2006 GSS science knowledge items used in the analysis and for Stata commands to replicate models presented in the table.