Motivated Numeracy

“To know inauthenticity is not the same as to be authentic.” – Paul de Man


Lately, the public media have been expressing increased skepticism towards scientific practices. Consider the recent cover articles in The Economist: “How Science Goes Wrong” and “Trouble in the Lab.” In the latter, Jason Ford’s illustrations artfully depict scientists in labs that would make EH&S shiver, sweeping poorly conducted experiments and data under the rug. These articles do resurface real problems created by incentive systems such as publishing models. They also point out how the dismissal of null results by high-impact journals may ultimately promote unethical practices among those who wish to stay in the publication pipeline.

Scientists, as people, do not live up to the ideals of science proposed by Robert Merton—the norms of disinterestedness, communism, universalism, and organized skepticism—though they are expected to when conducting experiments. What The Economist fails to acknowledge is that although scientists are people with self-interests to preserve (just as in any other profession), science itself constantly tears down and reconstructs hypotheses across fields (in line with Thomas Kuhn’s paradigm shifts). Science has self-revision at the core of its process. If an experiment was conducted poorly yet its results were deemed significant, other scientists have both the right and the expectation to challenge it by conducting their own experiments.

In contrast, the Economist articles make it seem as though the mistakes within science are a problem with science itself. Although the problems identified are important to rectify, and indeed doing so should be part of science, it would be a great exaggeration to treat science as though it were broken. What must be acknowledged, in spite of identifiable problems in scientific practice, is “how science goes right” for the most part; the problems within it should not provoke skepticism of the entire enterprise.

If the practices enforced by science correct for cheaters, then should we be concerned by these articles from The Economist? If their goal is to stir public distrust of science, then it is time to reconsider all of the technology (which arose from these supposedly “dirty” practices) upon which society depends. Again, authentic accomplishments are being undermined under the guise of inauthentic scientific practices (consider the Paul de Man quote that introduces this article).

There is an important ideological question raised by these articles. Is the underlying message, or the inevitable reaction, one that convinces people to give greater support to science and help facilitate the development of better scientific practices? Or instead, is the message that the public should indulge in politically motivated negative attitudes towards science itself, and divest (metaphorically or literally) from scientific progress? This latter possibility is particularly unsettling given some of the popular politicized positions taken towards issues such as climate change, vaccinations, and the environment.

***

Whatever one’s view on science may be, it is important not to let cultural views (e.g. politics, religion) shape how one interprets the numbers shown in the data, assuming the experiment is conducted ethically. As part of a colloquium on Law and Psychology, Dan Kahan presented findings from his paper “Motivated Numeracy and Enlightened Self-Government,” with the question “Why care?” underpinning the title of the talk.

Is academic discourse supported by empirical evidence (or data) just a “grab bag” from which you can pull any story you want in order to reinforce your cultural identity? How do we understand our world and the evidence that reinforces that world-view, our Weltanschauung?

The debate over the “public” (mis)understanding of science is shaped by two ideas. Either:

  1.  the public is not properly educated in discriminating dependable sources and interpreting scientific data (i.e. low numeracy, elaborated in the Science Comprehension Thesis), or
  2.  the public is influenced by cultural cognitive worldviews (e.g. political, religious, cultural) and interprets information (both qualitative and quantitative) through a biased lens to reinforce these worldviews (i.e. confirmation bias, elaborated in the Identity-protective Cognition Thesis).

Kahan performs an empirical investigation to disentangle these (mis)understandings.
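
To make the two theses concrete, consider the kind of 2×2 “covariance detection” problem Kahan’s study builds on: did patients using a skin cream improve more often than patients who did not? Below is a minimal sketch in Python; the counts, variable names, and the improvement_rate helper are my own illustration, not figures taken from the paper.

    # Illustrative only: a covariance-detection problem in the style of Kahan's task.
    # The counts below are invented for this sketch.
    improved  = {"cream": 223, "no_cream": 107}
    got_worse = {"cream": 75,  "no_cream": 21}

    def improvement_rate(group):
        """Fraction of patients in a group whose rash improved."""
        return improved[group] / (improved[group] + got_worse[group])

    # The intuitive shortcut compares raw counts (223 improved with the cream
    # vs. 107 without) and concludes the cream works.
    # The numerate answer compares rates within each group:
    print(f"cream:    {improvement_rate('cream'):.2f}")     # ~0.75
    print(f"no cream: {improvement_rate('no_cream'):.2f}")  # ~0.84
    # The improvement rate is higher *without* the cream, so, read correctly,
    # these data suggest the cream does not help.

Kahan’s central result is that when the same numbers are relabeled as a politically charged question (e.g. gun control and crime), highly numerate subjects more often get the rate comparison wrong precisely when the correct answer threatens their cultural identity.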

How can we overcome this barrier to scientific understanding?

This barrier is imposed, in part and along philosophical dimensions, as a consequence of social constructionism. When scientists were formerly conceived of as the beholders of absolute truth, similar findings observed across research labs could more easily be generalized into universal law; this was the view of materialism. Social constructionism, however, implies there is no universality (no absolute truth), and that our reality is merely layers of abstraction put forth by subjective scientists doing relative research. Some opponents of research on controversial topics (e.g. global warming) try to discredit it by weakly applying the social constructionist argument and claiming that the researchers are biased in their approach. These insights were inspired by a talk by Luigi Pellizzoni at the 4S Conference.

Does being biased skew our interpretation of robust statistical analyses? Perhaps you were not expecting the question of “scientific understanding” to be turned towards scientists. Dan Kahan does not study scientists specifically, but he shows that individuals with higher numeracy (a category into which most scientists are expected to fall) are more likely to fall victim to identity-protective cognition (i.e. preserving cultural identity at the sacrifice of numeracy).

We are making important political decisions on topics that extend outside our fields of expertise. Do we look as critically at research in these areas as we look at research in our own fields, or do we save the time and energy needed to deliberate on such issues by relying on anecdotal evidence for scientific topics outside of them?

Approaches to Consider: Slow vs. Fast and Altercentric vs. Egocentric

At the conference Being Human, hosted in San Francisco this year, a few ideas emerged under the guise of different names: “Slow vs. Fast” and “Altercentric vs. Egocentric” styles of thinking.

Joshua Greene from the Harvard Moral Cognition Lab gives a talk (see here) on “Slow vs. Fast” thinking. It is based on the moral problem from Garrett Hardin’s 1968 essay The Tragedy of the Commons: the conflict between individual and collective interests, illustrated by herders who must negotiate how many animals to keep on a common pasture without adding so many that the pasture is destroyed for everyone (overview).
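
The incentive structure behind the tragedy is easy to see in a toy numerical model. The capacity, the linear decline in pasture value, and the herd sizes below are assumptions invented for this sketch, not figures from Hardin or Greene.

    # A toy commons: each herder's payoff is (their animals) x (value per animal),
    # and the value per animal declines as the shared pasture gets more crowded.
    def value_per_animal(total_animals, capacity=100):
        return max(0.0, 1.0 - total_animals / capacity)

    def payoffs(herds):
        v = value_per_animal(sum(herds))
        return [n * v for n in herds]

    restraint   = [8] * 10          # ten herders graze 8 animals each
    one_defects = [10] + [8] * 9    # one herder adds two more animals
    all_defect  = [10] * 10         # everyone reasons the same way

    print(payoffs(restraint)[0], payoffs(one_defects)[0])      # ~1.6 -> ~1.8: defecting pays
    print(sum(payoffs(restraint)), sum(payoffs(all_defect)))   # ~16 -> 0.0: the commons collapses

Each herder’s individually rational move raises their own payoff while lowering everyone else’s, and if all of them follow that logic the pasture collapses; this is the “Me vs. Us” conflict from which Greene’s talk starts.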

Greene’s paper on The Cognitive Neuroscience of Moral Judgement states:

 “A range of studies using diverse methods support a dual-process theory of moral judgment according to which utilitarian moral judgments (favoring the “greater good” over individual rights) are enabled by controlled cognitive processes, while deontological judgments (favoring individual rights) are driven by intuitive emotional responses.”

Greene describes the relation between utilitarian (“for us”) and deontological (“for me”) judgments as something more complicated. What if two separate groups that solve utilitarian problems differently (e.g. one group is communist while another is individualist), each cooperating under different answers to ethical questions, are suddenly confronted with one another?

How do these different groups resolve a “meta-utilitarian” conflict (i.e. between “vosotros” and “nosotros”) in this modern moral problem?

This is where “slow” and “fast” thinking come into play, as described by Daniel Kahneman in Thinking, Fast and Slow (from wiki):

  • System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
  • System 2: Slow, effortful, infrequent, logical, calculating, conscious

In Greene’s camera analogy, “automatic” is fast and efficient while “manual” is slow and flexible. He recommends thinking fast for problems of “Me vs. Us” (individual vs. utilitarian) and thinking slow for problems of “Us vs. Them” (local utilitarian vs. meta-utilitarian), notably assigning slow thinking to the problems of meta-morality described in the “meta-utilitarian” conflict above.

Laurie Santos from the Comparative Cognition Laboratory at Yale gives a later talk (see here) describing “Altercentric vs. Egocentric” thinking.

Egocentrism involves trusting yourself as an expert (“You are like me.”), e.g. telling others what they can do around Berkeley when they visit. Altercentrism is trusting others more than yourself (“I am like you.”), e.g. using Yelp to look for business reviews and recommendations.

Santos describes an experiment by Kenneth Savitsky at Williams College showing increased egocentrism among friends versus strangers. We are more considerate of the different perspectives of strangers, but our egocentric bias misleads us into making wrong assumptions about our friends.

Victoria Horner and Derek Lyons examine how children and chimpanzees learn (video) in a way that maps onto the egocentric and altercentric approaches to understanding:

Two boxes, one opaque and one transparent, each contain a treat. The experimenter goes through ritualistic tapping and prodding and then removes the treat from the box. Both children and chimps will go through the ritual when the box is opaque; however, the clear box reveals that the treat is just behind a clear door and that the ritualistic movements are not necessary to get it. Chimps will ignore the ritual and grab the treat, but children will still go through the ritual (note that the video does not say whether the children were given verbal directions by the experimenter).

The opaque box elicits altercentric (“I am like you”) responses in both children and chimps. Since the box is a novel contraption and you cannot see exactly how it works, it makes sense to trust the person showing you how to use it. What happened with the clear box? The chimps were more egocentric in their approach, trusting their own judgement of how to get the treat more than the ritual shown by the experimenter. Children, due to social structures or cultural constraints, still take the altercentric approach and go through the useless ritual.

Santos shares a passage from Mark Twain’s Own Autobiography that is worth quoting in full:

“In the matter of slavish imitation, man is the monkey’s superior all the time. The average man is destitute of independence of opinion of his own, by study and reflection, but is only anxious to find out what his neighbor’s opinion is and slavishly adopt it.”

There is value in being valued. Mimicking our neighbors may show some element of trust in their judgement. If that’s what we care about, then there is no immediate need to condemn these external forces in our decision making; however, should data speak louder than anecdote? Do some data conceal subjectivity?

There is an interesting parallel here with the contrast between gut reaction (instilled by cultural-cognitive identity) and data deliberation (determined by scientific numeracy) in Dan Kahan’s study described above.

One important takeaway may be summarized as “the smarter are more stubborn,” describing a “Why think harder? I got the answer I want” epidemic. Know your triggers. If you are emotionally passionate about certain issues, so much so that you take measures to make political changes, then I would suggest at least considering your opponents’ views first, with slow-thinking, altercentric thoughtfulness.

 

[1] Paul de Man. Blindness and Insight: Essays in the Rhetoric of Contemporary Criticism.

[2] Allow me to emphasize the definition of “public” as anyone outside his/her field of expertise. We are all “public” in some contexts and try-fail experts in others.

[3] Maney, Ethel S. “Literal and critical reading in science.” The Journal of Experimental Education 27.1 (1958): 57-64.

[4] Kahan, Dan M et al. “Culture and identity‐protective cognition: Explaining the white‐male effect in risk perception.” Journal of Empirical Legal Studies 4.3 (2007): 465-505.

[5] Carl Zimmer. “Children Learn by Monkey See, Monkey Do. Chimps Don’t.” The New York Times, 13 Dec. 2005. Accessed 20 Oct. 2013. <http://www.nytimes.com/2005/12/13/science/13essa.html>

[6] I anthropomorphize the chimps with concepts of “trust” or “judgement” for descriptive purposes, and do not intend to imply that such abstractions are concluded from these experiments.

 


2 comments

  1. Kristina Kangas

    Just wanted to add a huge thanks to Chris Shaver for editing and adding elements to the article that provided more depth and clarity.

  2. Giuseppe D'Agostino

    I really enjoyed reading this article and its wealth of references.
    However, I’d like to make a few points on the debate that you touch upon, i.e. “science vs. the people” as it has been a pretty intense one here in Italy.

    I take it that among sociology of science scholars, many have dismissed the “deficit model” paradigm, in which “the people” are thought to be lacking scientific literacy (and numeracy) and only need to be educated in order to support science and its practices. This led in many countries in the 80s to a push for Public Understanding of Science, in order to deliver to the public well-crafted means to access, understand and digest scientific facts. However, a British initiative called COPUS was shut down some years later upon recognizing that an increase in scientific literacy was not only failing to “help the cause”, but was instead exacerbating the opposition to science and its community. The new paradigm that emerged was that of Public Engagement with Science, where the public is not educated in a top-down fashion, but treated as a part of society that is both the peer and the recipient of scientific outputs (be it basic or applied research), and as such has a stake in decisions, research guidance and policy writing.

    So the first point I want to make is that, however interesting it may be to understand the neuro-cognitive models that underlie the decision-making process in the public (and in the scientific community), what we face is not a matter of Public Misunderstanding of Science, but rather of Public Mistrust.
    I believe it is important to keep the social construction of facts strictly separate from the social construction of policies. The former should be avoided as much as possible: through the use of the best available scientific method (I read your post and enjoyed it too!), “facts” are constructed as objectively as possible out of measurements and fitting models.
    However, the social construction of policies can be used to solve the issue of science mistrust: by involving people from all different parts of society during the first stages of scientific discussions, i.e. trying to find an answer to questions such as “what should we investigate?” “where should we head to?” “what are our priorities?”, we are able to integrate their needs (and inputs) not by bestowing scientific knowledge unto them – which is a rather arrogant move – but rather by allowing them to have their say in the direction of scientific research.
    I don’t know how this applies to US funding, but here in Europe lots of grants come from public taxpayers’ money (though I guess that is indeed the case for federal grants or NIH funding), and the need to justify their use for research that apparently has no impact whatsoever on people’s lives is increasingly strong.

    The second point I’d like to make is about why it is useful to point out “when science goes wrong”: not because of a masochistic spirit (I am a PhD student in molecular biology myself), but because of the need for the scientific community to correct its own mistakes and regain public trust.
    We are currently using a set of tools for analysis and communication that now appears blunt, as opposed to the ever-cutting edge of technology that the scientific community uses and creates. We still have to master statistics and decide on a better p-value threshold. We still have to find a way to publish negative data. We still have to enable reproducibility through secure repositories of raw data and cross-lab validation of results. We still need to avoid basing our evaluations solely on what Randy Schekman labelled “luxury journals”.
    Many steps have to be taken to build a better publication infrastructure, and while all of these may not be problems of science itself, they are problems of the scientific community indeed.
    The effect of the article may be detrimental at first for the scientific community, as far as the view from the outside is concerned. However, it can have a beneficial effect in the long run, provided it triggers a series of deep renovations of how science is produced and disseminated.

    I apologize for the long comment but I look forward to your opinion on my 2 cents :)
