An image of a Facebook post containing bad arguments to show why fact-checking is not enough.

Fact-checking is not enough. We need argument-checking.

I see more fact-checking on Facebook than I used to. While I’m glad to see fact-checking catching on, fact-checking isn’t enough — or so I’ll argue in this post.

1. Fact-checking: The problem

Let’s say that you and I agree on all the facts. Now let’s say that we start arguing. Will we argue well? Not necessarily!

After all, we can reason badly even if we agree on the facts. Specifically, we can jump to conclusions that don’t follow from the facts. So fact-checking our argument(s) won’t necessarily fix all the problems with our argument(s).

(Besides, fact-checking often doesn’t change people’s minds. In fact, it sometimes backfires (Lord et al 1979; Nyhan and Reifler 2010, 2015a; Nyhan, Reifler, and Ubel 2013).)

2. Bad Arguments

Consider some of the claims that people make:

  • ObamaCare caused [A]
  • [B] increased/decreased unemployment
  • that trade agreement caused [C]
  • gun-carrying causes [D]
  • illegal immigrants cause [E]
  • tax cuts will cause…

Notice that all of these claims are causal claims. They are not merely correlational. Unfortunately, no one has run the kinds of studies that can support these causal claims. So even if we cite all the facts about healthcare, unemployment, guns, taxes, etc. we cannot validly reason our way to any of the conclusions above.

3. What Good Causal Arguments Require

Before we can claim that some thing caused some other thing, we need to do a few things.

  1. Collect data about all variables that might be causally relevant to what we’re studying.
  2. Assign people (or whatever we’re studying) to conditions randomly.
  3. Include a control condition (and a placebo/sham condition, if possible).
  4. Make enough observations for powerful statistical analysis (at least 50 observations per variable per condition).
  5. Include all relevant variables in the final analysis and reporting (see 1).
  6. Subject all data, methods, analysis, etc. to peer review.

Very few — if any — of these steps have been completed when it comes to healthcare, unemployment, guns, taxes, and the other stuff that we argue about. But that is hardly surprising. After all, we can’t randomly assign people to different public policies, laws, tax brackets, etc. And since we can’t do this, there are loads of causal claims about healthcare, unemployment, guns, etc. that we simply cannot defend.
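To see why randomization (steps 2-4) licenses a causal conclusion, here is a minimal simulated experiment. Everything in it is made up for illustration: the intervention, the true effect of +2, and the sample sizes.

```python
import random
import statistics

random.seed(0)  # reproducible illustration

N = 500  # observations per condition (step 4: enough data for a stable estimate)

def outcome(treated):
    """One simulated person's outcome: a noisy baseline plus a true effect of +2."""
    baseline = random.gauss(50, 5)  # baselines drawn independently, so groups
    return baseline + (2 if treated else 0)  # are balanced in expectation (step 2)

treatment = [outcome(True) for _ in range(N)]   # treated condition
control = [outcome(False) for _ in range(N)]    # control condition (step 3)

# With random assignment, the difference in means estimates the causal effect.
effect = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated causal effect: {effect:.2f}")  # should land near the true +2
```

No amount of fact-collecting about the treated group alone would support the causal claim; it is the randomized control group that turns a correlation into an estimate of causation.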

4. How To Get Better At Reasoning

Maybe you’re interested in trying to check your own arguments. If so, here are a few things to think about.

4.1  Acknowledge your limitations

For example, when we are tempted to make a causal claim, we should ask ourselves:

  • Did I complete steps 1-6?
  • Has anyone completed steps 1-6 and published their results?

If not, then avoid making a causal claim.

Instead, acknowledge the limitations involved in arriving at your conclusion — e.g., “I don’t know what the research suggests, but it seems to me that…” or “in my own limited experience, I find that…” or “one or two studies find a correlation between…”.

4.2  Test yourself

Try explaining how your conclusion follows from the evidence. After all, we often feel that we’ve reasoned correctly before we’ve explained each step in our reasoning process (Thompson and Morsanyi 2012). But when we try to explain our argument, we often spot an error or realize that we don’t understand things as well as we thought we did (Fernbach et al 2013).

4.3  Look for good evidence

Statistically significant findings are cheap. You can easily find a few if you try. So it is not enough to point to just one or two findings — I’m looking at you, science journalists! If you want to make a strong case for something, then you need to point to findings that have been replicated many times in many contexts. That means citing many papers (or reviews/meta-analyses of many papers).
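A quick way to see why significant findings are cheap: simulate many studies of an effect that is exactly zero and count how many come out “significant” anyway. This sketch uses only the standard library; the study sizes and counts are arbitrary.

```python
import math
import random

random.seed(1)  # reproducible illustration

def null_study_p(n=50):
    """Two-sided p-value comparing two samples drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(2 / n)  # standard error of the mean difference (known sd = 1)
    z = (sum(a) / n - sum(b) / n) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

# Run 1,000 "studies" of a true null effect.
false_positives = sum(null_study_p() < 0.05 for _ in range(1000))
print(f"{false_positives} of 1000 null studies were 'significant'")  # roughly 50
```

Around 5% of studies of a nonexistent effect clear the p < 0.05 bar by chance alone, which is why one or two significant results are weak evidence on their own.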

4.4  Look for opposing evidence

Like I said, it’s easy to find a few statistically significant results. And so it is very easy to find some evidence for almost any conclusion. So if we are predisposed to a particular conclusion (and let’s be real: we are), then we will have to work hard to find (and not ignore) findings that challenge our desired conclusion (Flynn, Nyhan, and Reifler 2016; see also motivated reasoning). [Another opportunity for a stern look at science journalists/reporters.]

4.5  Take your time

You might have realized that these steps take a while. You have to search, read, maybe take notes, explain, and — of course — think. But it might be worth it: some people reason more carefully when they take their time (Paxton et al 2012).

4.6  Example

In 2014 I found that atheist and agnostic philosophers are more reflective than theist philosophers (Byrd). Since statistically significant findings are cheap, this single finding was not that insightful. After all, my finding might have been a statistical fluke. I briefly looked for opposing evidence, but didn’t find any. And then, a year later, someone published research that challenged my finding (Finley et al 2015). So I had to discount my finding. However, a 2016 meta-analysis confirmed that most published studies find that atheists and agnostics are more reflective (Pennycook et al). Now I’m a bit more confident in the finding.
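For the curious, the pooling behind a fixed-effect meta-analysis is simple: weight each study’s estimate by the inverse of its variance, so more precise studies count for more. The effect sizes and standard errors below are invented for illustration, not the actual numbers from any of the studies cited above.

```python
import math

# (estimate, standard_error) pairs from four hypothetical studies
studies = [(0.30, 0.10), (0.22, 0.08), (0.35, 0.12), (0.18, 0.09)]

# Fixed-effect (inverse-variance) pooling
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```

The pooled standard error is smaller than any single study’s, which is why a meta-analysis can warrant more confidence than the individual findings it aggregates.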

4.7  Summary

When evaluating a claim, look for evidence. But don’t just look for any evidence. Look for a preponderance of evidence. To find that, look for meta-analyses, meta-syntheses, and systematic reviews. (Beware: there might not be a preponderance of evidence.) Don’t jump to conclusions that don’t follow from the evidence. Instead, acknowledge the limits of what we can infer from the evidence. And explain how your conclusion follows from the evidence. Finally, take your time. It might be months or years before you find all the relevant evidence and arrive at the best conclusion(s).

5. How To Help Others Get Better

Improving our own reasoning is only a small part of the problem. What about everyone else? What do we do about them?

5.1  Accountability

I don’t know about you, but spotting errors in others’ reasoning is much easier than spotting errors in my own reasoning. It turns out that I am not alone (Trouche et al 2015). So we would be wise to ask each other to look for our reasoning errors.

But what about public figures? We cannot easily contact them and point out their reasoning error(s). And even if we could, it’s not clear that correcting them would improve their future arguments. What we need is a way to publicize public figures’ reasoning errors. We need large-scale argument-checking. Perhaps that is how we can prevent the spread and acceptance of misinformation (Nyhan 2010; Nyhan and Reifler 2015b). You might think that argument-checking is a pipe dream. However, the good people over at Clearer Thinking are already doing it! Check out their argument-checking videos of some of the 2016 US presidential (primary) debates.

5.2  Education

Another solution might be to ensure that every kid is given the tools of reasoning. In the US, people are not taught much — if anything — about reasoning. For instance, none of my primary or secondary schools offered courses in logic, philosophy, or critical thinking. There is some evidence that such courses can improve our reasoning (Attridge et al 2016; Alvarez-Ortiz 2007), even at a very early age (Gorard et al 2015). So perhaps it’s time to institute a reasoning curriculum in primary and secondary schools in the US.

5.3  Research

Maybe you’re not convinced that accountability and education will work. That’s fair. But that’s just a reason to do more research. After all, it is science that will ultimately reveal the most reliable methods of improving our reasoning.

 

References

Alvarez-Ortiz, C. (2007). Does philosophy improve critical thinking skills? The University of Melbourne. [Open Access]

Attridge, N., Aberdein, A., & Inglis, M. (2016). Does studying logic improve logical reasoning? Proceedings of the 40th Conference of the International Group for the Psychology of Mathematics Education. [Open Access]

Byrd, N. (2014). Intuitive And Reflective Responses In Philosophy. University of Colorado. [Open Access]

Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013). Political Extremism Is Supported by an Illusion of Understanding. Psychological Science, 24(6), 939–946. [Open Access] [Paywall]

Finley, A. J., Tang, D., & Schmeichel, B. J. (2015). Revisiting the Relationship between Individual Differences in Analytic Thinking and Religious Belief: Evidence That Measurement Order Moderates Their Inverse Correlation. PloS One, 10(9), e0138922. [Open Access]

Flynn, D. J., Nyhan, B., & Reifler, J. (2016). The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics. [Open Access]

Gorard, S., Nadia, S., & See, B. H. (2015). Philosophy for Children | Evaluation report and Executive summary (p. 42). Education Endowment Foundation. [Open Access]

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. [Open Access]

Nyhan, B. (2010). Why the “Death Panel” Myth Wouldn’t Die: Misinformation in the Health Care Reform Debate. The Forum, 8(1). [Open Access] [Paywall]

Nyhan, B., & Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior, 32(2), 303–330. [Open Access] [Paywall]

— — —. (2015a). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33(3), 459–464. [Open Access] [Paywall]

— — —. (2015b). The Effect of Fact-Checking on Elites: A Field Experiment on U.S. State Legislators. American Journal of Political Science, 59(3), 628–640. [Open Access] [Paywall]

Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The Hazards of Correcting Myths About Health Care Reform: Medical Care, 51(2), 127–132. [Paywall]

Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and Reasoning in Moral Judgment. Cognitive Science, 36(1), 163–177. [Open Access] [Paywall]

Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2016). Atheists and Agnostics Are More Reflective than Religious Believers: Four Empirical Studies and a Meta-Analysis. PLOS ONE, 11(4), e0153039. [Open Access]

Thompson, V. A., & Morsanyi, K. (2012). Analytic thinking: do you feel like it? Mind & Society, 11(1), 93–105. [Paywall]

Trouche, E., Johansson, P., Hall, L., & Mercier, H. (2015). The Selective Laziness of Reasoning. Cognitive Science, 1-15. [Open Access] [Paywall]

 

Image credit: © Nick Byrd 2016; background image in public domain here.

Published by

Nick Byrd

Nick is a cognitive scientist studying reasoning, wellbeing, and willpower. When he is not teaching, in the lab, writing, exercising, or relaxing, he is blogging at www.byrdnick.com/blog