
Is post-fact reasoning redeemable?

You know how I do. When people make strong claims, I want evidence and arguments. So this US presidential campaign was a lot of work. A lot! (E.g., I read over 1000 pages about Clinton-related investigations alone). The problem is that people made loads of unsupported claims during the election. So I asked for loads of evidence. Curiously, people didn’t take kindly to my requests for evidence. As a reasoning researcher, this was fascinating. But as an aspiring reasoning teacher, it was thoroughly demoralizing. In this post, I’ll discuss my experience, some research that bears on my experience, and my concerns about post-fact reasoning.

1.  Too Many Unsupported Claims

First of all, there were far too many unsupported claims in my midst. I just didn’t have the time to follow up on all of them.

People who know me might be surprised by this. After all, I interact with very few people. I go to work. And I go back home. I rarely “hang out”, as they say. So it’s not like I was canvassing local neighborhoods or searching the internet looking for unsupported claims. I was just going about my day. But in talking to a few friends/family, checking my online social network, running into people around town, etc., I was inundated with more unsupported claims than I could address.

2.  People’s Responses Were Disappointing

When I asked for evidence, I expected people to give some evidence — even if it wasn’t always good evidence. But I was wrong. Usually, people didn’t even try to give evidence. And when they did give evidence, it was bad evidence. Looking back, people responded in one of four ways.

2.1  Change the subject

One way people responded was by changing the subject. So, for instance, when I asked for evidence for a damning claim about Hillary Clinton, they’d start talking about something else:

“Look, Trump tells it like it is. So […]”.

2.2  Further unsupported claims

Another way people would respond is with further unsupported claims. For example, when I asked for evidence for the damning claim about Hillary Clinton, people would say,

She’s got blood on her hands!

She’s just not a good person!

She’s got character issues.

These claims, of course, require (a lot) more evidence, so now I had to ask for even more evidence.

2.3  Bad evidence

Sometimes people did give evidence — albeit bad evidence. For instance, when I asked an old co-worker why they thought that Hillary Clinton had bad character, they replied,

I watched the Benghazi hearings. I could see it on her face.

That was it. That’s all they could muster. (Aside: think about the fact that this person can serve on a jury.)

2.4  Attack

A fourth response was some sort of attack.

You’re just a blind leftist.†

Let me guess, you also support baby killing.††

3.  What Does This Mean?

Given my experience, I’m entirely unsurprised that some people call ours a “post-fact era” and some people call certain voters “irredeemable.” After all, the only kind of evidence that people seemed to appreciate was the evidence that confirmed the view they had before they started looking for evidence — #confirmationBias. And if people are only interested in confirming their own view of the world, then it would seem that they care more about feeling correct than they do about the facts or about “the common good” (Brennan 2012).

4. The Problem: It Was Never About Facts

Think about it. There are very careful, detailed, free, and easy-to-find reports produced by massive professional investigations and large-scale fact-checking operations out there.

Exhibit A: the cover page of the Inspector General’s report on email management and security (PDF).
Reports like this bear directly on many people’s unsupported claims — e.g., unsupported claims about Hillary Clinton’s email use.

For example, the Inspector General’s report on email security and management found that Republican Secretary Powell used a private line to send official emails from a personal email address. It also found that Republican Secretary Rice (like Powell and Clinton) failed to follow State Department separation procedures regarding email. Oh, and it found that 90 of Powell’s/Rice’s immediate staff used personal emails for official business. Yet when I mention this report’s findings around some people, they flat-out reject the report (and for fallacious reasons):

The [report/institution] is [liberal, government, mainstream, establishment, elitist, etc.]. I don’t trust that stuff.

Notice the mental shortcuts these people use to systematically dismiss certain evidence. They’re not dismissing evidence based on careful analysis. They’re dismissing it based solely on its association with a group that they don’t like. So their political claims aren’t even intended to be about facts; they’re about identity.

It’s like spectator sports: spectators boo when a foul is called on their favored team — even when the foul is indisputable.

Related post: “The Bias Fallacy”

5.  Is This Normal?

In short, kind of. Three sorts of empirical findings affirm this.

First, people systematically ignore relevant evidence (Kahneman 2013, Chapter 13). And even when people are presented with evidence, they are often biased in their assimilation of it — #biasedAssimilation (Lord et al. 1979). For instance, when people are presented with counter-evidence, they systematically prefer the evidence that confirms their own view (Corner et al. 2012; Hart and Nisbet 2012). That or they discount the counter-evidence (ibid.). And sometimes people actually strengthen their view when presented with counter-evidence — #backfireEffect (Nyhan and Reifler 2010).

More than that, people do not trust perceived political opponents — and it’s getting worse (Hetherington & Rudolph 2014, 2015).

And trust in government, in general, has been way down since President Johnson (ibid.). More specifically, most Republicans report that they “never” trust the government (vs. a minority of Democrats who report the same thing) (Hetherington & Rudolph 2015, italics added).

So when the Inspector General’s report about email conflicts with Republicans’ claims about Hillary Clinton, then it might be normal for Republicans to flat-out reject the report — and not because they’re Republican, but because they’re human. (NB: normal ≠ good, acceptable, etc.).

6.  What now?

Honestly, I’m not sure. And, frankly, dealing with people’s responses to my requests for evidence is exhausting.

Part of my exhaustion comes from having the wrong expectations. Looking back, I (foolishly) expected people to care about evidence and to update their arguments and conclusions according to evidence (and logical norms). But apparently people don’t always do that.

Indeed, sometimes people don’t even have arguments! They just have the conclusions! #beliefBias

And sometimes people’s mission is not some sort of open-minded inquiry into the best arguments and evidence. Rather, their mission is to uphold their own group’s views and/or intentions (Mercier 2012; Mercier & Sperber 2011; Norman 2016; Sperber & Wilson 2001).

But when people reason this way, there might be no amount of counter-evidence that can update their view.

Their reasoning might be immune to facts.

So they might be irredeemable.

I hope that both of these hypotheses turn out to be wrong.



† (1) Answer: no. I’ve been registered as “No Party Affiliation” for at least 5 years — switching only when presidential primaries require a strategic change in affiliation. Before that, I was registered as a Republican. And so far, I’ve voted for a Republican, a Democratic, and a third-party presidential candidate. (2) If asking for evidence is a “leftist” thing to do, then that seems like a point for leftists, no?

†† Answer: no.


Brennan, J. (2012). The Ethics of Voting. Princeton: Princeton University Press.

Corner, A., Whitmarsh, L., & Xenias, D. (2012). Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change, 114(3–4), 463–478. [Free PDF]

Hart, P. S., & Nisbet, E. C. (2012). Boomerang Effects in Science Communication: How Motivated Reasoning and Identity Cues Amplify Opinion Polarization About Climate Mitigation Policies. Communication Research, 39(6), 701–723. [Free PDF]

Haidt, J., & Hetherington, M. J. (2012). Look How Far We’ve Come Apart. The New York Times.

Hetherington, M. J., Long, M. T., & Rudolph, T. J. (2016). Revisiting the Myth: New Evidence of a Polarized Electorate. Public Opinion Quarterly, 80(S1), 321–350.

Hetherington, M. J., & Rudolph, T. J. (2015). Why Washington Won’t Work: Polarization, Political Trust, and the Governing Crisis. Chicago: University Of Chicago Press.

Kahneman, D. (2013). Thinking, Fast and Slow (1st edition). New York: Farrar, Straus and Giroux.

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. [Free PDF]

Mercier, H. (2012). Some Clarifications about the Argumentative Theory of Reasoning. A Reply to Santibáñez Yañez (2012). Informal Logic, 32(2), 259–268. [Free PDF]

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74. [Free PDF]

Norman, A. (2016). Why we reason: intention-alignment and the genesis of human rationality. Biology & Philosophy, 1–20. [Free PDF]

Nyhan, B., & Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior, 32(2), 303–330. [Free PDF]

Sperber, D., & Wilson, D. (2001). Relevance: communication and cognition. Oxford; Cambridge, MA: Blackwell Publishers.


Featured Image: “Donald Trump by Gage Skidmore 6” (cropped) via Gage Skidmore on Wikimedia, CC BY 2.0


Published by

Nick Byrd

Nick is a cognitive scientist studying reasoning, wellbeing, and willpower. When he is not teaching, in the lab, writing, exercising, or relaxing, he is blogging at

7 thoughts on “Is post-fact reasoning redeemable?”

  1. Great piece Nick. I think the wilful ignorance is one-sided however. I don’t know a liberal that doesn’t like to discuss climate change, disparity of educational opportunity, income inequality, monetization of healthcare, persistent poverty and racism after decades of money and programs and many other issues. To the extent they are discussed, it is within our own “safe” bubble. I have been struck by how uninformed we are in an age where facts and information are so available. We only look at data that confirms what we want to believe. The idea that everything you believe is based on falsehoods is scary and to be avoided. Think how that correlates to most religions and social media like Facebook. Apparently we need social structure, norms and beliefs, no matter how ignorant. That part is bipartisan.

    1. Rick,

      So glad to hear from you! I hope you and yours are doing very well! Happy belated Veteran’s Day, by the way!

      I feel the asymmetry in the ignorance that you mention as well. But I often wonder if the data would actually support that. After all, the most educated people I know are (on average) more liberal. So of course I would perceive that liberals are more interested in the facts. The liberals that I know are more aware of some of the facts. But as you point out, my sample of friends might not be representative. It might just be my “safety bubble.” So if I lived in a community in which my most highly educated friends were conservative people, then I might think that the asymmetry goes the other way: that only liberals are uninterested in facts.

      There is some evidence that affirms your and my experience(s): social conservatives are significantly less reflective than social liberals in Deppe et al.’s “Reflective liberals and intuitive conservatives…”.

      But there is also some confounding evidence: both free-market neoliberals and ideological moderates (as opposed to ideological extremists) are more susceptible to bullshit. That’s from Sterling et al.’s “Are Neoliberals More Susceptible To Bullshit?”.

      So, in short, our experiences might generalize, but — then again — they might not. These two studies make me think that either is an open possibility. I look forward to learning more about how political ideology impacts peoples’ interest in facts.

      As always, I am glad to be talking with you. I wish you well!

  2. My focus when thinking about this topic of post-fact discussion and decision making is on instrumental effectiveness.

    Speaking in terms of ‘redeemability’ throws the discussion into the moral arena, which I see as ineffective. Is it useful to make a negative moral judgment of people who do not carefully consider evidence that conflicts with their opinions? Rather than wishing that things were different or condemning a massive portion of society, it seems more prudent to accept how it is and then focus on solutions. It seems to me that you are implicitly putting the burden of change on these people who don’t value the abundance of facts that they can access with the flick of their thumb, or the persuasiveness of reputable sources. I believe it is more effective to put the burden of change on the system, which is not serving this large portion of society.

    If we share the very general goal of effective decision making that addresses societal problems (and I believe we do), this gives us something to strive towards and to direct our investigation of what to do, given that we are in an age of post-fact reasoning. You would be hard pressed to find someone that doesn’t share this goal of solving problems in general. What highlighting this idea does is start a discussion based on a common value. After the discussion is focused on a common value, evidence and scientific information can then be utilized in pursuit of the common goal – AND (this is the important part) this evidence and scientific information will be accepted and utilized by the stakeholders in the conversation because it brings them closer to their goal.

    What I’m saying is that the intense focus on arguments and facts that support one’s opinion and disinterest in the arguments and facts that conflict with one’s opinion is not something to be tossed aside as foolish unintelligence or lack of critical thinking – it is a sign that facts and evidence are not being utilized in a useful way in dialogue.

    Kahan et al. came up with a concept called “cultural cognition”, which is basically the idea that in cognition, cultural worldview comes prior to facts. If someone aligns themselves with a social group and way of thinking that values individualism and hierarchical societal structure, for instance, they literally cannot process facts that implicitly deny their values such as ‘income inequality leads to reduced social capital and a breakdown of the democratic process’ (or even a much more concrete quantitative fact). Kahan and Braman conducted studies that supported this. (Kahan, et al., 2006)

    Jasanoff, in her book “The Fifth Branch” (which investigates case studies from federal regulatory agencies to answer the question “what processes result in effective decision making?”), shows that even in the most formal and intentional environments Cultural Cognition (from Kahan et al.) predicts the outcomes of decision making bodies. Specifically, Jasanoff shows that when there is value disagreement, introducing evidence and scientific information results in conflict, deconstruction of the facts, and ineffective policy (that never makes it into law – or if it does, soon gets repealed). Effective policy was best created in an environment where scientific information and factual information was only introduced after all the stakeholders shared common goals and values in relation to the purpose of the group. For example, when determining the ozone concentration that constitutes pollution, effective policy can only be written when it includes the voices of all the major stakeholders and these stakeholders are unified in their goals. (Jasanoff, 1990)

    In formal decision making bodies, the shortcomings of information processing and argumentation mirror your experience in everyday conversations. People reject ‘facts’ that implicitly deny their values and worldview.

    If decision making professionals and topical experts in formal settings cannot overcome this ‘shortcoming’ in the processing of facts, perhaps rather than demanding that human nature change in service of an ideal vision (this is called utopianism, right?), we should adjust our tactics.

    What now? Let’s work on the process by which we discuss politics, values, and facts. I’d suggest that we focus on finding common values before we introduce facts. This allows us to move towards a definite goal while utilizing facts in a straightforward way that is resilient to deconstruction.

    1. Hi Jesse! Great comments! And I love that you’re citing the relevant literature as you go. Thanks for engaging!

      Regarding the stuff about morality: rebuke heartily accepted. My use of ‘post-fact’ and ‘redeemable’ is mostly an attempt to be engaging in a public-facing blog post by using terms that are already circulating in the public discourse. (And — to show my cards a bit — I am also a pragmatist. And, more specifically, I agree that condemnation is often unproductive (even when it is, strictly speaking, just)).

      Regarding your call to find common values/goals, sounds great! Where do I sign?

      But to be honest, I am not exactly optimistic about that project. When I think of the repeated instances of obstructionism in US politics, I see a lot of pragmatism, but very little interest in common goals. In fact, I see something worse. I see that the primary goal of each major, US, political party is to undermine the other party (until they think that they have enough power to fulfill all of their own partisan goals — and then, of course, pay lip service to the need for common values/goals, problem-solving, unification, bipartisanship, etc.).

      So I wonder if your call to identify common values/goals requires a prior goal: getting serious about common values/goals. And if that is right, then I find myself wanting to understand the mechanisms by which we become jointly serious about common goals. (And I hope the mechanism doesn’t require hitting rock bottom.)

      Finally, are these the Kahan et al papers that you have in mind? (The only “cultural cognition” papers that seemed (initially) to be from 2006 on Google Scholar turned out to be from 2005 and 2007.)

      Braman, D., Grimmelmann, J., & Kahan, D. M. (2007). Modeling Cultural Cognition (SSRN Scholarly Paper). Rochester, NY: Social Science Research Network.
      Kahan, D. M., & Braman, D. (2005). Cultural Cognition and Public Policy (SSRN Scholarly Paper). Rochester, NY: Social Science Research Network.

      1. Thanks for your speedy reply, Nick!

        Oh, weird – the article from 2005 is the one I was referencing.

        Your desire to understand the mechanisms by which we motivate the search for common values is something I share. How to do this seems to be the million-dollar question (or perhaps, trillion-dollar question).

        I think the key to motivating the search for common values is to make a compelling argument for the idea that this search is the most effective way for each side to reach their own goals. Looking at the history of regulatory policy (from Jasanoff), the idea that working together wholeheartedly brings each side closer to their own goals doesn’t seem too far-fetched. Finding common values can be shown to be the best option from a purely self-interested perspective.

        In your original post you said, “And sometimes people’s mission is not some sort of open-minded inquiry into the best arguments and evidence. Rather, their mission is to uphold their own group’s views and/or intentions.” I think that it is possible to show that upholding your own group’s views is best achieved through this process of first finding common values when making collective decisions. These ideas can be thought of by issue advocates as new tools for accomplishing their own goals, NOT as some idealistic moral should-do to ‘reach a solution that works for everybody’.

        Having this argument laid out and then accepted by different stakeholders is a fundamental key – I even suggested that sociology of science and ‘science in the policy process’ be mandatory coursework in grade school in one of my papers (thereby creating a generation that accepts these ideas). The acceptance of this idea in decision making bodies can be facilitated by continuous reiteration of the idea TO stakeholders AFTER they have been disappointed by these bodies failing to act effectively. If the decision-makers themselves aren’t motivated by failing to achieve their policy goals, perhaps the stakeholders who are most affected by these failings will be motivated to change their tactics to include advocacy for these superior processes. This, over time, could lead to not only more of this type of discussion, but the creation of entirely new types of decision making bodies that are formed around these ideas – in the places where it matters most.
