Philosophical Cognition: Experimental Starting Points

One of my ongoing projects is a natural account of what goes on (cognitively, neurally, psychologically) when philosophers do philosophy. There is a growing literature in experimental psychology (and experimental philosophy) that provides hints about philosophical cognition. Below I will outline a few such studies. For links to the papers I reference, see the works cited at the bottom.


Anthony Jack & colleagues

Anthony Jack and Philip Robbins have published research on how humans judge the mentality of other creatures. Their work studies these judgments at both the behavioral and the functional level. I will begin with the functional level. Here, two distinct functional networks are of interest: the Default Mode Network (DMN) and the Task Positive Network (TPN).

These networks are interesting because of the way they interact and subserve opposing intuitions. Their relationship seems to be reciprocally inhibitory: when one is activated, the other is deactivated. And each network seems to be recruited by certain types of reasoning. The DMN is activated during social reasoning, as when subjects think about their own mental states or the mental states of others; the TPN is activated during mechanical, causal, logical, or mathematical reasoning, as when subjects watch videos about physics.
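To make the reciprocal-inhibition idea concrete, here is a toy simulation. To be clear, this is my own minimal sketch, assuming simple linear dynamics with mutual suppression and arbitrary parameter values; it is not a model from Jack and colleagues.

```python
# A minimal sketch of reciprocal inhibition between two networks.
# Assumptions (mine, not Jack et al.'s): linear dynamics, mutual
# suppression, rectified (non-negative) activation, arbitrary parameters.

def step(dmn, tpn, drive_dmn, drive_tpn, inhibition=0.8, decay=0.5, dt=0.1):
    """One Euler step: each network is pushed up by its task drive,
    decays toward baseline, and is suppressed by the other network."""
    d_dmn = -decay * dmn + drive_dmn - inhibition * tpn
    d_tpn = -decay * tpn + drive_tpn - inhibition * dmn
    # Rectify: activation cannot go below zero.
    return max(0.0, dmn + dt * d_dmn), max(0.0, tpn + dt * d_tpn)

dmn, tpn = 0.0, 0.0
for t in range(200):
    social_task = t < 100  # a social task first, then a mechanical task
    dmn, tpn = step(dmn, tpn,
                    drive_dmn=1.0 if social_task else 0.0,
                    drive_tpn=0.0 if social_task else 1.0)

print(round(dmn, 2), round(tpn, 2))  # DMN near 0, TPN near 2: the see-saw flipped
```

Switching the task drive flips which network dominates, and the dominant network actively suppresses the other; that see-saw is the feature doing the work in what follows.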

The details about these networks imply a worry: our intuitions about the mindedness of creatures could be influenced by seemingly irrelevant stimuli. For example, if someone asks you, “Do you think lobsters have minds?”, then your answer might depend on what you were doing immediately prior to being asked. If you were engaging in conversation or watching a sitcom (which presumably require social cognition), then you are more likely to answer affirmatively. If you were doing logic puzzles or programming (which presumably require logical or causal cognition), then you are less likely to answer affirmatively.

And this is indeed what Jack & Robbins found. Their subjects’ answers to questions about a creature’s mindedness and ability to feel pain varied with a prime: either a social reasoning task or a mechanical reasoning task. The suggestion is that when the DMN is recruited via social cognition, the subject will be more likely to ascribe mindedness to a creature than if their TPN had been recruited by mechanical/causal cognition. In other words, social reasoning and positive intuitions about mindedness inhibit mechanical (etc.) reasoning and vice versa.
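Here is one way to picture the result. The sketch below is purely illustrative: it assumes a logistic judgment rule and made-up effect sizes, neither of which comes from Jack & Robbins; only the direction of the shift tracks their finding.

```python
import math

def p_ascribe_mind(baseline_log_odds, prime):
    """Toy probability of answering 'yes, it has a mind', given a prior
    disposition (in log-odds) and the task performed just beforehand.
    The +/-1.0 shifts are made-up values, not measured effects."""
    shift = {"social": +1.0, "none": 0.0, "mechanical": -1.0}[prime]
    return 1 / (1 + math.exp(-(baseline_log_odds + shift)))

# Same creature (the lobster), same subject, different preceding task:
for prime in ("social", "none", "mechanical"):
    print(prime, round(p_ascribe_mind(0.0, prime), 2))
# social 0.73, none 0.5, mechanical 0.27 -- the ordering, not the
# particular numbers, is what the priming result suggests.
```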

One can easily imagine how these judgments (i.e., judgments about mindedness) could be isomorphic to judgments that philosophers make when doing philosophy of mind, philosophy of consciousness, philosophy of cognitive science, etc. So the suggestion is just what we were worried about: our judgments about mindedness can be swayed by seemingly irrelevant factors. I see no knock-down reason why we shouldn’t expect academic philosophers’ judgments to be similarly swayed.


Liane Young & colleagues

Liane Young and colleagues have conducted a variety of studies that are also germane to philosophical cognition. They have found that the concept of intention is pivotal in moral judgment. That is, whether or not a subject considers someone blameworthy for an outcome (e.g., killing a human via poison) depends heavily on whether or not the subject thinks the person intended to cause the outcome. If the subject thinks that the person did not intend the outcome, then they are far less likely to find the person blameworthy. On the other hand, if the subject thinks that the person intended the outcome, then they are more likely to report that the person is blameworthy.
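As a crude illustration of that pattern, consider the decision rule below. The rule and the numbers are my own stand-ins, not a model from Young and colleagues; they only encode the reported asymmetry between intended and accidental outcomes.

```python
def blame(intended: bool, harmful_outcome: bool) -> float:
    """Toy blame score in [0, 1]; the weights are illustrative, not measured."""
    if harmful_outcome:
        # With a harmful outcome, intent is the pivotal factor.
        return 1.0 if intended else 0.2  # accidental harm draws far less blame
    # No harmful outcome: attempted harm still draws substantial blame.
    return 0.6 if intended else 0.0

print(blame(intended=True, harmful_outcome=True))   # intentional poisoning: 1.0
print(blame(intended=False, harmful_outcome=True))  # accidental poisoning: 0.2
```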

That result is interesting in and of itself, but there is more. Young and colleagues have found a few areas of the brain that are particularly sensitive to mental state attributions such as belief and intention, notably the right temporoparietal junction [r-TPJ] (Young et al. 2007). So they recruited two groups: subjects with damage to their r-TPJ and subjects without such damage. They gave both groups a battery of moral dilemmas (e.g., whether or not to sacrifice one life for multiple lives). Sure enough, the subjects with damage to the r-TPJ responded differently to certain moral dilemmas than the subjects without such damage (Koster-Hale et al. 2013).

Young and colleagues have also done lesion studies concerning the likelihood of making utilitarian judgments about moral scenarios (Koenigs et al. 2007). It turns out that subjects with damage to certain areas of the prefrontal cortex associated with social emotions [i.e., the ventromedial prefrontal cortex, or VMPFC] “produce an abnormally ‘utilitarian’ pattern of judgments on moral dilemmas…” This seems unsurprising from the armchair. After all, if one is less likely to take one’s own emotions and/or the emotions of others into account when making a moral judgment, then it seems natural that a utilitarian heuristic would be a suitable strategy for moral judgment.
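One way to make that armchair thought concrete: treat each moral judgment as a contest between an emotional aversion signal and a utilitarian cost-benefit signal, and model VMPFC damage as a reduced weight on the emotional signal. This framing and every number in it are my assumptions, not the authors’; the sketch only reproduces the direction of the reported pattern.

```python
def endorses_sacrifice(lives_saved, emotion_weight):
    """Toy model: endorse sacrificing one life to save `lives_saved` lives
    when the utilitarian signal outweighs the (weighted) emotional aversion."""
    aversion = 1.0                     # fixed aversion to causing a death
    utility = 0.2 * (lives_saved - 1)  # net benefit of acting
    return utility > emotion_weight * aversion

for label, weight in (("intact VMPFC", 1.0), ("damaged VMPFC", 0.1)):
    print(label, [endorses_sacrifice(n, weight) for n in (2, 5, 10)])
# intact VMPFC  [False, False, True] -- endorses only at extreme trade-offs
# damaged VMPFC [True, True, True]  -- the 'abnormally utilitarian' pattern
```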

These results suggest that variation between individuals in the aforementioned brain areas might cause variation in judgment about certain moral dilemmas. Again, it is easy to see how these judgments could be isomorphic to judgments that philosophers make (e.g., when doing ethics). What I am suggesting is that natural variation in philosophers’ neurobiology could produce some of the variation between philosophers in their dispositions toward a certain method of ethics (i.e., deontology, utilitarianism, etc.).

Works Cited

Begany, K., Cesari, R., Barry, K., Ciccia, A., & Jack, A. Two domains of human higher cognition.

Cushman, F., Young, L., & Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: Testing three principles of harm. Psychological Science, 17(12), 1082-1089.

Cushman, F., Sheketoff, R., Wharton, S., & Carey, S. (2013). The development of intent-based moral judgment. Cognition, 127(1), 6-21. doi:10.1016/j.cognition.2012.11.008

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446(7138), 908-911.

Koster-Hale, J., Saxe, R., Dungan, J., & Young, L. (2013). Decoding moral judgments from neural representations of intentions. Proceedings of the National Academy of Sciences, 110(14), 5648-5653.

Jack, A. I., Dawson, A., Begany, K., Leckie, R. L., Barry, K., Ciccia, A., & Snyder, A. (2013). fMRI reveals reciprocal inhibition between social and physical domains. NeuroImage, 66, 385-401.

Jack, A. I., & Robbins, P. (2012). The phenomenal stance revisited. Review of Philosophy and Psychology, 1-21.

Jack, A. I., & Shallice, T. (2001). Introspective physicalism as an approach to the science of consciousness. Cognition, 79(1), 161-196.

Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical Studies, 127(1), 59-85.

Young, L., Cushman, F., Hauser, M., & Saxe, R. (2007). The neural basis of the interaction between theory of mind and moral judgment. Proceedings of the National Academy of Sciences, 104(20), 8235-8240.

(Photo credit: “Le Penseur” by Magnus Manske is licensed under CC BY 3.0)
