
Do We Need Bargh’s Selfish Goals?

(Photo credit: “Crack [Cocaine]” by Agência Brasil licensed under CC by 3.0)

This week I will be at the 2013 Consciousness and Experiential Psychology conference and the 4th Annual Experimental Philosophy Workshop in Bristol, England.  I look forward to (1) feedback and (2) afternoon tea. Below is a précis of a paper I will present:

John Bargh and colleagues have recently outlined “Selfish Goal Theory” (see Huang and Bargh, forthcoming).  They claim that (1) mental representations called “goals” which are (2) selfish, (3) autonomous, and sometimes (4) consciously inaccessible adequately explain a variety of otherwise puzzling behaviors (e.g., addiction, self-destructive behavior, etc.). The details of (1) through (4) are below.

  1. Goal: a mental representation of an end-state (e.g., if the goal is to have one’s hands washed, the end-state is washed hands).
  2. Selfish: a mental representation of an end-state is selfish if it (somehow) influences an agent to actualize its end-state.
  3. Autonomous: goals can compete with other goals (e.g., akrasia, fragmentation of self/will, etc.).
  4. Consciously inaccessible: an agent might not be aware of his or her own goal(s).

So what are Bargh and colleagues after with these “goals”? Well, they think that goals are needed to explain otherwise puzzling judgments and behaviors.

My reply begins thusly…

It is not goals, or (1), that do the work in Bargh’s explanations, but (2), (3), and (4). We can get isomorphic explanations by adopting only Bargh’s (2), (3), and (4) into our extant theories of behavior, e.g., the belief-desire model. So, if beliefs and desires with these features could do all the explanatory work that “selfish goals” do (and I think they could), and if they could do it without positing as many mental entities as Bargh does (i.e., without positing goals), then we should prefer my proposal to Bargh’s.

This proposal should seem feasible. Consider that we already use terms like “implicit belief” and “implicit desire.” Depending on the circumstance, these “implicit” mental states might involve Bargh’s (2), (3), and (4). In fact, the importance of Bargh’s theory might lie in its teasing out of some distinctive features or types of the ‘implicit.’ The point is that Bargh’s (2), (3), and (4) should easily find a home in our existing theories of behavior—certainly more easily than if we were to add a new mental state to our theories.

Now, this is not to say that Bargh’s work on selfish goals is unimportant. After all, he and his colleagues did prompt us to countenance mental features like (2), (3), and (4). Further, he and his colleagues are quite right to point out that many judgments and behaviors will be difficult to explain without these features. And it goes without saying that were it not for Bargh’s research, we might not readily know about behaviors and judgments that seem to involve these features. So Bargh’s work is important indeed(!)—both his experimental stuff and his theoretical stuff.

And so, it should be clear that my proposal is preferable to Bargh’s proposal in virtue of its being simpler, equally efficacious, and more feasible.

 


Bargh, John A., Annette Lee-Chai, Kimberly Barndollar, Peter M. Gollwitzer, and Roman Trötschel. 2001. “The Automated Will: Nonconscious Activation and Pursuit of Behavioral Goals.” Journal of Personality and Social Psychology 81 (6): 1014.

Bargh, John, and Julie Huang. 2013. “The Evolutionary Unconscious: From ‘Selfish Genes’ to ‘Selfish Goals’.” University of New South Wales, Sydney Symposium of Social Psychology, March 12.

Bargh, John, and Julie Huang. 2009. “The Selfish Goal.” The Psychology of Goals: 127-150.

Bargh, John, and Julie Huang. 2011. “The Selfish Goal: Self-Deception Occurs Naturally from Autonomous Goal Operation.” Behavioral and Brain Sciences 34 (1): 27-28.

Bargh, John A., Michelle Green, and Gráinne Fitzsimons. 2008. “The Selfish Goal: Unintended Consequences of Intended Goal Pursuits.” Social Cognition 26 (5): 534.

Bargh, John A., and Tanya L. Chartrand. 1999. “The Unbearable Automaticity of Being.” American Psychologist 54 (7): 462.

Bargh, John A., Wendy J. Lombardi, and E. Tory Higgins. 1988. “Automaticity of Chronically Accessible Constructs in Person x Situation Effects on Person Perception: It’s just a Matter of Time.” Journal of Personality and Social Psychology 55 (4): 599-605.

Bargh, John A. and Ezequiel Morsella. 2008. “The Unconscious Mind.” Perspectives on Psychological Science 3 (1): 73-79.

Bargh, John A. and Erin L. Williams. 2006. “The Automaticity of Social Life.” Current Directions in Psychological Science 15 (1): 1-4.

Huang, Julie, and John Bargh. Forthcoming. “The Selfish Goal: Autonomously Operating Motivational Structures as the Proximate Cause of Human Behavior and Judgment.” Behavioral and Brain Sciences.

 

Published by

Nick Byrd

Nick is a cognitive scientist studying reasoning, wellbeing, and willpower. When he is not teaching, in the lab, writing, exercising, or relaxing, he is blogging at www.byrdnick.com/blog

3 thoughts on “Do We Need Bargh’s Selfish Goals?”

  1. Good to hear you are over in the UK this weekend. I am disappointed not to be able to make the conference.

    Response to your blog:
    Hierarchical Systems Theory (HST) is consistent with 1 through 4:

    1. However, I think “goal” is the wrong kind of term to use because it implies a fixed directive and an end position. Alternatively, Hierarchical Systems Theory (HST) argues that mental states are systems constructs. In being systems constructs, mental states seek a stable form of representation of environmental properties and conditions. Consequently, due to the dynamic nature of environmental experience, the “goal”-posts are constantly moving: the nature of the stable construct moves with experience, and with it moves “the goal.” In other words, it is a fluid state rather than a fixed state.

    2. It is a prerequisite for a systems construct to seek, tend toward, or maintain stability of form (selfishly), for to do otherwise is to readily destabilise. States that readily destabilise tend not to maintain temporal value, are transitory, and do not have a physically influential existence.

    3. The autonomy would be multi-layered: Any given systems construct cannot avoid interacting with other constructs and with environmental properties. If, whilst doing so, a construct maintains its functional behaviour, it maintains its independence and its separate identity. However, if its behaviour becomes dysfunctional, it will cease to operate as an independent coherent form.

    4. According to HST, one can think of consciousness as a systems construct comprised of multiple levels of other types of systems constructs. Consequently, and by default, many of the characteristics associated with processes relating to consciousness are ineffable, subjective, outside of conscious awareness, etc., for these are what the construct that creates ‘conscious awareness’ requires as its components. These ‘processing’ layers or components are inaccessible to the higher systems levels that involve analytic evaluation and conceptual thinking. Each level still obeys the purposeful systems principles in so far as all physical principles must obey their causal nature in order to be physically viable, and they must demonstrate some degree of independence to ensure behavioural and structural coherence whilst interacting with external conditions and properties.

    1. Mark: I got this when I was in the UK. I rarely had internet access. When I got back, I had to hit the ground running to catch up. My apologies for my delayed reply.

      I took a look at your Hierarchical Systems Theory. I am impressed at how well you are able to zoom out when discussing systems. For example, your Type-A and Type-B definitions could apply to many systems, not just the mind or the organism. As a result, I feel a bit like I am reading Jerry Fodor when I read you—in case your opinion of Fodor is not high, I should mention that I do not think sounding Fodorian is a bad thing.

      Anyway, thanks for sharing. I hope to give HST a more thorough look sometime. I hope you are well!

      1. I hope your talk went down well in the UK.
        I am constantly reworking HST; revising and developing the ideas, whilst looking out for research that might be applied to validate it.
        The main task is to show how HST explains why different classes of systems naturally emerge and ‘grow’, thereby creating a hierarchy of dynamic constructions that engage with their macroscopic environmental conditions in ways that are increasingly complex. In doing so, HST can be regarded as an emergentist theory of mental state properties which is reducible – by virtue of its hierarchical nature – to the smallest of physical constructions.
        Presently, of all living philosophers, Jerry Fodor is whom I would like to engage with most: perhaps in hopeful anticipation of finding common ground.

Comments are closed.