


Electronic copy available at: http://ssrn.com/abstract=2226711


---Handbook Draft---

Behavioral Ethics Meets Behavioral Law and Economics

Yuval Feldman1

1. Introduction

The last twenty years have seen a dramatic increase in the influence of psychology

on the field of economics in general and on the law and economics movement in

particular.2 As a result, significant efforts have been devoted to mapping the flaws in

human cognition and examining their implications for how individuals deviate from

making optimal decisions (see Jolls et al. 1999; Ulen and Korobkin 2000; Langevoort

1998, and Jolls 2007). For example, the literature has investigated how irrelevant factors

of context, framing, or situation can cause individuals to make decisions that are contrary

to their best interest. The literature reviewed in this chapter focuses on the importance of

these automatic processes and examines how they affect people’s moral and ethical

behavior.

It is important to clarify at the outset the difference between behavioral law and

economics (BLE) and behavioral ethics (BE). In contrast to BLE, BE has suffered from

structural limitations that have reduced its ability to affect broad legal academic circles.

For example, the dispersion of seminal BE work across many authors ended up reducing the visibility of the field as a whole, as opposed to BLE, which gained prominence in part

because the original writings on BLE were the work of two main authors: Kahneman and

Tversky. Furthermore, unlike BLE, BE was developed as part of the management

literature, which is less related to legal scholarship than BLE is, and therefore less able to

influence it. Another limitation of BE lies in its reliance on mechanisms of behavioral

automaticity and awareness, areas that are difficult to explore (for example, how is it

possible to prove that people are unaware of their selfish intentions?), whereas classical

BLE focuses on sub-optimal outcomes, which are easily examined empirically. This

places many of the findings of BE in an inferior methodological position relative to those

of BLE.

1 Associate Professor of Law, Bar Ilan University, Israel. Visiting Professor (2011-13), The Edmond J.

Safra Center for the Study of Ethics and Implicit Social Cognition Lab, Harvard University. I wish to thank

Amos Schur and the participants of the Law and Market Behavior Conference at Notre Dame University,

April 2013, for their comments. I am also thankful to Troy Schuler and Peter Grogan for research assistance, and to Doron Teichman and Eyal Zamir for their detailed and insightful comments. 2 Refer to the Teichman and Zamir introduction.


Another body of literature that stands in contrast to BE concerns the prosocial

account of human behavior (Stout 2010 and Benkler 2011). According to this literature,

rational choice models cannot account for our ability to cooperate and engage in

prosocial behavior beyond what is in our self-interest. The present chapter takes a

complementary view according to which, in some contexts, the rational choice accounts

of self-interest are under-inclusive when describing our self-interested behavior.

Both the BE and prosocial bodies of literature agree on the need to take a broader view of what self-interest is, and both reject the notion that money is people's main motivating force. But they agree less on what this means: BE believes that a

broad account of self-interest will reveal our tendency toward selfish action, whereas the

prosocial literature argues the opposite. I do not suggest that one must look at people’s

selfish choices to understand their behavior. On the contrary, the arguments of BE and of

this chapter offer a more complex view of what it means for a choice to be in one’s self-

interest and how this affects behavior. The differences are as follows:

First, BE takes the broad view that many people's actions are based on self-interest

in that they serve a need to maintain a positive and coherent view of a person's self.

Second, BE accounts for the effect that self-interest has on our cognitive processes

(for example, sight and memory), as opposed to simply looking at how self-interest

affects motivation.

Third, BE is more concerned with how self-interest affects us implicitly than how it

shapes our explicit choices.

Both bounded ethicality and the traditional behavioral economics literature focus

on automatic processes that underlie people's decision making. But although both literatures challenge the conventional account of how one's self-interest influences decision making, BE explores the automaticity of self-interest, whereas BLE explores

areas where automatic decisions undermine our self-interest (see Kish-Gephart 2010;

Bazerman and Gino 2012 and 2013).

Given the importance of intentionality to the law (for example, determining

culpability in criminal law), one would expect this line of reasoning to be much more

central to legal scholarship than it is. But the fact that BE has been included within the

field of management, not of legal scholarship, and the nascent stage of the literature have

kept it from being part of mainstream legal scholarship. In particular, BE suffers from the

simultaneous development of multiple, competing paradigms that can muddle the

underlying points on which the literature agrees. These disagreements also prevent BE

from being able to present consistent policy recommendations – another obstacle to its

adoption within the law.

The first part of this chapter is devoted to reviewing and organizing the literature into

manageable themes that deserve to be evaluated more rigorously for the purposes of legal

scholarship. The second part of the chapter examines some of the key normative issues

this scholarship raises before the law. In particular, I examine the general view of

behavioral ethics, according to which people are often unaware of their biases and of the

true motivations that underlie their decision-making processes. The analysis covers both


the scope of the effects of these System 1 biases3 and their applicability to the legal

context.

It is difficult to cover all of the works in this body of literature because it has been

growing exponentially. The argument I want the reader to take away from this chapter is

straightforward: if deliberative choice is limited, we must reexamine how to make most

interventions (e.g., incentives) in the law effective. As the critical review of literature

demonstrates, this task is especially difficult because although we know that System 1

affects ethical decisions, we do not know enough about how to counteract its undesirable effects.

The third part of the chapter examines how bounded ethicality should inform the law

and economics scholarship. In particular, I examine the shift from addressing individuals

to addressing the underlying situations, and the need to determine what serves as an

incentive in a world in which people's explicit ethical choices are secondary to their

implicit ethical behaviors. I also examine how much of the literature on debiasing and

nudging, which were popular normative reactions to the classical behavioral law and

economics literature, can be relevant to the bounded ethicality scholarship. I conclude

with some suggestions concerning disambiguation and forced reflection, which may be

able to curb the types of non-compliance uncovered by the bounded ethicality literature.

2. The Complex Effect of Self-interest on Ethical Behavior

In this part of the chapter I describe the research on ethical decision making that is

relevant to the law.

2.1. Automaticity of Self-interest

Behavioral ethics relies on the fact that self-interested behavior is automatic. We

must carefully distinguish, however, automaticity from the related, overlapping concepts

of awareness, controllability, intentionality, and attention.

Despite the fact that automatic self-interested behavior is fundamental to

behavioral ethics, most of the experimental evidence is inconclusive about how it

operates. Even if we accept the automaticity of self-interest, we do not necessarily know

the extent to which we are aware of this predisposition and whether it is malleable or

even controllable. Because of the importance of self-interest to the law in both systems, it is

particularly important to understand the interplay between Systems 1 and 2 in ethical

decisions.

3 The concept of two systems of reasoning has gained popular recognition through Kahneman’s recent book, Thinking, Fast and Slow (2011). See also Stanovich and West (2000) and Evans (2003), as well as the Zamir and Teichman introduction. The general concept differentiates between System 1, an automatic, intuitive, and mostly unconscious process, and System 2, a controlled and deliberative process.


2.2. Current Research on Automaticity

The “automaticity of self-interest” is of fundamental importance to the present

study. Therefore, we take a step back to review the history of research on automaticity in

general and on the automaticity of self-interest in particular. The researcher most

identified with the effect of automaticity on behavior is John Bargh (1992, 1993, 1996,

and 1999). In a long series of experiments, Bargh explored the meaning and significance

of automatic processes. The main argument he advanced is that the majority of daily

behaviors are the product of automatic and unconscious processes. He also suggested that

these behaviors are below our level of awareness, although the research is inconclusive as

far as this part of the argument is concerned.

Automaticity is typically examined using the technique of “priming.”4 In a well-

known study, priming stereotypes of the elderly and of African Americans caused people to walk more slowly and to behave more aggressively, respectively. Priming effects have also been observed in legally relevant behaviors such as competitiveness

and cooperation (Bargh 1996; Kawakami 2003; Kay 2004).

Money has also been shown to be an effective prime, a finding important both methodologically and theoretically. In a series of nine experiments, Vohs et al. (2006, 2008; for a related study

see Kay et al. 2004) demonstrated that exposing people to subtle reminders of money

completely changed their mode of behavior in the areas of cooperation and social

interaction. Aquino et al. (2009) compared the effects of priming with such stimuli

as the Ten Commandments, morality, and financial incentives on such behaviors as

contribution to public causes and dishonesty.5 Even more recently, Kouchaki et al. (2012)

demonstrated that priming with money increases the chances that people will engage in unethical behavior. These recent studies show the relevance of priming to our

understanding of automatic human behavior and its importance to the law.

Another relevant example of automatic behavior is “embodiment.” In contrast to

priming, which attempts to circumvent System 2 by creating a stimulus that cannot be

fully processed by it, embodiment involves a physical stimulus that interacts with the

individual’s body (e.g., Niedenthal 2005). Kouchaki et al. (in press) showed, for

example, that carrying heavy weights caused people to feel guiltier about a given

behavior than did people carrying lighter weights. In another example, washing one’s

hands has been shown to reduce feelings of guilt when recalling unethical behavior

(Zhong and Liljenquist 2006) and to reduce the severity of moral judgments against those

who behaved unethically (Schnall et al. 2008). Although these techniques are relevant to

the law because they show the automaticity of some legally relevant behaviors, the most

important question is how automatic processes affect our reasoning capabilities. The

following section examines this question.

4 Priming involves displaying some cue (for example, an image) to participants in such a way that they are

not consciously aware of the fact that they have been exposed to the cue. 5 The study also documents the important role of individual differences in behavioral ethics, discussed later in the chapter in the context of moral identity. See also Mazar, Amir, and Ariely (2008) for an even earlier demonstration of the effect of the Ten Commandments.


2.3. Motivation and Cognition

Many of the paradigms that underlie behavioral ethics are based either directly or

indirectly on “motivated reasoning.”6 Kunda (1987, 1990), a seminal author in this area,

argues that our internal motives have a significant effect on our complex reasoning skills.

An example of such a complex reasoning skill is our ability to establish a causal

connection between events. In particular, Kunda defines two distinct types of motivated

reasoning: in one, our motive is to arrive at an accurate conclusion; in the other, our motive is to arrive at a desired conclusion.

The latter type of motivated reasoning is the key behavioral process that underlies

behavioral ethics. Kunda suggests that motivated reasoning affects our higher level

reasoning skills, but more fundamental evidence suggests that motivated reasoning

affects our memory and perception, both skills highly relevant to the law. For example,

Shu et al. (2011) showed that within very short periods of time people misremember both

what they did and what they were told to do, when such misremembering allows them to

believe that they had acted ethically. In particular, participants who cheated in the

experiment demonstrated higher moral disengagement, and those who had read an honor

code before the experiment demonstrated significantly worse recall of the code than did

those who did not cheat. Our compliance with the law depends on our remembering the relevant law; therefore, the notion that memory itself could be corrupted by our self-interest becomes relevant to how we design laws and to the consequences of our design

decisions.

Balcetis et al. (2006) used a great variety of physiological measurements,

including eye tracking, to show that motivation affects visual capabilities. These studies

showed that preconscious processing of stimuli in the visual environment was affected by

people’s preferences, suggesting that awareness is indeed highly limited when it comes to

the ability to process information that is inconsistent with one's self-interest.

2.4. Awareness

The above lines of research have implications for one of the fundamental issues

underlying behavioral ethics: even if self-interest is automatic, are we at least aware that

we are behaving in a self-interested manner? The previous two experiments indicate that

in all likelihood we are not. Indeed, most of the literature on bounded ethicality in

general, and on self-deception in particular, has found that we are unaware of the effects

that System 1 has on our level of awareness of these biases.7

6 With the exception of theories such as moral licensing or compensatory ethics, seen in the work

of Merritt et al. (2011), which focus on how people maintain their self-concept by making a

change in their behavior. But even these theories take into account some change in people’s

understanding of the action to maintain a positive view or receive some potential credit. 7 For further reading, below are some findings by scholars who participated in the symposium on self-deception and awareness organized around William von Hippel and Robert Trivers, “The Evolution and Psychology of Self-Deception,” Behavioral and Brain Sciences (2011). In this symposium, the question of awareness of behavioral ethics biases is addressed by a group of scholars from various perspectives. The views on awareness of self-deception are important to any legal treatment of bad deeds performed by people who suffer from these biases.

The following authors supported the notion that we can engage in self-deception without


Naturally, whether or not we are aware of these automatic processes and of the behaviors

they cause has implications for how we approach criminal law. Even beyond the criminal

domain, however, the lack of awareness should lead legal policy makers to rethink what

interventions may be necessary in other domains of law.

2.5. Automaticity and Ethical Behavior

The automaticity of self-interest is one of the most basic tenets in behavioral

ethics, yet at the same time one of the most controversial ones, both theoretically and

empirically. Moore and Loewenstein (2004) were among the first to attempt to show that

self-interest and concern for others affect our behavior through different cognitive

systems, and that self-interest is automatic, viscerally compelling, and often unconscious.

By comparing people's private (incentivized for accuracy) and public (incentivized to be

biased) beliefs, Moore (2010) demonstrated that people really believed their own biased

judgments, not recognizing that their behavior was problematic. A similar view is

advanced by Gino et al. (2011), who demonstrated that the level of control needed to

behave ethically is much higher than that needed to behave unethically.

A related but different mechanism of automaticity of self-interest is that of egotism.

In contrast to the automaticity of self-interest, which focuses on what's good for me,

egotism focuses on the tendency of people to appreciate and unconsciously prefer things

they associate with themselves. This includes various personal details such as their name

awareness. Bandura (2011) suggested that self-deception is possible whenever we strongly

suspect the truth but deceive ourselves by avoiding actions that would confirm the truth of our

suspicions.

Frankish (2011) went beyond System 1 and System 2 analysis, arguing that individuals develop

attitudes that support their ability to accept phenomena as true.

Buss (2011) maintained that there is not enough empirical data to prove whether or not unaware

self-deception is possible.

Huang and Bargh (2011) suggested that our ability to separately pursue multiple goals at once

allows for conscious self-deception.

Johansson et al. (2011) added evidence to the possibility of conscious self-deception by analyzing

the results of choice blindness experiments.

Kenrick and White (2011) proposed a modular self beyond System 1 and System 2, whose

independent modules are capable of creating conscious self-deception.

Lu and Chang (2011) pointed out memory systems that allow the brain to hide and retrieve

information later, allowing for conscious self-deception.

McKay et al. (2011) supported the development of conscious self-deception by elaborating on its

social benefits.

Mercier (2011) suggested that conscious self-deception is a form of confirmation bias.

Preti and Miotto (2011) examined how self-deception allows individuals to become better at

cheating and detecting cheating.

Troisi (2011) used somatoform disorder to show why conscious self-deception is possible.

A minority of authors argued against the concept of unaware self-deception. In separate studies,

Smith, Pinker, and Dunning (2011) explained why current examples cannot show that self-

deception is conscious rather than unconscious.


and even items to which they were exposed in the past (Nuttin 1987). The role of

egotism, like that of priming and embodiment, is troubling insofar as we cannot

determine how aware we are of the biasing influence of such seemingly unrelated facts as a shared birthday.

2.5.1. The Automaticity of Morality

An intense debate has been raging over whether System 1 or System 2 is

responsible for making moral judgments. Jonathan Haidt (2001, 2004) demonstrated

experimentally the role of System 1 in forming our moral reasoning. Haidt argued that

moral reasoning, a product of System 2, is likely not to drive our moral judgments

because the automaticity with which we evaluate targets undercuts the notion that

deliberative, System 2 processes come into play in moral evaluations. He also cites

behavioral evidence of our tendency to create post hoc reasons to explain events, and the

importance of moral emotions in moral judgment, to demonstrate the likelihood that

System 1 is responsible for moral judgments (see also Mikulincer and Shaver 2011).

The views advocated by Haidt have been subject to a variety of criticisms by

psychologists. For example, Cushman et al. (2006) argued that the intuitiveness of

morality cannot be assumed across the board and that there is a variation in the

accessibility of different moral principles in people’s automatic and conscious

mechanisms.

2.5.2. The Debate on Whether System 1 or System 2 Is Responsible for Unethical Behavior

In contrast to the above discussion on moral judgment, the majority view in the

literature grants System 1 the leading role in accounting for times when we choose to act

unethically. Epley and Caruso (2004) concluded that automatic processing leads to

egocentric ethical interpretations. Similarly, Van den Bos et al. (2006) found support for

the notion that when appraising a situation, we prefer outcomes that benefit ourselves,

and only later correct to take into account fairness toward others. Using an implicit

association test, Marquardt and Hoeger (2009) showed that decisions were made based

on implicit rather than explicit attitudes (although they also found that implicit attitudes

were correlated with choices that subjects believed to be moral). Moore and Loewenstein (2004) suggested that the effect of self-interest is automatic and associated with System 1, writing that “in many instances of conflict of interest, self-interest tends to operate via automatic processes whereas ethical and professional responsibilities operate via controlled processes.”

Thus, according to the majority view, System 1 is responsible for most types of

soft unethical behaviors, where, given more time and reflection, most people (“good

people” or those of standard moral character) would choose to behave in the ethically

appropriate manner. As further proof of this notion, in the context of dishonesty research,

Shalvi et al. studied the effects of time pressure on honesty and suggested that people's

automatic reaction tends to be more dishonest (see also Gino et al. 2009 and Chugh,

Bazerman, and Banaji 2005).


A minority opinion in this literature holds that people's automatic responses are at

least some of the time more cooperative than their deliberative responses. Greene et al.

(2009) showed that dishonesty involves more of the control-related parts of the brain than

honesty does. In a more recent work, Rand, Greene, and Nowak (2012) showed that

people’s immediate response is more cooperative than their reflective response. An open

question is whether it is possible to reconcile these approaches, at least partially, by

suggesting that being cooperative and dishonest serves people's intuitive self-interest,

especially in a social context, where appearing better seems to be the rational move from

an evolutionary perspective. Although this in itself does not suggest that self-interest is

deliberative, at least in some accounts of self-interest it suggests that automatic reasoning

is superior to deliberative reasoning. In support of this dissenting view, Kahan (2013)

demonstrated that people who score high on cognitive reflection (i.e., are more likely to

use System 2 reasoning) are more likely to engage in motivated reasoning, which, as

noted, is an enabling mechanism for unethical behavior.

2.6. Where Did All the Bad People Go?

In his recent book about dishonesty, Dan Ariely (2012) cited an observation by

locksmiths that locks are not used against “bad people,” who can easily break into locked

homes, but rather against good people who would nevertheless walk through an open

door when they see one. The view espoused in this anecdote seems to be shared by many

of the newer paradigms: we are all potentially bad people.

This focus represents the growing recognition that many ethical decisions are the

result of implicit, not explicit choices. Simply reviewing the titles of current papers

shows how central this theme has become.8 Admittedly, none of the authors listed would

suggest that there are no bad people. But the contrast between saying that people are good

or bad and saying that a person’s individual deeds are good or bad suggests a growing

recognition that many ethically relevant behaviors that were previously assumed to be

choice-based, conscious, and deliberative are anything but.

Along the same lines, Bazerman and Banaji (2004) suggested that incentives and

similar concepts fail to correct a large portion of ethical behaviors because "such

measures simply bypass the vast majority of unethical behaviors that occur without the

conscious awareness of the actors who engage in them.” The lack of awareness suggests

that fundamentally much of the unethical behavior is engaged in by ordinary, good

people who are unaware of their unethical behavior.

8 For example, The Dishonesty of Honest People: A Theory of Self-Concept Maintenance (Mazar, Amir,

and Ariely 2008), Why Good People Sometimes Do Bad Things: Motivated Reasoning and Unethical

Behavior (Bersoff 1999); How Good People Make Tough Choices: Resolving the Dilemmas of Ethical

Living (Kidder 2009); When Good People Do Wrong: Morality, Social Identity, and Ethical Behavior

(Pillutla 2011); Why Good People Do Bad Things: Understanding Our Darker Selves (Hollis 2008);

Blindspot: Hidden Biases of Good People (Banaji and Greenwald 2013). Many others do not use the term

“good people” in their titles but make the same argument in the text (for example, De Cremer 2011). This is

also the view held by Bazerman et al. (2002) in their studies on why "good" accountants conduct bad

audits. Note that the “good people” scholarship is usually different from the type of research

conducted by Zimbardo (2007) on the Lucifer effect or more recently by Valdesolo and DeSteno

(2009) on character. These works generally try to explain how ordinary people end up doing evil

or at least engaging in gross criminal behaviors.


Our concern, however, is not with what people do but rather with understanding the

situational and psychological circumstances that allow them to do it. Therefore, much of

the discussion in the following descriptive section is about understanding the various

mechanisms responsible for bounded ethicality.

With the focus on "good" people, an array of questions arises with regard to the

meaning of good and bad. Do all people have the same chance of engaging in this form of

unethical behavior? Is there truly no awareness of unethicality in the case of these

behaviors? And if there is no awareness, are these behaviors impossible to control or

prevent? Should we replace current enforcement methods (e.g., deterrence, legitimacy)

because they are ineffective when it comes to automatically triggered unethical

behaviors? Current studies are far from answering these questions, and some portions of

the literature contradict the notion that good people perform bad deeds.

2.7. Are We All Truly Good?

An important question for the law is whether or not people are indeed blameworthy

for the types of white collar misconduct that behavioral ethics addresses. A small portion

of the research in behavioral ethics suggests that much of the variance in mindless

unethical behaviors can be attributed to personal traits rather than simply to the power of

the situation (Feldman and Smith 2013). For example, Greene (2009) suggested that there

are strong individual differences in people's basic morality, which affects the likelihood

that they will engage in automatic reasoning (note that automatic reasoning has been

associated with dishonest responses).9 An additional approach suggests indirectly that

there are strong individual differences associated with “moral hypocrisy,” and that

people who have low self-awareness are more likely to ignore their own behavior and to

judge others more harshly. Neither of these theories answers the underlying question of whether we are aware of, and therefore responsible for, our actions, but they may suggest

that we must rethink our definition of what it means to be a good person.

3. Taxonomy of Main Bounded Ethicality Mechanisms

The following paragraphs review some of the main new processes that underlie

behavioral ethics, which policymakers and academics must consider when evaluating the

effectiveness of legal interventions.

3.1. Self-concept Maintenance

Self-concept maintenance suggests that we employ motivated reasoning to bridge

the dissonance that appears between our bad deeds and the desire to view ourselves as

9 The study involves an fMRI experiment in which participants were asked to predict the

outcome of a coin flip. Participants were presorted based on a questionnaire into honest,

dishonest, or ambiguous, and were asked to either self-report accuracy or to write down their

guesses in advance. Results show that members of the honest group generated no more brain

activity when they lost money and had no opportunity to cheat than they did when they lost

money but had an opportunity to cheat. This supports the “grace” hypothesis that being honest is

not the result of a “choice” involving greater brain activity.

10

good people. For example, Ariely et al. (2008; 2012) found that people cheat only to the

extent that it allows them to maintain their self-concept of being honest.

Bersoff (1999), one of the early researchers working in this area, showed how people use neutralization methods to distance themselves from unethical acts. Shalvi et al. (2011) explored people's ability to change their view of reality in order to justify their unethical behavior. They found that after merely having been shown an alternative outcome of a game, participants were willing to lie about which outcome had befallen them. Participants rolled dice under a cup so that only they could see the results. Although instructed to report the result of the first roll in exchange for money corresponding to the value of the die, participants felt morally comfortable reporting the second, third, or subsequent rolls in order to receive a higher reward. Dana et al. (2007) used a version of the dictator game in which players could choose to keep 5 or 6 dollars and were not immediately told how this choice affected the other player, although they could reveal the outcome of their choice at no cost. In this scenario, 63% chose the inequitable outcome, compared with 26% in the standard game. Participants preferred the "moral wiggle room" of believing that they had not behaved unethically. In doing so, they engaged in a conscious process of information avoidance,10 leading the authors to the dualist view that people desire to pursue self-interest while maintaining the illusion of behaving fairly.

A related theory that emphasizes the adjustment of reality is referred to as "ethical fading" (Tenbrunsel and Messick 2004), according to which multiple processes make it possible for us to deceive ourselves, allowing ethical concerns to fade into the background of the decision-making process. The need to maintain our self-concept is also central to the moral hypocrisy theory. According to this theory, we can maintain our self-concept by not comparing our behaviors to our preexisting moral standards. Batson (1999) found support for this idea and showed that self-interested behavior decreased when participants were placed in conditions of high self-awareness. The theory also suggests that people reconcile the dissonance in another way: instead of thinking of themselves as objectively good people, they focus on being "better" than those around them.

The moral licensing theory, also referred to as moral credentials, is based on the

same notion of self-concept maintenance but suggests that we use good deeds to excuse

later bad deeds (see also Merritt et al. 2010 and Effron and Monin 2010). Relying on this

theory, Monin and Miller (2001) found that participants who believed that they had

previously established their moral credentials (in this case a lack of prejudice) felt

empowered to subsequently express views that conflicted with moral norms. Sachdeva et

al. (2009) examined not only past deeds but also whether our self-identity was threatened,

and found a similar effect. Although traditionally we tend to view those who have behaved badly as more likely to do so in the future, Merritt et al. (2010) were able to show that

in practice the reverse is true: those who have behaved well in the past are more likely to

behave badly in the future if the behaviors are focused on the same dimension.

10 We discuss the question of awareness in greater detail below.

3.2. Moral Disengagement

Another theory based on dissonance is the application by Moore et al. (2012) of Bandura's (1999) theory of moral disengagement. Bandura suggested that people use

moral disengagement in order to justify performing inhumane actions such as murder.

Similarly to previous work by Sykes and Matza (1957), Bandura offered eight

mechanisms by which individuals are able to convince themselves that their actions are

not immoral, thereby preventing the self-sanctions that they would normally apply to

keep their actions consistent with their personal ethical standards: (a) moral justification

used to explain why an immoral act actually has a moral purpose; (b) euphemistic

labeling used to reclassify an action like “stealing” as a more innocuous act such as

“shifting resources;” (c) palliative comparisons used to explain why the immoral action is

a better option than its alternatives; (d) displacement of responsibility accomplished by

blaming a superior who is believed to have ordered the immoral act; (e) diffusion of

responsibility by stating that the immoral individuals were merely part of a group in

which everyone was performing the immoral action, or that their part of the action was

not immoral; (f) trivializing the consequences of the act to make it seem more

acceptable; (g) dehumanizing the victim to render the action acceptable; and (h) blaming

the victims to render the immoral action acceptable by claiming that they provoked it.

Moore et al. (2012) applied the eight categories to explain self-deception in the context of employment. Among the dimensions discussed, the use of euphemisms and the distortion of others' views or of one's own are self-deception mechanisms that people employ to justify their bad behaviors. Broadly, Moore et al. proposed that moral

disengagement has three important effects on institutional corruption. First, it can make

unethical decisions easier by reducing psychological discomfort in making such

decisions. Second, because moral disengagement excludes moral values from

consideration, Moore went on to say that it expedites unethical behavior by freeing up

cognitive resources to work on the unethical goal. Finally, Moore believed that because

these actions promote the interests of the corporation, individuals who morally disengage

rise in the corporation, perpetuating unethical behavior.

Yet another attempt to offer a taxonomy of the available mechanisms of self-deception was made by Ayal and Gino (2011), who analyzed much of the existing literature and compiled a list of the most important mechanisms that allow people to behave dishonestly.11 The list focuses on the exploitation of ambiguity and on moral cleansing, a process of mental appeasement that people use to distance themselves from dishonest behavior. Ayal and Gino also suggested a novel mechanism, which they called "local social utility," whereby individuals increase the good that comes from committing a bad act in order to justify it.

Another list of mechanisms is offered by Ashforth and Anand (2003), who

examined the type of arguments people use to legitimize corruption in organizations. The

authors referred mostly to highly deliberative processes that people perform when considering whether to engage in a certain act. Many of the arguments listed are highly

relevant to legal policy makers attempting to understand the likelihood that good people

will not fully comply with the law. The following rationales are among the most important. First, actors may excuse corrupt practices on the grounds that they are not illegal. Second, actors construe that they have no choice because of circumstances beyond their control, such as management orders, peer pressure, dire financial straits, being deceived, or existing precedent (everyone else does it). Third, actors claim to have played only a small part in the deed. Fourth, actors deny that anyone was harmed (e.g., the organization is insured or can easily recover the costs, the actual damage is slight, the organization does not appear to care). Fifth, actors blame the victim. Sixth, actors discount the values and beliefs of others (e.g., "the corrupt may characterize a disliked law as vague, complex, inconsistent, rarely enforced, punitive, or politically motivated such that enforcement is capricious or malicious"). Seventh, actors appeal to higher loyalties: the group construes that universalistic ethical norms must be sacrificed for more important causes (e.g., group loyalty). Finally, there is the metaphor of the ledger, similar to moral licensing, "where good works (whether actual or anticipated) earn a credit that can be used to offset corrupt acts."

11 See the discussion in the normative section about whether dishonesty is a broad enough concept to accommodate the law's interest in driving people to behave in a desirable way.

3.3. Summary of the Descriptive Part

The common theme in the literature on bounded ethicality is that unethical behaviors are not the product of explicit choices to do wrong but rather are largely the product of System 1, mindless behaviors. But a closer look at some of the mechanisms described in the bounded ethicality literature shows that in many cases there is awareness of the misconduct, and yet we still regard the behavior as "bounded" because of mechanisms such as rationalization, lack of willpower, or lack of attention, which prevent individuals from acting ethically. Another common theme in the literature is the need to protect our self-image, to resolve the dissonance between believing that we are good people and our desire to maximize self-interest at all costs. Resolving this dissonance is reflected in almost all the paradigms reviewed in this chapter. A final theme, suggested by most theories, is that because the decisions are produced by System 1, they are made with limited awareness of the self-interest motivating them.

Although most scholars believe that self-interest is automatic, the lack of conclusive research about whether we can control behavior despite its automaticity, or at least be aware of the behavior and compensate for it, casts doubt on whether those who

engage in these automatic, unethical behaviors are still good people. Moreover, because

these biases serve rather than contravene self-interest, people are unlikely to want to give

them up.

Above we have reviewed a body of literature with great potential for shaping legal policy making. At the same time, we recognize a lack of consensus on the most important aspects of bounded ethicality, including competing and redundant underlying explanations of the relevant behaviors. Furthermore, the literature contains several theoretical and methodological flaws: abstract experimental paradigms, limited accounting for individual differences, and heated debate over methods such as priming, the IAT, and fMRI (in the context of morality) (Cameron et al. 2012; Doyen 2012; but see also Bargh 2012 and Yang 2013). There is also limited research on the long-term effects of many of the celebrated BE experimental manipulations. As we have shown, the methodological difficulty of BE derives from the fact that awareness, which is notoriously difficult to measure, is a crucial aspect of this field: awareness may dictate whether individuals engage in these self-interested behaviors intentionally or unintentionally.

The notion of limited moral responsibility for unethicality makes the normative part

of the present chapter especially challenging. Some theories suggest possible solutions,

but the questions raised by the literature are far more numerous than the solutions

currently offered. Naturally, the stronger the case that unethical acts are committed by

people who do not realize they are behaving unethically, the more limited our ability is to

change this situation.

4. Normative Implications of Ethical Behavioral Mechanisms

It seems to be a straightforward argument that individuals’ limited awareness of

their bad behavior requires a substantial revision in the design of legal rules that govern

people (Feldman and Smith 2014). Although the management literature is burgeoning

with papers attempting to offer various prescriptions for organizations to respond to this

concern, the legal scholarship has mostly ignored this need. For this reason, many of the

papers reviewed here are from the management field, and I attempt to suggest that their

insights are applicable to the field of law as well. Indeed, a few of the leading scholars in

behavioral ethics were quick to point out that classical intervention techniques, such as

penalties, are rendered ineffective if ethical decisions are produced without awareness by

System 1 (Banaji et al. 2003, but see also Amir and Lobel 2009). In later parts of the

chapter I suggest ways in which the law should respond to this insight. As in the first

parts of the chapter, I use "traditional" law and behavioral economics as a baseline

relative to which I evaluate the ways in which the law may be able to account for people's

automatic reasoning either directly (e.g., the nudge approach) or indirectly, by forcing

people to overcome their automatic reasoning by means of System 2 (see also Amir and

Lobel 2012).

Overall, the normative discussion suggests that the classical dichotomy of intrinsic vs. extrinsic motivation in law may have to be replaced by a different, broader dichotomy that combines the controlled and automatic modes of cognition underlying ethical reasoning and motivation. In discussing these areas of law, I attempt to address questions such as: How strong is the dichotomy between the common bad deeds of good people and the more extreme bad deeds of bad people, and how should the law respond to both? To what extent can System 2 enforcement change behaviors that are triggered by System 1? To what extent can fairness, social norms, and morality be used to curb automatic unethical behavior?

4.1. The Nudge Approach

Following Thaler and Sunstein's influential book on the use of nudges to address bounded rationality, some behavioral ethics scholars have attempted to apply the nudge approach in the context of bounded ethicality. For example, Shu et al. showed that people were more honest if they signed an agreement at the top of a document, which made the need for honesty salient before they completed the document, than if they signed at the end, by which point the decision whether to be honest or lie had already been made. Another nudge approach is illustrated by Bohnet et al. (2012), who showed that when people evaluate others in a between-subjects comparison rather than in a within-subject comparison, negative stereotypes regarding race become less prominent.

The authors believe that comparing multiple candidates requires more deliberative,

System 2 reasoning than making a simple yes-or-no evaluation of a single candidate,

which is decided by System 1 and is therefore more prone to being biased.

Amir and Lobel examined the limitations of using nudge-like approaches to solve policy problems. In their review of the recent books by Ariely and Sunstein, on irrationality and nudges respectively, Amir and Lobel noted that if policymakers are to implement successful policy solutions, they need to do a more thorough job of classifying the mechanisms underlying behavioral biases, for example, determining whether bounded ethicality or bounded rationality is at issue.

Although some of the normative suggestions outlined in this chapter could be

seen as related to the nudge approach, there is a limit to the ability of these tactics to lead

to a sustainable change. In the original nudge context of behavioral economics, people

prefer to save money, quit smoking, choose the best schools for their children, enroll in

health programs, lose weight, and avoid taking overly expensive loans. In other words, the nudge works in part because it helps people achieve goals they have already formulated.

As noted earlier, in an ethical context the situation is much different: to a large

extent, especially when enforcement is limited, people benefit from cutting corners,

avoiding taxes, and under-performing in a contract, but they delude themselves into

believing that they have not misbehaved. Thus, if the objective is to drive them away

from their self-interest, we are likely to encounter significant resistance to nudge-like

approaches. We must note an additional concern regarding the face validity of some of

the nudge approaches: often the studies examine ethical shifts produced by atypical cues.

For example, Gino and Desai (2012) found that cues from childhood, such as soft toys

and nursery rhymes, decrease unethical behavior in participants. Although studies such as

these contribute greatly to understanding how people make ethical decisions, there are

obvious limitations to their applicability in the context of the state. Indeed, given that this research is led by management scholars, some face validity issues are less of a concern, because organizations are more able than the state to enforce rules against unethical behaviors.

4.2. Debiasing

Apart from the nudge approach, probably the best-known area of research in this field is that of debiasing, which is an attempt to help people overcome their biases by making them reevaluate their errors. Christine Jolls and Cass Sunstein are among the leading

scholars in this area. In a thorough analysis, both theoretical and applied, they addressed

a long list of legal contexts including litigation, employment, and consumer protection

law. In this context too, some research in ethical decision-making has begun to examine

the possibility of debiasing people using various techniques that appear to be similar to

those described in the behavioral economics literature. The most elaborate process is

illustrated in the work of Tenbrunsel et al. (2007) on “ethical mirage.” The authors’ focus

is on the biases responsible for the awareness gaps between the ethicality of what people

do, what they want to do, and what they think they did. In particular, they focus on the


concept of a “should self,” which is dominant before and after decision making and is

capable of describing level-headedly how we ought to act.

Similarly, Schurr et al.12

suggested broadening individuals’ decisions and forcing

individuals to plan future choices, making it difficult for them to engage in self-deception

to justify decisions based on self-interest. The concern, in this case also, is that people

have little incentive to adopt these solutions because they run counter to their self-

interest.

4.3. Limits of Fairness as a Remedy to Self-interest

Traditionally, the force acting to balance the effect of self-interest is fairness (for

a recent review and discussion of the limitations of its effect, see Feldman and Tyler

2012). The original behavioral economics literature focused heavily on fairness. In their

well-known paper on fairness, Kahneman, Knetsch, and Thaler (1986) put forward the

idea that people's sense of fairness could overcome their desire to act in their self-interest. Using survey data on the different roles that people play, for example as consumers, they showed the importance of maintaining a sense of fairness. The literature on

behavioral ethics, however, has responded with a relatively intuitive counter-argument:

fairness is a concept that is highly susceptible to self-interested interpretations, casting

doubt on its ability to counter self-interest effects (Thompson and Loewenstein 1992; Babcock et al. 1995). In a more recent paper on morality in war, Uhlmann et al. (2009) showed how sharing an affiliation with the victims affected one's judgment about them.

4.4. Social Norms

Like fairness, social norms can affect the likelihood that people will

engage in unethical behavior. The effects of social norms have been incorporated into law

and economics based on rational choice approaches, taking into account aspects such as

reputation, expressive effects, shaming, and social sanctioning. Cooter (1998) developed

an economic theory of how the expressive values of law can shape social norms and

individual preferences. Ellickson (1998) described the embrace by law and economics of

status and social norms as mechanisms for the informal enforcement of the law. In

another example, looking at trade secret law, Feldman (2009) found experimental support

for the notion that the expressive function of law operates through a combination of

intrinsic and extrinsic forces.

Shaming in particular has received much attention in the law and economics literature.

Kahan and Posner (1999) examined how shaming penalties can work to shape

preferences against crime and express social norms. Skeel (2001) expanded this literature

beyond formal law, examining how shaming can be used by private parties to shape the

preferences of corporations. In contrast, the behavioral ethics literature suggests that social norms change behavior mostly through subconscious effects that are not associated with the costs and benefits of following social norms. For example, Gino and Galinsky (2012) examined the effect of psychological closeness on the likelihood that social norms cause people to engage in unethical behavior. Gino, Ayal, and Ariely (2009) showed that salience and group identity are important factors in the effect of unethical norms on the likelihood that people engage in dishonest behavior.13

12 Schurr, A., Ritov, I., Kareev, Y., & Avrahami, J. (2012). Is that the answer you had in mind? The effect of perspective on unethical behavior. Judgment and Decision Making, 7(6).

Another aspect of social norms with regard to which the behavioral ethics

literature innovates over earlier accounts is the asymmetric influence of unethical and

ethical norms on our behavior (Cooter, Feldman, and Feldman 2008; Epley and Dunning

2000). Given our tendency for self-concept maintenance, we are more attentive to

unethical behavior in order to subsequently justify our ethical missteps. This concept

underlies the previously discussed theory of moral hypocrisy (Barkan et al. 2012).

5. Tentative Suggestions

In the previous sections I focused on broad ideas such as fairness, social norms,

debiasing, and the regulation of situations as some of the main normative approaches

used to apply behavioral ethics to behavioral law and economics. Below I explore some

concrete research directions for legal policy making that are sensitive to the unethical

behavior associated with automatic processes.

5.1. Statistical Unethicality

Given the difficulty of determining individuals' awareness of their unethical decisions, behavioral ethics suggests focusing on the aggregation of people's decisions as proof of wrongdoing. This view can be derived from work in the area of employment discrimination, where the inability to penetrate people's minds (Krieger 1995) has led in some cases to an aggregated approach to evaluating decision making. Beginning with Griggs v. Duke Power Co., the US Supreme Court has recognized that although a company's workforce is not required to replicate the composition of the general population, statistical disparity between the two can serve as compelling evidence of employment discrimination under a disparate impact theory. According to this theory, discrimination is established not by proving that the employer intended to treat candidates differently, but by showing that the employer relied on criteria that resulted in discrimination against a class of individuals (Shoben 1983).

Similarly, in the realm of bounded ethicality, rather than assuming bad intentions where we cannot prove them, it may be possible to collect data on ethical decisions over time and to create criteria to be applied if the aggregation of behaviors indicates that one should have been aware of the negative effect of one's actions, even if each individual decision was made without awareness. For example, misuse of office supplies or the improper acceptance of gifts can constitute misbehavior even if any individual instance is merely questionable; a sufficient number of marginal instances can warrant sanctioning, regardless of the actor's intent. Important jurisprudential work remains to be done to account for the ability to allocate responsibility for one event based merely on the fact that it has been repeated. But given the complexity of determining responsibility for single events, and the ability of System 2 to predict the likelihood that such unethical acts recur, a solution of this type may be necessary.

13 In an organizational context, Blake E. Ashforth and Vikas Anand (2003) focused on three processes: institutionalization, rationalization, and socialization.

Along the same lines, another needed change concerns legal enforcement, especially in criminal law, where we require a "smoking gun" to inform us that a wrong has been committed (Shapiro 1989). In itself, the need for a smoking gun assumes a dichotomy between deliberate bad acts and good acts, and it does not recognize the need for the state to conduct screening in order to catch behaviors that are not committed with full awareness.
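The aggregation idea can be illustrated with a simple sketch. The scenario, numbers, and baseline probability below are hypothetical illustrations, not drawn from the chapter's sources; the point is only that an exact binomial tail test can flag a pattern of marginal, self-favoring decisions that no single decision would expose:

```python
from math import comb

def p_value_at_least(k: int, n: int, p: float) -> float:
    """Exact binomial tail: the probability of observing k or more
    self-favoring outcomes in n independent borderline decisions,
    assuming each decision favors the actor with probability p
    under an innocent (unbiased) baseline."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: an actor makes 50 borderline calls (expense
# classifications, gift acceptances) and 35 of them favor herself.
# Under an unbiased baseline of p = 0.5, how surprising is that?
tail = p_value_at_least(35, 50, 0.5)
print(tail < 0.01)  # an aggregate pattern this lopsided is unlikely by chance
```

A rule of this kind would sanction the pattern rather than any single decision, mirroring the disparate impact logic described above.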

5.2. Disambiguation

Many of the theories of bounded ethicality reviewed above suggest that legal

ambiguity causes people to engage in wrongdoing (see also Feldman and Harel 2008). In

rational choice settings, ambiguity and uncertainty appear as areas in which people may

exhibit better compliance given their aversion to risk and ambiguity (Garoupa 2003;

Logue 2006), but behavioral ethics suggests an opposite effect.

One of the principal techniques that people use to self-justify unethical behaviors is to

employ the mechanisms listed earlier in this chapter in order to construct self-serving

interpretations of legal and organizational requirements. For example, Feldman and

Teichman (2009, 2011) have shown that people may use legal ambiguity strategically to

generate a self-serving interpretation of what is required from them by laws or contracts.

Banaji, Feldman, and Willard (work in progress) have found these types of interpretations

of legal ordinances to be sensitive to the individual’s type of interest, level of awareness,

and amount of cognitive resources. As expected, people's need to feel that they are following the letter of the law, on the one hand, and their motivation to enhance their self-interest, on the other, raise the concern that they will routinely engage in constructive interpretation of the law.

The focus on the need to avoid ambiguity may be related to other areas of law,

where the lesson of behavioral ethics is that at times less fear of behaving poorly

produces more ethical behavior. For example, in the context of conflicts of interest,

according to rational choice predictions we should be most concerned in situations that

present the greatest conflict. But behavioral ethics suggests that the opposite may be true:

if there is ambiguity about the existence of a conflict, individuals are most likely to

deceive themselves about the wrongdoing involved in their behavior. In clear-cut

situations individuals do not have such an excuse, and therefore they are less likely to

misbehave. More generally, the state should be wary of solutions that solve only part of a problem, for fear of creating similar, exploitable situations.

Along the same lines, although much of the research on conflicts of interest and

corruption tends to suggest that money is more easily transferable and therefore more

dangerous to the integrity of society than other sources of conflict, such as prestige and

loyalty, some of the research on self-deception suggests that the opposite approach makes

more sense. Most good people recognize that taking money from various entities limits

their ability to remain objective. In contrast, the influence of non-monetary interests such as prestige, esteem, loyalty, and competence is less likely to trigger an alert by System 2; therefore, these interests are more likely to have an automatic effect on people that System 2 will not constrain.

5.3. Reflection and Accountability

Bazerman's examination of blind spots has called attention to the limited-awareness

processes in which people engage without thinking. The classic response to the limited

awareness problem is to force people to reflect more on what they do, making sure that

they cannot brush aside the unethicality of their behavior using any of the numerous

rationales reviewed in the first part of the chapter. Indeed, research on the interplay

between System 1 and System 2 suggests that accountability may prevent people from

over-relying on their automatic reasoning (Tetlock and Lerner 1999). Unfortunately, in

many legal contexts in which some reporting is necessary the opposite is true: forms are

constructed to promote quick, thoughtless binary decisions, leaving little room for open-

ended reflection and thought (e.g., taxation, disclosures made when selling a car or a

house, customs declarations, financial reports by businesses, reports by oil and gas

companies of payments to foreign countries related to processing minerals). In many

cases, all matters that individuals or corporations may decide not to report will never be

detected, and if they are revealed they are usually regarded as honest mistakes (compare

with Kahan 1997). Creating situations in which people must report their dilemmas explicitly on these forms may enable them to reflect on what they need to do. For example, it may be possible to include open-ended rubrics on tax forms, where

taxpayers are required to indicate the areas about which they feel uncertain.

5.4. Regulating Situations rather than Incentives

["The mouse is not the thief, the hole is the thief" takes on a new meaning in this literature.]

Behavioral ethics also highlights the need for changing the underlying situations

individuals face, which can shape both explicit and implicit choice, as opposed to

changing the incentives presented to individuals, which can only shape their conscious

decisions (Aquino et al. 2009). Rather than changing the social cues, as nudge-like

approaches do, it is necessary to shape and curtail the biases that determine the way in

which people approach dilemmas even before consciously exploring how to solve them.

The recommended approach is to force people to recognize when they engage in bad

behavior; if we assume that people engage in unethical behaviors only because they are

unaware, making them recognize the unethicality should curtail the behavior.

Trevino (1986), one of the pioneers of business ethics, focused on the importance

of situations and business environments in facilitating unethical behaviors, above and

beyond any characteristics of the individuals themselves. This focus has become much

more dramatic in the new behavioral ethics literature. In a recent comprehensive review,

Bazerman and Gino (2012) outlined various characteristics of situations that are likely to

increase unethical behavior, such as the extent to which the situation allows for the

rationalization of unethical behaviors. People also appear to be more likely to behave

dishonestly when they face losses (Kern and Chugh 2009), especially if behaving

unethically can help them turn the loss into a gain (Shalvi 2012). Naturally, as discussed


in the descriptive part of the chapter, the recognition of System 1 effects and the greater prominence of the situation do not undermine the need to attend to variation among individuals. In a world that pays attention only to deliberate unethicality, law and

economics models based on incentives are more than enough to change behavior in

ethical contexts. But the better we understand the fact that people’s choices are

responsible for only part of their unethicality, the more regulatory resources we must allocate to creating situations that reduce all forms of unethicality and legal noncompliance.

1.2. Blind Spots and Ethical Training

Bazerman and Tenbrunsel (2011) suggested a comprehensive set of solutions for

policy makers based on insights derived from the behavioral ethics literature. The authors

suggested various techniques for people to become aware of their limited ability to

recognize their ethical failings, and ways of remedying this situation. For example,

focusing on the aforementioned distinction between the "should" self and the "want" self, the authors suggested that by considering beforehand which "want" desires may come into play at the moment of decision making, individuals can better prepare to resist these desires and instead implement a decision based on their ethically sound "should"

preferences. The authors use the example of a common interview question: What pay

does a competing employer offer? The “want” self is likely to wish to inflate the number,

encouraging the applicant to lie to the potential employer. By anticipating this, one can

come up with a more acceptable answer, such as “I’m not comfortable sharing that

information,” which serves self-interest but also does not violate moral rules.

Similarly, in a series of experiments, Devine et al. (2012) were able to show that it

is possible to train people to be less affected by their implicit racial biases. At a minimum, these experiments offer support for the claim that even if we are unaware of our self-interested behaviors, we may be able to control them.

2. Conclusion

Behavioral ethics makes an important contribution to the law and behavioral

economics literature. For the most part, this literature supports the view that the self-

serving effects of motivation on cognition allow people to do harm when it serves their

self-interest and at the same time feel good about it. Whereas in the traditional literature biases prevent people from realizing their self-interest, in this literature the concern is with biases, operating outside awareness, that prevent people from understanding that their behaviors are self-interested and unethical.

Uncovering these biases is especially important because society is being harmed

by non-deliberate bad deeds, while state interventions, as well as interventions called for

by the law and economics literature, do not target these implicit behaviors. At the same

time, as the present review suggests, the ability of the current literature to make concrete

suggestions for the law is limited. Many important aspects are still being debated in the

literature, both theoretically and methodologically. Within the concept of bounded

ethicality, the interrelations between automaticity, awareness, and controllability are still

the subject of controversy, and potential solutions are elusive. For example, the extent of System 1 involvement in ethical decision making is unclear. Furthermore, we know more


about the effect of System 1 on System 2 (which is of descriptive interest) than we know

about the effect of System 2 on System 1 (which is of greater normative interest). Even the answer to one of the most momentous questions for the law – are we intuitively good or bad? – appears to depend more on context than on individual differences, although many scholars think otherwise.

Finally, the current literature raises various methodological problems. For

example, most of the findings in behavioral ethics come from laboratory experiments, which capture only the short-term effects of various ethical manipulations. For the most part, the law is more interested in the long-term effects of these practices. The literature also

raises general concerns about the methods used to measure automatic behavior, such as the Implicit Association Test (IAT), fMRI, the Cognitive Reflection Test (CRT), and various priming techniques. Most of the research in behavioral ethics has been conducted at the micro level of management scholarship, which is not entirely suitable for incorporation into legal theory and policy making. Although preliminary abstract and concrete arguments are offered for the consideration of legal scholars and policy makers in the conclusion of this chapter, one can only hope that further research conducted by legal scholars will enrich the much-needed behavioral law and economics literature and make it more inclusive.

References

Amir, O., and O. Lobel. 2008. Stumble, Predict, Nudge: How Behavioral Economics Informs Law and Policy. Columbia Law Review 2098-2137.

Ariely, D. 2012. The (Honest) Truth About Dishonesty: How We Lie to Everyone–Especially Ourselves. Harper Collins.

Ashforth, B. E., and V. Anand. 2003. The normalization of corruption in organizations. Research in Organizational Behavior 25:1-52.

Aquino, K., et al. 2009. Testing a social-cognitive model of moral behavior: the interactive influence of

situations and moral identity centrality. Journal of Personality and Social Psychology 97.1:123.

Ayal, S., and F. Gino. 2011. Honest rationales for dishonest behavior. The Social Psychology of Morality:

Exploring the Causes of Good and Evil. Washington, DC: American Psychological Association.

Babcock, L., et al. 1995. Biased judgments of fairness in bargaining. The American Economic Review:

1337-1343.

Babcock, L., and G. Loewenstein. 1997. Explaining bargaining impasse: The role of self-serving

biases. The Journal of Economic Perspectives 11.1:109-126.

Balcetis, E., and D. Dunning. 2006. See what you want to see: motivational influences on visual

perception. Journal of personality and social psychology 91.4:612.

Banaji, M. R., and A. G. Greenwald. 2013. Blindspot: Hidden Biases of Good People. Delacorte Press.


Banaji, M. R., M. H. Bazerman, and D. Chugh. 2003. How (un)ethical are you? Harvard Business Review 81.12:56-65.

Bandura, A. 1999. Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review 3.3:193-209.

Bargh, J. A., et al. 1992. The generality of the automatic attitude activation effect. Journal of personality

and social psychology 62.6:893.

Bargh, J. A., et al. 1996. The automatic evaluation effect: Unconditional automatic attitude activation with

a pronunciation task. Journal of Experimental Social Psychology.

Bargh, J. A., M. Chen, and L. Burrows. 1996. Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology 71.2:230.

Bargh, J. A., and T. L. Chartrand. 1999. The unbearable automaticity of being. American

psychologist 54.7:462.

Bargh, J. A. 2012. Priming effects replicate just fine, thanks. Psychology Today blog, May 11, 2012. Available: http://www.psychologytoday.com/blog/the-naturalunconscious/201205/priming-effects-replicate-just-fine-thanks.

Barkan, R., et al. 2012. The Pot Calling the Kettle Black: Distancing Response to Ethical Dissonance.

Batson, C. D., et al. 1999. Moral hypocrisy: appearing moral to oneself without being so. Journal of

personality and social psychology 77.3:525.

Bazerman, M. H., and M. R. Banaji. 2004. The social psychology of ordinary ethical failures. Social Justice Research 17.2:111-115.

Bazerman, M. H., G. Loewenstein, and D. A. Moore. 2002. Why good accountants do bad audits. Harvard Business Review 80.11:96-103.

Bazerman, M. H., and A. E. Tenbrunsel. 2011. Blind spots: Why we fail to do what's right and what to do

about it. Princeton University Press.

Bazerman, M. H., and F. Gino. 2012. Behavioral ethics: Toward a deeper understanding of moral judgment

and dishonesty. Annual Review of Law and Social Science 8.1.

Benkler, Y. 2011. The Penguin and the Leviathan: How Cooperation Triumphs over Self-Interest. Crown Business.

Bersoff, D. M. 1999. Why good people sometimes do bad things: Motivated reasoning and unethical behavior. Personality and Social Psychology Bulletin 25.1:28-39.

Bohnet, I., M. Bazerman, and A. Van Geen. 2012. When Performance Trumps Gender Bias: Joint Versus

Separate Evaluation.

Cain, D. M., G.Loewenstein, and D. A. Moore. 2005. The dirt on coming clean: Perverse effects of

disclosing conflicts of interest. The Journal of Legal Studies 34.1:1-25.


Camerer, C., G. Loewenstein, and D. Prelec. 2005. Neuroeconomics: How neuroscience can inform

economics. Journal of economic Literature 43.1:9-64.

Cameron, C. D., J. L. Brown-Iannuzzi, and B. K. Payne. 2012. Sequential priming measures of implicit social cognition: A meta-analysis of associations with behavior and explicit attitudes. Personality and Social Psychology Review 16.4:330-350.

Chaiken, S., and J. A. Bargh. 1993. Occurrence versus moderation of the automatic attitude activation

effect: Reply to Fazio:759.

Chugh, D., M. H. Bazerman, and M. R. Banaji. 2005. Bounded ethicality as a psychological barrier to

recognizing conflicts of interest. Conflict of interest: Challenges and solutions in business, law, medicine,

and public policy:74-95.

Cooter, R. D., M. Feldman, and Y. Feldman. 2008. The misperception of norms: The psychology of bias and the economics of equilibrium. Review of Law and Economics 4.3: Article 7.

Cushman, F., L. Young, and M. Hauser. 2006. The role of conscious reasoning and intuition in moral

judgment testing three principles of harm. Psychological science 17.12:1082-1089.

Dan-Cohen, M. 1984. Decision rules and conduct rules: On acoustic separation in criminal law. Harvard

Law Review:625-677.

Dana, J., R. A. Weber, and J. X. Kuang. 2007. Exploiting moral wiggle room: experiments demonstrating

an illusory preference for fairness. Economic Theory 33.1:67-80.

Devine, P. G., et al. 2012. Long-term reduction in implicit race bias: A prejudice habit-breaking

intervention. Journal of Experimental Social Psychology.

Doyen, S., O. Klein, C. L. Pichon, and A. Cleeremans. 2012. Behavioral priming: It's all in the mind, but whose mind? PLoS One 7.1:e29081.

Effron, D. A., and B. Monin. 2010. Letting people off the hook: When do good deeds excuse transgressions? Personality and Social Psychology Bulletin 36.12:1618-1634.

Epley, N., and D. Dunning. 2000. Feeling "holier than thou": Are self-serving assessments produced by errors in self- or social prediction? Journal of Personality and Social Psychology 79.6:861.

Epley, N., and E. M. Caruso. 2004. Egocentric ethics. Social Justice Research 17.2:171-187.

Evans, J. St. B. T. 2003. In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences 7.10:454-459.

Fazio, R. H. 1986. How Do Attitudes Guide Behavior? In The Handbook of Motivation and Cognition:

Foundations of Social Behavior. R. M. Sorrentino, and Higgins E. T. New York: Guilford, 204-243.

Feldman, Y., and A. Harel. 2008. Social Norms, Self-Interest and Ambiguity: An Experimental Analysis of the Rule vs. Standard Dilemma. Review of Law & Economics.

Feldman, Y. 2009. The Expressive Function of the Trade Secret Law: Legality, Cost, Intrinsic Motivation and Consensus. Journal of Empirical Legal Studies 6.1:177.


Feldman, Y., and D. Teichman. 2009. Are All Legal Probabilities Created Equal? NYU Law Review 84.4:980.

Feldman, Y., and T. Tyler. 2012. Mandated Justice: The Potential Promise and Possible Pitfalls of Mandating Procedural Justice. Regulation and Governance.

Feldman, Y., and H. Smith. 2013 (forthcoming). Law vs. Equity: Lessons from Behavioral Ethics. In What Makes Intervention Legitimate? 31st Seminar on New Institutional Economics. J. Economic Perspectives (symposium volume).

Fine, C. 2006. Is the emotional dog wagging its rational tail, or chasing it? Philosophical

Explorations 9.1:83-98.

Gino, F., S. Ayal, and D. Ariely. 2009. Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science 20.3:393-398.

Gino, F., D. Moore, and M. Bazerman. 2009. No harm, no foul: The outcome bias in ethical

judgments. Harvard Business School NOM Working Paper 08-080.

Gino, F., et al. 2011. Unable to resist temptation: How self-control depletion promotes unethical

behavior. Organizational Behavior and Human Decision Processes 115.2:191-203.

Gino, F., and A. D. Galinsky. 2012. Vicarious dishonesty: When psychological closeness creates distance

from one’s moral compass. Organizational Behavior and Human Decision Processes.

Gino, F., and S. D. Desai. 2012. Memory lane and morality: How childhood memories promote prosocial

behavior. Journal of personality and social psychology 102.4:743.

Gino, F., and J. Margolis. 2011. Bringing ethics into focus: How regulatory focus and risk preferences influence (un)ethical behavior. Organizational Behavior and Human Decision Processes 115.2:145-156.

Greene, J. D., and J. M. Paxton. 2009. Patterns of neural activity associated with honest and dishonest

moral decisions. Proceedings of the National Academy of Sciences 106.30:12506-12511.

Griggs v. Duke Power Co 401 U.S. 424 (1971).

Guthrie, C. 2003. Prospect Theory, Risk Preference, and the Law. In Symposium:

Empirical Legal Realism: A New Social Scientific Assessment of Law and Human Behavior.

Northwestern University Law Review 97:1115-63.

Haidt, J. 2001. The emotional dog and its rational tail: a social intuitionist approach to moral

judgment. Psychological Review108.4:814.

Haidt, J., and C. Joseph. 2004. Intuitive ethics: How innately prepared intuitions generate culturally

variable virtues. Daedalus 133.4:55-66.

Haidt, J., and F. Bjorklund. 2007. Social intuitionists answer six questions about morality.

Haisley, E. C., and R. A. Weber. 2010. Self-serving interpretations of ambiguity in other-regarding

behavior. Games and Economic Behavior 68.2:614-625.


Hsee, C. K. 1995. Elastic justification: How tempting but task-irrelevant factors influence

decisions. Organizational Behavior and Human Decision Processes 62.3:330-337.

Jolls, C., C. R. Sunstein, and R. Thaler. 1998. A behavioral approach to law and economics. Stanford Law

Review:1471-1550.

Jolls, C. 2007. Behavioral law and economics. No. w12879. National Bureau of Economic Research.

Jones, J. T., B. W. Pelham, M. C. Mirenberg, and J. J. Hetts. 2002. Name letter preferences are not merely

mere exposure: Implicit egotism as self-regulation. Journal of Experimental Social Psychology 38:170-177.

Jones, J. T., B. W. Pelham, M. Carvallo, and M. C. Mirenberg. 2004. How Do I Love Thee? Let Me Count

the Js: Implicit Egotism and Interpersonal Attraction. Journal of Personality and Social Psychology 84:5

665–83.

Kahan, D. M. 1997. Ignorance of the Law Is an Excuse--But Only for the Virtuous. Michigan Law Review 96:127.

Kahneman, D., J. L. Knetsch, and R. H. Thaler. 1986. Fairness and the assumptions of economics. Journal of Business:285-300.

Kawakami, K., J. F. Dovidio, and A. Dijksterhuis. 2003. Effect of social category priming on personal attitudes. Psychological Science 14.4:315-319.

Kay, A. C., et al. 2004. Material priming: The influence of mundane physical objects on situational

construal and competitive behavioral choice. Organizational Behavior and Human Decision

Processes 95.1:83-96.

Kern, M. C., and D. Chugh. 2009. Bounded ethicality: The perils of loss framing. Psychological Science 20.3:378-384.

Kish-Gephart, J. J., D. A. Harrison, and L. K. Treviño. 2010. Bad apples, bad cases, and bad evidence about sources of unethical decisions at work. Journal of Applied Psychology 95:1-31.

Korobkin, R., and T. S. Ulen. 2000. Law and behavioral science: Removing the rationality assumption from law and economics. California Law Review:1051-1144.

Korobkin, R. 2003. The Endowment Effect and Legal Analysis. In Symposium:

Empirical Legal Realism: A New Social Scientific Assessment of Law and Human Behavior

Northwestern University Law Review 97:1227-93.

Krieger, L. H. 1995. The content of our categories: A cognitive bias approach to discrimination and equal

employment opportunity. Stanford Law Review:1161-1248.

Kunda, Z. 1987. Motivated inference: Self-serving generation and evaluation of causal theories. Journal of

Personality and Social Psychology 53.4:636.

Kunda, Z. 1990. The case for motivated reasoning. Psychological bulletin 108.3:480.

Langevoort, D. C. 1998. Behavioral Theories of Judgment and Decision Making in Legal Scholarship: A

Literature Review. Vanderbilt Law Review 51:1499.


Logue, K. D. 2006. Optimal Tax Compliance and Penalties When the Law Is Uncertain.

Marquardt, N, and R. Hoeger. 2009. The effect of implicit moral attitudes on managerial decision-making:

An implicit social cognition approach. Journal of business ethics 85.2:157-171.

Mazar, N., O. Amir, and D. Ariely. 2008. The dishonesty of honest people: A theory of self-concept

maintenance. Journal of marketing research 45.6:633-644.

Merritt, A. C., D. A. Effron, and B. Monin. 2010. Moral Self‐Licensing: When Being Good Frees Us to

Be Bad. Social and Personality Psychology Compass 4.5:344-357.

Mikulincer, M., and P. R. Shaver, eds. The Social Psychology of Morality: Exploring the Causes of Good and Evil. Washington, DC: American Psychological Association.

Monin, B., and D. T. Miller. 2001. Moral credentials and the expression of prejudice. Journal of

personality and social psychology 81.1:33.

Moore, C., et al. 2012. Why employees do bad things: Moral disengagement and unethical organizational

behavior. Personnel Psychology 65.1:1-48.

Moore, D. A., and G. Loewenstein. 2004. Self-interest, automaticity, and the psychology of conflict of

interest. Social Justice Research 17.2:189-202.

Moore, D. A., L. Tanlu, and M. H. Bazerman. 2010. Conflict of interest and the intrusion of bias. Judgment

and Decision Making 5.1:37-53.

Moore, D. A., et al. 2006. Conflicts of interest and the case of auditor independence: Moral seduction and

strategic issue cycling. Academy of Management Review 31.1:10-29.

Niedenthal, P. M., et al. 2005. Embodiment in attitudes, social perception, and emotion. Personality and

Social Psychology Review 9.3:184-211.

Nuttin, J. 1987. Affective consequences of mere ownership: The name letter effect in twelve European languages. European Journal of Social Psychology 17.4:381-402.

Pizarro, D. A., and P. Bloom. 2003. The intelligence of the moral intuitions: A comment on Haidt (2001):193.

Rachlinski, J. J. 2003. The Uncertain Psychological Case for Paternalism. In

Symposium: Empirical Legal Realism: A New Social Scientific Assessment of Law and Human Behavior,

Northwestern University Law Review 97:1165-1225.

Rand, D. G., J. D. Greene, and M. A. Nowak. 2012. Spontaneous giving and calculated

greed. Nature 489.7416:427-430.

Sachdeva, S., R. Iliev, and D. L. Medin. 2009. Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science 20.4:523-528.

Sanbonmatsu, D. M., M. C. Powell, and F. R. Kardes. 1986. On the automatic activation of attitudes. Journal of Personality and Social Psychology 50:229-238.


Shalvi, S., et al. 2011. Justified ethicality: Observing desired counterfactuals modifies ethical perceptions

and behavior. Organizational Behavior and Human Decision Processes 115.2:181-190.

Shalvi, S., O. Eldar, and Y. Bereby-Meyer. 2012. Honesty requires time (and lack of

justifications). Psychological Science.

Shapiro, D. 1989. Statistical-Probability Evidence and the Appearance of Justice. Harvard Law

Review:530-554.

Shoben, E. W. 1983. Use of Statistics to Prove Intentional Employment Discrimination, Law and

Contemporary Problems 46:221.

Schurr, A., I. Ritov, Y. Kareev, and J. Avrahami. 2012. Is that the answer you had in mind? The effect of perspective on unethical behavior. Judgment and Decision Making 7:966-976.

Shu, L. L., F. Gino, and M. H. Bazerman. 2011. Dishonest deed, clear conscience: When cheating leads to

moral disengagement and motivated forgetting. Personality and Social Psychology Bulletin 37.3:330-349.

Sonenshein, S. 2007. The role of construction, intuition, and justification in responding to ethical issues at

work: The sensemaking-intuition model. Academy of Management Review 32.4:1022-1040.

Stanovich, K. E., and R. F. West. 2000. Individual differences in reasoning: Implications for the rationality

debate? Behavioral and brain sciences 23.5:645-665.

Stout, L. 2010. Cultivating Conscience: How Good Laws Make Good People. Princeton University Press.

Sykes, G., and D. Matza. 1957. Techniques of neutralization: A theory of delinquency. American Sociological Review 22.6:664-670.

Tenbrunsel, A., et al. 2007. The Ethical Mirage: A Temporal Explanation as to Why We Aren't as Ethical

as We Think We Are.

Tenbrunsel, A. E., and D. M. Messick. 2004. Ethical fading: The role of self-deception in unethical

behavior. Social Justice Research 17.2:223-236.

Tetlock, P. E., and J. S. Lerner. 1999. The social contingency model: Identifying empirical and normative

boundary conditions for the error-and-bias portrait of human nature. In Dual-process theories in social

psychology, S. Chaiken and Y. Trope. New York: Guilford Press, 571–585.

Thompson, L., and G. Loewenstein. 1992. Egocentric interpretations of fairness and interpersonal

conflict. Organizational Behavior and Human Decision Processes 51.2:176-197.

Trevino, L. K. 1986. Ethical decision making in organizations: A person-situation interactionist

model. Academy of management Review:601-617.

Turillo, C. J., et al. 2002. Is virtue its own reward? Self-sacrificial decisions for the sake of

fairness. Organizational Behavior and Human Decision Processes 89.1:839-865.

Uhlmann, E. L., et al. 2009. The motivated use of moral principles. Judgment and Decision

Making 4.6:476-491.

Valdesolo, P., and D. DeSteno. 2006. Manipulations of emotional context shape moral

judgment. Psychological Science 17.6:476-477.


Van den Bos, K., et al. 2006. On preferences and doing the right thing: Satisfaction with advantageous

inequity when cognitive processing is limited. Journal of Experimental Social Psychology 42.3:273-289.

Vohs, K. D., N. L. Mead, and M. R. Goode. 2006. The psychological consequences of

money. Science 314.5802:1154-1156.

Vohs, K. D., N. L. Mead, and M. R. Goode. 2008. Merely activating the concept of money changes

personal and interpersonal behavior. Current Directions in Psychological Science 17.3:208-212.

Yang, Q., X. Wu, X. Zhou, N. L. Mead, K. D. Vohs, and R. F. Baumeister. 2013. Diverging effects of clean versus dirty money on attitudes, values, and interpersonal behavior. Journal of Personality and Social Psychology 104.3:473-489.

Ziemke, T. 2003. What's that thing called embodiment? In Proceedings of the 25th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum.