



  Willful

  How We Choose What We Do

  Richard Robb

  Yale University Press

  New Haven & London

  Copyright © 2019 by Richard Robb.

  All rights reserved.

  This book may not be reproduced, in whole or in part, including illustrations, in any form (beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except by reviewers for the public press), without written permission from the publishers.

  Yale University Press books may be purchased in quantity for educational, business, or promotional use. For information, please e-mail [email protected] (U.S. office) or [email protected] (U.K. office).

  Set in Janson type by Integrated Publishing Solutions.

  Printed in the United States of America.

  Library of Congress Control Number: 2019938844

  ISBN 978-0-300-24643-8 (hardcover : alk. paper)

  A catalogue record for this book is available from the British Library.

  This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).


  The delights of all the worlds wanted to reveal themselves to Rabbi Aaron, but he only shook his head. “Even if they are delights,” he said at last, “before I enjoy them, I want to sweat for them.”

  —MARTIN BUBER, Tales of the Hasidim

  Contents

  PART I Life Is a Mixed Drink

  1 Venturing beyond Purposeful Choice

  2 Two Realms of Human Behavior

  PART II Belief

  3 Acting in Character

  4 Making Money in Financial Markets: Anatomy of a Leap

  5 For-Itself Decision-Making within a Group

  PART III People

  6 Altruism

  7 Public Policy

  PART IV Time

  8 Changing Our Minds

  9 Homo Economicus and Homo Ludens

  SUMMING UP Purposeful versus For-Itself: A Peace Treaty

  Notes

  Bibliography

  Acknowledgments

  Index

  Willful

  PART I

  Life Is a Mixed Drink

  1

  Venturing beyond Purposeful Choice

  I’ll begin with some confessions. When the facts change, I usually don’t change my opinions unless I’m backed into a corner, and then I’ll change them by as little as possible. I am a workaholic. I pretend that work is a pain, but I’d be lost without it. I procrastinate because boring tasks become more exciting when I’m up against a deadline. I’m careful to buy milk at the store where it’s twenty cents cheaper, yet for eighteen years I have left my Columbia University retirement account in a low-yielding money market fund and missed out on a booming stock market—despite the fact that I teach economics. And I’ll occasionally go out of my way to aid a casual acquaintance even when there are far more deserving people I could help. All the while, I think of myself as a rational person.

  One final confession: I’m not all that embarrassed by any of this because it’s the human condition. I don’t believe myself to be particularly afflicted with behavioral biases, the place to turn nowadays when we’re not living up to a high standard of rationality. Well, maybe I do fall into traps from time to time, like the “endowment effect” (overvaluing things I already own) or the “Lake Wobegon effect” (rating myself a better-than-average driver, for example, along with 93 percent of Americans). It’s hard to be certain—after all, behavioral economics deals with blind spots. But I don’t think that biases are the cause of my pigheadedness, aversion to leisure, letting problems build up even though I know by now that an ounce of prevention is worth a pound of cure, sloppiness with personal finances, random displays of altruism, or other seemingly nonrational behavior.

  Instead, I think my behavior is the result of unproblematic, intrinsically human impulses. Holding beliefs that fit with each other and with our experience, that stick together over time, is part of having an identity. Robots might turn on a dime if it would help them reach their goals, but not me. Why should I revise my beliefs to gratify the desires of the new person I might become? I’ve also come to realize that work, like a lot of activities, is undertaken partly for reasons we can pinpoint—such as economic gain, camaraderie with colleagues, or improved status—and partly as a game. In a game, we simply play. We act on the world, and there’s little more to be said.

  But it doesn’t feel that way. We may choose badly or make the same mistakes again and again, but at some level we feel as if we are trying to get what we want. When we do act without a purpose, we invent a reason after the fact, like a sleeping person who hears a barking dog and weaves it into the narrative of her dream. Inventing reasons in this way preserves our self-image as rational.

  It might sound like I’m rejecting the backbone of economic theory, rational choice, but to do so would be a mistake. Rational choice has illuminated huge swaths of behavior by emphasizing that we do our best to satisfy our desires with the information and resources at our disposal; we compare all available options and choose the one we prefer over the others.

  I’m not launching an attack on this theory; far from it. I start out each semester defending rational choice against two objections that students usually raise: they don’t feel like calculating machines and they are not materialistic. The first concern is unwarranted, because your actions may adhere to rational choice whether you know it or not. Arthur Schopenhauer tells the story of an elephant traveling through Europe, crossing many bridges. The elephant stops dead at one rickety bridge, even after seeing men and horses cross, having sensed that the bridge cannot bear its weight.1 The defiant elephant illustrates the intuition behind much of economics: when a decision really matters, people and even animals are pretty smart.

  As for the second objection, economics does not assume that people care only and unattractively about themselves and their material well-being. The satisfaction, or utility, that an individual chooses to maximize might depend on inputs like altruism, the well-being of others, or adherence to ethical standards.

  Even after allowing for altruism and accepting that calculations can be intuitive, the idea of yourself as a strictly rational actor may leave you a bit queasy. Conventional thinking offers a palliative: behavioral economics. Behavioral economics has extended rational choice to account for biases and heuristics. A person acting with a behavioral bias also tries to satisfy her desires but routinely misses the mark. Behavioral economists hope that identifying biases will help people mend their ways and act in conformity with economic models. If rational choice theory conceives of people as robots whose behavior is determined by their preferences, then behavioral economists believe that those robots are badly programmed.

  Both rational choice and behavioral economics assume that action is purposeful, that people seek the outcomes that best gratify their preexisting desires. People either know their preferences and can describe them out loud, or sense them and act as if they understood what they wanted. The purposeful choice model can explain many things, but not everything. Certain actions are undertaken not for any tangible benefit but for their own sake. They cannot be ranked against, or traded for, other actions. These actions belong to a second realm of behavior that is neither rational nor irrational, but for-itself.

  Suppose a woman is about to jump into a river to save her drowning husband. We would not expect her to behave rationally, that is, to calculate the present value of the future benefits that she might derive from keeping her husband alive multiplied by the probability she will be able to save him (net of the probability he will save himself without her help) and then deduct the probability that she will drown multiplied by the value she attributes to her own life. It’s good enough that the drowning person is her husband whom she loves. Any justification, any model or calculation, any attempt to validate her action as a realization of some general principle, would be weaker than that fact. Any additional reason for her decision would be, in the words of the philosopher Bernard Williams, “one thought too many.”2

  The distinction here is not in the magnitude of the decision. A great deal of everyday non-husband-rescuing behavior belongs to the for-itself realm. In the 1942 Preston Sturges screwball comedy The Palm Beach Story, an elderly Texas sausage magnate, the “Wienie King,” decides to lend a hand to penniless Claudette Colbert. She reminds him of himself when he was young and poor, so in a spontaneous, one-time act of mercy, he peels off $700 from his money roll, gives it to her, and says, “so long.” The Wienie King can’t help everyone he meets even though other potential recipients may be more worthy of aid. His for-itself gesture to Colbert was not predictable; he just did as he liked.

  While neither the husband rescuer nor the Wienie King acts on the basis of any calculation in these instances, they surely do in other contexts. I’m not asking you to jettison purposeful choice altogether, only to recognize that there’s more to the story. Perhaps most of your behavior fits into the purposeful model—sometimes you’re a rational agent, confident of the best course of action and able to explain your reasoning; sometimes you’re a super-smart elephant who knows intuitively what action is optimal; and sometimes you’re the victim of behavioral biases. But then, at other times, you’re none of the above.

  Admittedly, I’m an unlikely advocate for the idea that motives don’t have to be purposeful and behavior doesn’t have to be maximizing, that we’re not always trying to pick the best available option given the information at hand. My stance is incongruous not only because I am an economist but also because I was trained at the University of Chicago, the high temple of rational choice economic theory, and still teach it enthusiastically to my students.

  Drunk on Theory

  I came to the realization that not all our actions have a purpose in a long and roundabout way.

  I began the 1980s drunk on neoclassical economics, the theory that assumes people choose rationally and that supply and demand are in equilibrium, and then tries to explain as much of the world as it possibly can. As a PhD candidate at the University of Chicago, I saw people acting rationally everywhere I looked. Economic theory applied not just to money and markets, but to everything. Why did the A&P package fresh green beans in little cartons? Simple: if the store placed loose beans in large bins, consumers would hunt for the best ones up to the point where the extra benefit equaled their wage. The store eliminated wasteful search by selling randomly selected beans to everyone. Consumers would pay more to avoid wasting time competing for quality. Should the A&P put the best beans on top of each package where they’d be visible to consumers? No, because the store would have to pay workers to hide the lower-quality ones, and rational consumers would learn to discount appearances. My classmates and I told stories like this all day long. Gradually, we thought, the world was revealing its unseen order.

  Not that doubt didn’t creep in around the edges. We wondered why we’d chosen to live at a lower standard of comfort than if we had tried some pursuit other than graduate school at Chicago. We had little money. It was freezing cold. My apartment was so infested with roaches, I’d given up trying to kill them. Approximately 80 percent of the entering class would be tossed out before receiving a PhD. A few of us had been accepted to equally prestigious programs but chose to tough it out at Chicago with its notoriously difficult qualifying exams. We told ourselves that attending the University of Chicago was the best way to build human capital—capital that would lead to reasonably high earnings in stimulating academic careers. But deep down, we knew that wasn’t the real reason. Somehow, we liked that it was hard. Our attraction to struggle seemed perverse as we tried to reconcile our actions with a cherished theory that felt not quite right.

  Early on in graduate school, my classmates and I stumbled on behavioral economics, which was then emerging as an alternative to rational choice orthodoxy. Cognitive biases were documented in all sorts of laboratory experiments. In one famous experiment, subjects were indifferent between receiving $10 immediately and receiving $21 in one year. They were also indifferent between paying $10 immediately and paying $15 in one year. Since a rational person ought to be willing to trade off small amounts of cash now for cash in one year at a single discount rate, whether paying or receiving, this discrepancy was interpreted as evidence of “gain-loss asymmetry”—meaning that people need more compensation to delay gains than they are willing to pay to delay losses.3
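  To see why the experimenters read these answers as an asymmetry, it helps to make the implied discount rates explicit. The following calculation is my own illustration of the logic, not one that appears in the study itself:

```latex
% Receiving: indifferent between $10 now and $21 in one year
10 = \frac{21}{1 + r_{\text{gain}}}
  \quad\Rightarrow\quad r_{\text{gain}} = 1.10 \ (110\%)

% Paying: indifferent between paying $10 now and $15 in one year
10 = \frac{15}{1 + r_{\text{loss}}}
  \quad\Rightarrow\quad r_{\text{loss}} = 0.50 \ (50\%)
```

  A subject who discounted all future cash flows at a single rate would produce the same figure in both cases; the gap between 110 percent for delayed gains and 50 percent for delayed losses is the asymmetry.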

  Maybe it was that easy. If this were the case, all we had to do was document biases through experiments, like the one on gain-loss asymmetry, and adjust our models accordingly. Rational choice, with all its insights into markets and many other aspects of human behavior, could largely be preserved. But in the end, behavioral economics did not seem to be the solution to what we thought neoclassical theory was lacking. Usually, when behavioral economics offered a psychological solution for some ostensible puzzle, we could explain the data with rational choice if we worked hard enough. With the gain-loss asymmetry experiment, what about the cost of collecting the debt from the professor running the experiment? Subjects receiving a payment should be inclined to take the money now rather than have to track down the professor in a year and convince him to pay. Compensation of $11 for credit risk and inconvenience of collecting seems reasonable. Likewise, subjects who have to pay would be smart to gamble $5 in hopes that the professor would forget all about collecting and they’d never hear from him again. Considering these factors, the experimental results made sense.

  Ultimately, my classmates and I likened the behavioral economists’ experiments to optical illusions: entertaining and sometimes instructive, but hardly central to everyday life. In the absence of any better ideas, I made an uneasy peace with economic theory. I accepted that behavior is purposeful and choice is mostly rational with a bit of cognitive bias tossed into the mix.

  After graduating in 1985, I took a job in the bond business in Chicago. As time passed, I remained convinced of the power of neoclassical economics and wary of the popular alternatives. Yet I also grew increasingly uncomfortable with the extent to which the traditional model failed to square with my own life.

  First, I found plenty of truth in the saying that the journey is more important than the destination, even though the journey has little place in a worldview predicated on purposeful choice. On the job, I became blissfully lost in challenges that took on their own meaning. Sport often seemed like an apt analogy for how I competed to outsmart the markets and how my firm competed as a team against other firms.

  Second, I was troubled by how I clung to my beliefs, more or less, even when they came in conflict with new data or the views of experts. I was amazed by the wide range of opinions I encountered outside the graduate school bubble. Why didn’t all these supposedly rational actors converge on the common view that was best supported by the evidence?

  Third, while some of my dealings with other people could be understood in terms of rational choice, as I had been taught, many could not. They were more complicated, or perhaps less complicated, than I could explain. Why, for example, would I give this person a break today but not tomorrow, and why not someone else equally close to me or equally worthy? A cost-benefit calculation didn’t always apply.

  Finally, I began to wonder what effect seeing the world in terms of rational choice has on our inner lives. Does removing everything from its context to determine the rate of exchange at which we would trade this for that—even if subconsciously—impoverish our experience? I wondered whether John Maynard Keynes might have been right when he warned, the “pseudorational view of human nature [leads] to a thinness, a superficiality, not only of judgment, but also of feeling.”4

  Theory Collides with Evidence

  In 1992, I moved to New York City to become the head options trader for the derivatives subsidiary of the Dai-Ichi Kangyo Bank (DKB), Japan’s largest bank at the time. I came to love DKB. My job was a sport that I could play every day. The work felt important—we were solving problems that mattered to the bank and its clients—and the challenges we faced were stimulating and continually shifting. Eventually I was promoted to global head of DKB’s derivatives and securities subsidiaries in New York, London, and Hong Kong. The job left me with little leisure time, but that was okay. There was nothing I’d rather do.

  My most memorable experience at DKB came during the Asian crisis in November 1998. Vaunted Japanese financial institutions like Nippon Credit, Long-Term Credit Bank of Japan, and Yamaichi Securities had gone bust, and DKB was teetering on the brink. We were set to underwrite our third Japanese auto-loan-backed security for the giant consumer finance company Orico, but the managers in Tokyo told me to cancel the deal. They were worried we would fail to sell Orico’s securities to investors and embarrass the bank.

  I was enraged. I had committed to raising this money for Orico and was looking forward to demonstrating that DKB could proceed with business as usual even when others had lost their nerve. I told my bosses in Tokyo that I would quit, and probably everyone else on the team would too, unless we were allowed to complete the deal. In response to this threat (a bluff), we were allowed to proceed. We agreed to cut the size of the offering and promised to sell every last bond. If we failed, we would not have the chance to quit; we would be fired. The DKB bond sales force rallied to the challenge, and the day the deal closed was one of collective joy.