
USEFUL

GAME THEORY
Fundamentals of
Decision Making

Jay Prag
Amanda Ishak Prag
Palgrave Executive Essentials
Today’s complex and changing business environment brings with it a number of pressing challenges. To be successful, business professionals are increasingly required to spot and leverage future trends, master strategy, lead responsibly, inspire others, command financial techniques, and drive innovation.
Palgrave Executive Essentials empowers you to take your skills to the next level. Offering a suite of resources to support you on your executive journey and written by renowned experts from top business schools, the series is designed to support professionals as they embark on executive education courses, but it is equally applicable to practicing leaders and managers. Each book brings you in-depth case studies, accompanying video resources, reflective questions, practical tools, and core concepts that can be easily applied to your organization, all written in an engaging, easy-to-read style.
Jay Prag · Amanda Ishak Prag

Useful Game Theory


Fundamentals of Decision Making
Jay Prag
Upland, CA, USA

Amanda Ishak Prag
Upland, CA, USA

ISSN 2731-5614 ISSN 2731-5622 (electronic)


Palgrave Executive Essentials
ISBN 978-3-031-75153-0 ISBN 978-3-031-75154-7 (eBook)
https://doi.org/10.1007/978-3-031-75154-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

If disposing of this product, please recycle the paper.


Synopsis
Using parables, famous songs, and classic expressions, Useful Game Theory:
Fundamentals of Decision Making takes readers on an exploration of human
nature and the long road of choices known as “life.” Prag and Prag turn a
mathematical discipline into a digestible (dare we say “enjoyable”) read, with
a little wit, a lot of sarcasm, a humanist approach—and almost no math.
From nuclear disarmament to Friday movie nights, each chapter guides you
through a Game Theoretic analysis of the decisions humans make every day.
A grandmother’s wit and wisdom are never lost, especially if her grandkids are
writers.
Granny Jean
Granny Sue
Sitou Augenie
Sitou Assene
We love and miss you.
Robert, Thomas, and Julianna: it’s all for you.
—J&A, 2024
Preface

Using parables, famous songs, and classic expressions, Useful Game Theory
takes readers on an exploration of human nature and the long road of
decision-making known as “life.” Prag and Prag turn a mathematical disci-
pline into a digestible (dare we say “enjoyable”) read, with a little wit, a lot
of sarcasm, a humanist approach—and almost no math. From nuclear disar-
mament to Friday movie nights, each chapter guides you through a Game
Theoretic analysis of the decisions we make each day.
While this book will occasionally wander into the realm of controversy, our
hope is that it makes some of the world a little less confusing. Game theory
as an intellectual exercise is math-heavy, hypothetical, and full of controls
that can render it almost useless. This book takes a more practical approach,
applying the tools and lessons of game theory to the real world—a place that
is controversial, complicated, uncontrolled, and very confusing.
In these pages, you’ll find an approach that’s relatable and not heavily
analytical. Our goal is to enhance your understanding of game theory by
applying it to the decisions and problems humans face every day. Some applications in this book are uncomfortable and challenging to navigate: topics like politics, religion, violence, and societal dysfunction. There are also unconventional uses of economics, sociology, public relations, management, and other
academic disciplines. Game theory exists at the nexus of these practices, and
it is at the heart of the humanity-based decisions we make every day. We
didn’t want to show only the good stuff. After all, the human experience is
both good and bad.


For those in academia who are uncomfortable with our anecdotal adoption
of your areas of study, we offer this advice: focus on the goals of game theory.
The discipline grew from a desire to formalize how human beings (from indi-
viduals to groups to companies to countries) make decisions. If you’re so into
math you forget it’s about the human experience, you’ve missed the point.
All academic disciplines stem from casual observations of the real world. So,
focus on game theory, chill out, and remember: game theory is just another
way of thinking about situations. It exists side-by-side with other ways of
thinking. It is not a substitute; it is a complement. If applied science is a test
of empirical knowledge, let’s take the test.
For students of game theory, a bit of advice as well: we’re going for breadth,
not depth. In the pantheon of game theory games, this book focuses on a
relatively small number. There are many complex games that are, more or
less, for AI only. There are others that are less complex, but also less relevant
to the choices you make every day. We are going to analyze a dozen or so
games that resonate with your day-to-day life, and we’ll apply those results to
quite a few real-world situations.
Jay Prag
Amanda Ishak Prag
Upland, USA

Acknowledgments  This book was made possible through the partnership and commitment of Bronwyn Geyer and the editorial team at Palgrave Macmillan.
Contents

1 What Is Game Theory? 1


What Is Game Theory? 4
A Four-Way Conundrum 4
Game Structure 6
Zero-Sum Games 6
What’s in a Game? 7
Understand the Players 8
Rules Versus Rules of Thumb 9
Choices, Choices, Choices! 11
Payoffs and Outcomes 12
The Payoff Matrix 13
Learning Versus Learning Game Theory 14
What About the Numbers? 15
What Isn’t Game Theory 18
Conclusion 19
Questions 19
2 No Honor Among Thieves: The Prisoner’s Dilemma 21
Honor 22
Thieves 22
The Prisoner’s Dilemma 23
Game Setup Matters 24
Confound It! 25


Empathy and the Prisoner’s Dilemma 26


The Payoff Matrix and Jollies 27
Honor, Reputation, and Lunch 30
The Golden Rule and the Prisoner’s Dilemma 32
Dog Walking: The Game 34
Timing 35
Prisoner’s Dilemma as a Sequential Game 35
Repetition 36
Conclusion 37
Questions 37
Appendix 1: More Choices 38
Go Up A Hill… 43
3 Know Thyself: The Players 45
What Makes People Tick 46
Better Safe than Sorry: Maximin 48
A Method to the Madness: Irrationality 50
The Control Freak 51
You’ll Understand When You’re Older: Rationality and Age 53
Know Thyself 53
Too Tired to Think 54
In Hot Pursuit of an Irrelevant Success 56
Sincerity 58
A Sense of Fairness 59
Conclusion 60
Questions 61
4 Fool Me Once: The Value of Information 63
The Fine Art of Information Sharing 64
Separating the Wheat from the Chaff 67
What’s the Worst That Could Happen? 69
Diluted Signals 70
Mixed Signals 71
The Winner’s Curse 73
Negotiations 76
Information, Education, and Patriarchy 77
The Pitfalls of “Reply All” 78
Conclusion 79
Questions 80

5 Do Unto Others: Cooperation in Games 81


The Game Within the Game 82
Cold War Cooperation 83
The Nash Equilibrium 86
Punishment 88
Punishment, Mafia Style 90
Automatic Punishment 92
Let the Punishment Fit the Crime 93
Too Many Cooks Spoil the Broth 95
The Golden Rule: Words Well Chosen 96
Conclusion 98
Questions 98
6 A Stitch in Time…: Sequential Games 101
Sequential Games: The Setup and the Approach 102
The Nancy Reagan Game 102
Rollback Solutions and First-Mover Advantages 106
First-Mover Advantage: The Coke and Pepsi Game 107
Second-Mover Advantages 110
Home Field: A First-Mover Advantage? 111
The Farm System 113
Unpacking: A Stitch in Time 114
Moving a Big Ship/Building the Plane While Flying It 116
We’ll Cross That Bridge When We Come To It 117
Conclusion 118
Questions 119
Appendix 6.1: The “Marxism Versus Capitalism” Game 120
7 I Solemnly Swear…: Promises, Commitments, and Threats 123
Commitments 124
Commitments with Threats 127
Threats 128
Credibility and Leniency 129
Threats, Information, and Prejudice 130
Be Careful What You Wish For 132
Threats and Future Games 133

Promises, Promises 133


Reputation Redux 135
Game Structure and Strategic Moves 135
Bribes 136
Conclusion 138
Questions 138
8 Life Imitating Art: Repetition and Reputation 141
Prisoner’s Dilemma Redux 142
No End in Sight 146
What Goes Around Comes Around 150
Cheaters Never Prosper 150
Forever Prisoner’s Dilemma Games and Social Structure 151
Forgiveness and the Prodigal Son 153
The Good Lord Works in Mysterious Ways 154
The Devil Made Me Do It 154
On Mortality 155
Mortality, But Not People 156
Repetition and Reputation 157
Deadlines, Deadlines, Deadlines 157
Conclusion 158
Questions 158
9 If Everyone Jumped Off a Cliff: Brinkmanship 161
The Cuban Missile Crisis: A Brief Synopsis 162
Layers of Players 164
It’s a Slippery Slope 165
Tiananmen Square, Waco Texas, and Rationality 167
The Straw that Broke the Camel’s Back 168
The Tipping Point 169
Is This the Hill You Want to Die On? 170
Whistleblowing 171
If I Have to Stop This Car… 172
Hell and Brinkmanship 172
Brinkmanship and Credibility 173
Conclusion 174
Questions 174

10 All for One, and One for All: Collective-Action Games 177
What Collective-Action is NOT: Perfect 178
Representative Government 179
Follow the Leader Redux: Getting Lost in the Crowd 180
Cartels 181
Labor Unions 182
Fair Representation 183
Agenda Setting 185
Threats Versus Persuasion 186
The Median-Voter Model: Hot Dog Stands on the Beach 187
Third-Party Candidates and Parliamentary Governments 191
Whose Team Are You On? 194
Conclusion 196
Questions 196
11 Granny’s Gold 199
The Good Parent Game 200
Turn Signals 200
Secrets and Lies 201
Threats and Secrets 203
One Man’s Secret Is Another Man’s Lie 203
An Option Is Only Worth More Alive Than Dead,
in Finance Books! 204
Laws, Rules, and Regulations 206
Last Clear Chance 207
Ignorance Is Bliss 208
The Curious Case of Mixed Strategies 209
Holding Grudges 210
Follow the Leader/Blind Leading the Blind 212
Gaslighting, Brainwashing, and “Psyching Out” 213
On Being Fooled 213
A No-Win Situation 214
Information, Expertise, and Decision-Making: “Physician
Heal Thyself ” 215
Game Theory Goes to the Doctor 215
Fairness, Future Outcomes, and the “From Here” Problem 217
The Cold Medicine Problem 219
The Airport Game 219
Conclusion 221
Questions 222

Answers to Discussion Questions 225


Glossary 233
Index 241
About the Authors

Jay Prag is Clinical Professor of Economics and Finance at the Masatoshi Ito and Peter F. Drucker School of Management, where he has taught economics, finance, and strategy since 1986. He previously taught at Harvey Mudd College, Claremont McKenna College, and Pomona College and is a consultant to public industries and start-ups on market viability. Jay holds a Ph.D. in Economics from the University of Rochester. He is the coauthor of Financial Management for Executives, Macroeconomic Essentials, Microeconomic Essentials, and Covid-U.

Amanda Ishak Prag is Director of Marketing for one of the largest architecture and engineering conglomerates in the nation. A marketer for more than two decades, she has helped firms streamline internal and external communications and develop structures for organizational growth. Her research focuses on M&A integration and mass communications strategies. Amanda holds a Ph.D. in Management from the University of California, Riverside, an M.B.A. in Strategy from Claremont Graduate University, and a Bachelor of Arts in Economics and a Bachelor of Arts in Public Relations from the University of Southern California. She is a coauthor of Covid-U and additional empirical works.
Amanda and Jay reside in sunny and overpriced Southern California with
their daughter, Julianna, and their dog, Doc Brown.

List of Figures

Fig. 1.1 What’s in a game? Elements of games in game theory 7


Fig. 1.2 Game Structure 13
Fig. 2.1 The Prisoner’s Dilemma, payoff matrix 25
Fig. 2.2 Dinner out, part I 28
Fig. 2.3 The Prisoner’s Dilemma, restated 30
Fig. 2.4 Dinner out, part II 31
Fig. 2.5 Dinner out, with honor 32
Fig. 2.6 The Prisoner’s Dilemma, sequentially 36
Fig. 2.7 Winner, winner; what’s for dinner—in jollies 39
Fig. 2.8 No beets for you! 40
Fig. 2.9 No liver, no beans, no beets, oh my! 41
Fig. 2.10 Jack & Jill 43
Fig. 3.1 A big loss 48
Fig. 3.2 Maximin Jollies 50
Fig. 3.3 Joystick Choices 59
Fig. 3.4 Joystick choices—with guilt 60
Fig. 5.1 The disarmament game 84
Fig. 5.2 The dropped call game 87
Fig. 5.3 The Prisoner’s Dilemma, redux 89
Fig. 5.4 The Prisoner’s Dilemma, with punishment 89
Fig. 5.5 The Prisoner’s Dilemma, mafia style 90
Fig. 6.1 A tree! 102
Fig. 6.2 Smoking or non? 103
Fig. 6.3 Future careers 104
Fig. 6.4 A sweet choice (in revenue) 108


Fig. 6.5 Sweet choice, Coke as first mover 109


Fig. 6.6 Sweet choice, Pepsi as first mover 109
Fig. 6.7 A stitch in time 115
Fig. 6.8 Two populations, A&B 120
Fig. 7.1 A tale of two stores 125
Fig. 7.2 I scream, you scream 128
Fig. 7.3 A “Type A” mugger 131
Fig. 7.4 A “Type B” mugger 131
Fig. 7.5 Bribe time 137
Fig. 8.1 Butch and Sundance are recaptured 143
Fig. 8.2 … Here we go again: Butch and Sundance, sequential 144
Fig. 8.3 Butch and Sundance, combined sentences 145
Fig. 8.4 Jane’s & John’s play the odds 147
Fig. 8.5 Jane’s and John’s forever 148
Fig. 10.1 Hot dog stands on a beach, Day 1 188
Fig. 10.2 Hot dog stands on a beach, Day 2 188
Fig. 10.3 Hot dog stands on a beach, Day 3 188
Fig. 10.4 Hot dog stands on a beach, final result 189
Fig. 10.5 Hot dog stands on a beach, polarized 190
Fig. 10.6 Hot dog stands × 3 191
Fig. 10.7 No equilibrium to be found… 192
Fig. 11.1 The good parent game 201
Fig. 11.2 The airport game 220
Fig. 11.3 The airport game, for you 221
1
What Is Game Theory?

Your Granny knew game theory. She may not have called it “game theory,”
and she probably didn’t review a whiteboard of advanced calculus to make
every decision… But, Granny used it every day. We know this because she
understood people. From the nosy neighbor to the annoying aunt to the
helpful clerk at the grocery store, Granny knew how to navigate the world and
get what she needed. She did it with a lot of wit and the goal of being helpful.
We’re willing to bet you’d take your Granny’s advice over the predictions of a
mathematical model any day.
Granny really knew game theory. So did Shakespeare, Aesop, Confucius,
Machiavelli, most country singers, and the writers of every major children’s
book in history. From The Art of War to “In the Jailhouse Now,” many of the
songs, expressions, and parables we heard growing up are actually solutions
or approaches to the games we play in everyday life.
Game theory, also known as strategic decision-making, is a simple concept:
you and another person make decisions, and together your choices determine
a specific outcome. That’s really it.
So, why do game theory books have so much darn math? Well, it turns
out that all the little things matter. For example: How many people are making
decisions? Are they all choosing at the same time? Do I get to choose first and,
maybe, more than once? How often am I choosing, and with whom? These
factors (and so many more) impact how a decision turns out for you, and
math can help us quantify and model these factors to get to a predictable
outcome. Math is useful, just not in this book. Here, we’ll focus on under-
standing the basics. So, game theory helps us understand the structure and

probable outcomes for things like which freeway to take, how to get a raise,
and how to choose a movie on a Friday night.
Game theory doesn’t teach you how to win games. It teaches you how
to set them up to maximize the likelihood of your best option. What’s the
difference? Well, as much as mathematics and logic can provide humanity,
they’re not foolproof. A game theory model can do its best to include all
possible variables, but when human beings are making decisions in the real
world, things get missed. When people read a book or take a class in game
theory, they are often disappointed to discover they aren’t learning how to
win every game. There are no books or classes to teach you such strategies.
All game theory can provide is a way to understand the structure of a game
and ways to increase the likelihood of your best outcome.
Long before Brouwer, Morgenstern, von Neumann, and John Nash,
governments and societies set rules to increase the likelihood of best societal
outcomes for situations that happen every day. Laws determine how people
in modern society are allowed to play a game. When you drive up to an inter-
section with stop signs on every corner (a four-way stop), why not just always
go through first? That’s an approach to solving the game, right? Well, yes, but
it’s a unilateral decision. It doesn’t consider what other players might do (and
if everyone followed that approach, there would be a lot of crashes at that
intersection). So, the government says you have to stop at a stop sign and
look around, and society has given you a preferred way of making a choice
(usually, the first person to arrive or the person to the right—in the US—goes
first). Rules help set up a game by providing a framework for the players, and
any decision strategy is bound by those rules.
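The stop-sign convention just described (first to arrive goes first; on a tie, the driver on the right goes, in the US) can be sketched as a tiny decision rule. This is our own illustrative Python, not anything from the book or the formal literature; the function name and the compass-bearing encoding are assumptions.

```python
def who_goes_first(a, b):
    """Decide which of two drivers proceeds at a US four-way stop.

    Each driver is a dict with a 'name', an 'arrival' time in seconds,
    and a 'position': a compass bearing around the intersection
    (0 = North, 1 = East, 2 = South, 3 = West, seen from above).
    Convention: first to arrive goes first; on a tie, the driver
    on the right has priority.
    """
    if a["arrival"] != b["arrival"]:
        return a["name"] if a["arrival"] < b["arrival"] else b["name"]
    # A car at bearing p faces the center of the intersection, so the
    # car on its right-hand side sits at bearing (p - 1) % 4.
    if b["position"] == (a["position"] - 1) % 4:
        return b["name"]  # b is to a's right, so b goes first
    return a["name"]

you = {"name": "you", "arrival": 10.0, "position": 0}      # at North
other = {"name": "other", "arrival": 10.0, "position": 3}  # at West
```

With a simultaneous arrival, the car at West is to the North car’s right, so `who_goes_first(you, other)` picks the other driver. The point of the passage stands: the rule works only because both players know and follow the same convention.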
Organizations do it too. Disneyland welcomes thousands of visitors
daily—visitors from different cultures and countries with different expecta-
tions of waiting in line. The thing is, it’s Disney. You’re going to wait in
line. So, how does the mouse make sure things run smoothly? Each park sets
up clear boundaries with ropes, walls, fences, and hedges, to guide visitors
where they need to go. A trip around the Magic Kingdom shows you there
is no ambiguity about what visitors are expected to do. Compare this level of
control to your local grocery store the Wednesday before Thanksgiving and
you’ll start to see organizational boundaries in action.
Unilateral decisions, rules, organizational boundaries—now we’re getting
somewhere. Game theory is very much a byproduct of human evolution.
History, anthropology, and biology are full of examples where unilateral decisions (e.g., I should kill you before you kill me) gave way to more cooperative
solutions that helped our species thrive. When other people make choices
that ultimately determine an outcome, self-interest can turn out poorly for
everyone. It’s why Granny told us “Do unto others as you would have others
do unto you.” Granny knew game theory.
Strategic decision-making is not only limited to games here and now. On
the contrary, many of the choices we make are playing games with our “future
selves” (as Doc Brown might have said to Marty McFly: you have to think
fourth-dimensionally). Your choices today combine with your future choices
to determine an outcome later in life. The marketing slogan “What happens
in Vegas stays in Vegas” is an attempt to convince you that your current
choices do not affect your future outcomes. One trip to Sin City makes it
clear that the slogan isn’t really true. Most of the time the only thing that
stays in Vegas is the money that you gambled away.
Game theory in a classroom is rooted in mathematical theory. Strategic
decision-making, on the other hand, is the system we humans have grown
up within. You learn a lot about life through experience and daily trial-and-
error. That’s the point of many of Granny’s expressions. She’s trying to save
you from some potentially painful learning curves. On a good day, Granny
understood context and how to set up a game for her benefit. When you told
her you were going to Vegas, she might have said, “waste not, want not.” She
was warning you about the value of saving and the risks of gambling. On the
other hand, if she was really cool, she might have just asked you to put $10
on red.
Granny and all of the purveyors of worldly wisdom give us approaches to
problem-solving, but they didn’t have time to teach all of the nuances: when
does that approach work? Why is it better? Why does setup matter? This book
will fill in a few of those details.
Many games that Granny played during her lifetime look different today.
We’ll teach the game theory behind the expressions to see when, or if, they
still apply in the modern world. When Granny’s Granny taught her “be
careful what you ask for, you might get it,” there was no Internet, Facebook,
AI, or screenshotting. Granny might not know that in the current world, the
expression is cautionary in a lot of new ways.
“You can’t tell a book by its cover” is an expression that teaches us not
to overstate the importance of external factors. Arguably, this was Granny’s
way of saying “don’t be prejudiced,” but it’s going to have special meaning
in this particular book. This book’s title makes game theory seem folksy and
perhaps even easy. It is not. Game theory is complicated, subtle, and easy to
get wrong. Granny was smarter than any PhD. So, we might as well start
with her best hits.

What Is Game Theory?


Spoiler Alert: it’s not really a game, and it isn’t really a theory.
Game Theory is an attempt to formalize the structure and outcomes of a
situation when two or more people are making choices that together lead
to an outcome. No one player can unilaterally determine the outcome;
it depends on the decisions of both people. Strategic Decision-Making is
the process of making choices and decisions knowing that there are other
“players” making their own choices and decisions, and that the combination
determines how you all end up. Game theory is the discipline that looks at
how these decisions are made, with an eye toward how to achieve mutually
beneficial outcomes.
That’s easy! In everyday life, we always think strategically, analyze a
situation accurately, run the numbers, and make the best choices, right?
Nope.
In the real world, we are emotional, spontaneous, unpredictable, and
sometimes wrong. You use game theory every day. It’s innate, it’s automatic.
You do it when you’re in a relationship. You do it when you are a parent.
You do it when you are driving. You do it when you decide to have another
helping of ice cream with sprinkles. Game theory began by observing human
decision-making and applying math to explain patterns and predictability. At
its foundation, though, is the thought process (or lack thereof ) we engage in
multiple times a day.
Let’s do the four-way stop game, now with more detail.

A Four-Way Conundrum

You’re driving your new car, and you arrive at an intersection at the same time another car arrives across from you, signaling its intention to turn across your path (in the US, it has its left blinker on). Clap
those hands, it’s game theory time.
If you both go, you’re going to crash. Let’s assume that’s not the outcome
either of you wants.
If neither of you go, you’ll just wait there forever. That too is not, you
know… a great outcome.
You might be thinking “there are driving rules and regulations that tell us
what to do.” Are you sure both of you know the same rules and always use
them correctly?
You’re not in a hurry and you’re driving your brand-new car, so you wave
at the other driver to go. They turn out of your way first, you drive through
the intersection second, there are no collisions, and everyone makes it to their
destination.
You just did a lot of game theory! You and the other driver made choices—
you waited and the other driver proceeded—and that combination of choices
led to a good outcome. You and your car made it to your destination and
you’re happy with how things turned out. In the game’s payoff matrix, a list
of potential outcomes, you are both better off. The other car used a literal
signal to tell you where it needed to go. You used a signal, a wave, indicating
you would let them go first. All is well.
The structure of the four-way stop above is what we call a two-player, two-
choice, simultaneous game: you each made choices at the same time. Game
theory will help us break down other variations of this everyday choice. For
example, would you have done the same if you were in a beat-up, hand-me-
down car instead? If you were in a rush? As we will see, structure combines
with preferences to determine an outcome.
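The two-player, two-choice, simultaneous structure can be written out as a payoff matrix in a few lines of code. The numbers below are our own illustrative payoffs (a crash is worst, a stalemate mildly bad, an orderly pass best), not figures from this book.

```python
# Maps (your_choice, their_choice) -> (your_payoff, their_payoff).
# The payoff numbers are illustrative assumptions only.
PAYOFFS = {
    ("go", "go"): (-10, -10),    # crash: the outcome nobody wants
    ("wait", "wait"): (-1, -1),  # both wait forever: also not great
    ("go", "wait"): (2, 1),      # you pass first, they wait briefly
    ("wait", "go"): (1, 2),      # they pass first, you wait briefly
}

def best_response(their_choice):
    """Your payoff-maximizing choice, taking the other driver's as given."""
    return max(("wait", "go"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])
```

Notice that `best_response("wait")` is `"go"` and `best_response("go")` is `"wait"`: your best move depends entirely on what the other player does, so neither driver can unilaterally force the good outcome.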

[box]
Four-Way Stop—Rush Hour Version
While the four-way stop is a nice, generally painless type of real-world game, we
can complicate things by adding more information. When you and another
driver get to a four-way stop at 1:30pm on a lazy Thursday afternoon, there’s
usually a lot of cooperation and the game solves quickly and easily. If you play
the same game with the same people and at the same intersection at 7:45am
when people are trying to get to work or get their kids to school (or both!),
things are very different.

As the clock ticks toward “you’re late,” you find yourself blowing your horn at
drivers. That’s a very different type of signal. You might also coast through the
sign, rather than stopping. Your nice, generous wave changes to a not-so-nice
one-finger gesture. Time is just not on any player’s side.

There’s a deeper issue here. As we discuss our games and work through the
paths to solutions, think about how choices and outcomes change with factors
like duress. How you perceive and play games changes with the circumstances
under which the game is being played.
[end box]

Game Structure
When we talk about structure, we mean all of these nuances and factors that
can affect players, choices, and outcomes. Let’s not wander too far from our
first game, the four-way stop. What was the structure of the game? There were
two drivers, so two decision-makers; they had two choices (wait or go); and
they both had some internal desired result. They also had constraints: they
were in a hurry, or they worried about their new car. Implicit and important
to the outcome is an assumption of what’s often called rationality. There’s
more to rationality (a lot more), but for now, let’s just say neither had an
incentive to crash.

Zero-Sum Games

The payoff of a game should be familiar for most of us: it is the set of possible
outcomes for each player based on their decisions. If you’re a sports fan,
a payoff is the potential score from a play or series of games. More often,
though, it’s simply a matter of somebody winning and somebody losing.
This is called a zero-sum game, where there’s one winner and one (or many) losers. Zero-sum games got their name by applying numbers to a win/loss combo: a win is +1, a loss is −1, so, technically, the sum is zero. There is no middle ground. There’s no possibility for everyone to come out a winner. These definitions are perfectly fine when we’re thinking about payoffs (e.g., +1/−1), but the vast majority of interpersonal games we play every day, the sort of games we’ll be discussing, are not zero-sum games.
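The +1/−1 bookkeeping is easy to check mechanically. The sketch below is our own illustration; “matching pennies” is a standard textbook zero-sum game, not one of this book’s games.

```python
def is_zero_sum(payoffs):
    """True when the two players' payoffs cancel out in every cell.

    payoffs maps (choice_a, choice_b) -> (payoff_a, payoff_b).
    """
    return all(pa + pb == 0 for pa, pb in payoffs.values())

# Matching pennies: one winner (+1) and one loser (-1) in every cell.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# A game where both players can come out ahead is not zero-sum.
both_can_win = {
    ("split", "split"): (2, 2),
    ("split", "grab"): (0, 3),
    ("grab", "split"): (3, 0),
    ("grab", "grab"): (1, 1),
}
```

Here `is_zero_sum(matching_pennies)` is true while `is_zero_sum(both_can_win)` is false, which is exactly the distinction drawn above: most everyday interpersonal games fall into the second category.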
Arguably, zero-sum games are easier to figure out. More points = win; not very much to unpack here. Having no middle ground between winning and losing removes the need to think in terms of “better off” or “worse off.” Real-world problems are not so easy to solve. When you “get your way” in a
decision with your significant other, are you both always better off? You won!
But it doesn’t always feel like a win, right? Does getting your way ever conflict
with your relationship dynamic?
Sometimes, we wish we could resolve every conflict with a game of pickup
basketball, but that’s not real life. The thought process you just went through
is precisely why zero-sum games are easier than most interpersonal games,
and why most games in life are not zero-sum.

What’s in a Game?

The traditional approach to game theory starts by defining three elements


present in every game: the players, their choices (often called a “choice set”),
and the outcomes. We call these the “who,” “how,” and “what” of a game,
something like this (Fig. 1.1):
Most books start with the outcomes. These are called payoffs, or simply
the results of the game. Remember, the goal of game theory is to find the
“best possible outcome” and see how, if at all, players can get there.
Starting with outcomes is, well, backwards.
Humans evaluate their environment every minute of the day. Then, they
make choices. In this book, we show that the setup of the game—under-
standing the players and the choices—is, in fact, the most important aspect
for players. If we want to test theories and apply them to the real world, we
should take the same approach humans do in everyday life. Think of it like
this: you can negotiate a four-way stop better now than you did when you
first started driving. Why? Even though the outcome is the same (don’t crash),
your older, wiser, more-experienced self knows how to read other players and
the potential choices better. Understanding players and choices first helps you
win more games.
Our goal is to examine the “best possible setup” to achieve an outcome
that may change over time. So, let’s start with the Players, then Choices. In
theoretical games, the outcome only needs to be proven (usually mathemati-
cally) once. In applied game theory, results need to be repeated to have any
validity. At the end of this book, we hope the reader has a broader under-
standing of how to evaluate players and choices toward better outcomes across
many types of games.

WHAT’S IN A GAME?

PLAYERS: Preferences, Biases, Needs, Capabilities
CHOICES: Frequency, Timing, Location, Availability
OUTCOMES: Payoff / Results

Fig. 1.1 What’s in a game? Elements of games in game theory


8 J. Prag and A. I. Prag

Understand the Players


One of the things that we learn and teach in the realm of game theory is that
you have to know things about the other players before you can determine
your approach to the game. This issue gets us into a lot of tricky, complicated
problems. After all, we’re dealing with people, and human beings are not so
easy to fit into neat little predictable groups (no matter how hard Marketing
and Advertising firms try).
Consider the difference between parenting an infant and parenting a fully
verbal (perhaps precocious) toddler. The infant has no ability to process or
even react to your choices in a strategic way. She will cry until her problem
is fixed (she is fed, her diaper is changed, she gets all the snuggles). Some-
times, she’ll just cry for no obvious reason. Waxing poetic about the trials of
everyday life probably isn’t going to get you any peace and quiet. The differ-
ence in communication methods is a clear barrier, and you have to evaluate
the game based on the infant’s capabilities.
A toddler, by comparison, is learning about games and game theory. He
won’t be very good at it yet, but he is starting to make active choices that
he thinks will get his best possible result. He’ll test boundaries, play repeated
games to see what affects the outcome, and learn what works. He also starts
to understand the consequences of his actions. Maybe waxing poetic doesn’t
work here either, but a stern talking to just might impact the outcome.
Playing the “just give me a minute!” game with a toddler could work.
Parenting strategy is different for parents of infants (who don’t understand
consequences) and toddlers and teenagers (who, hopefully, do). Without a
sense of time and outcomes, an infant is rarely ever “playing” a game.
We say “rarely,” because, at some point, instinct does seem to kick in. By a
baby’s rationale: “if I cry, I get held. If I cry, I get food. If I cry, I get warm.”
At a certain point, however, this becomes learned behavior, and it begins to
influence how a baby navigates his world. Indeed, some child development
experts say to refrain from rewarding crying after a certain age. If it becomes
standard practice to always “win” by crying, a toddler grows up characterizing
the game very differently than you might want him to. Babies begin pretty
self-interested (and they’re so cute!), but raising a self-interested child is not
usually a goal of most parents.
Selfishness or self-interest poses different levels of risk depending on your
relationship with the other player. Strangers and friends are very different
players in Strategic Decision-Making. In this book, we’ll talk a lot about the
Golden Rule: “do unto others as you would have others do unto you.” Sure,
we wish we could always make the cooperative, best-for-both-of-us choice in
all games, and we hope the other player will be doing the same. That’s a lot
more likely to work if the other player is a friend, a family member, or just
someone you interact with on a regular basis (neighbors, co-workers, etc.). It
may not work so well with a passing stranger, an enemy, or a faceless gnat in a
comments section. It’s harder to see direct consequences if another individual
just isn’t visible or known to you.
Total strangers aren’t necessarily mean or universally self-interested; they
just don’t know you and you don’t know them. So, if the downside of
acting cooperatively when the other player is being self-interested is relatively
big, neither player will act cooperatively. As we’ll see, this outcome varies if
the players are strangers or friends, or if they’re playing the game once or
repeatedly over time.
[box]

Wisdom and Truth


Familiarity affects another issue: truthfulness. As Granny used to say, “out of
the mouths of babes…” Young children are often credited with having clarity
and intelligence that belies their age. Without wishing to deflate the proud
parents of wise children: what you’re often hearing is really straightforwardness,
not wisdom. While they are learning about strategic decision-making, children
usually don’t have the capacity to be political. They are simply more inclined
to state the truth than they are to finesse reality.

Suppose Mommy wants Daddy to take her to her office Christmas party, but he
really needs a haircut first. Daddy isn’t prone to getting haircuts. Mommy says,
“darling, I’m swinging by the barbershop later—can I set up an appointment
for you?” Junior says, “Dad, you look like a yeti!” Mom is acting strategically;
Junior is telling the truth. So, when you hear the expression “honesty is the
best policy,” remember that in a game-theoretic context, honesty—best policy
or not—and wisdom are two different things.

[end box]

Rules Versus Rules of Thumb

In formal, what might be called “classroom” game theory, we follow
approaches to games that lead to solutions in a mathematical way. In that
context, we set up mathematical barriers, or rules. In the real world with real
people and impossible-to-know things like precise outcomes of a game, the
rules that we’ve learned in the formal setting often become joined with rules
of thumb. Academics call these heuristics. Rules of thumb are just inclinations
(fancy people call this instinct), rather than absolute ways to make choices.
They’re the guidelines we hone over time for how to make our decisions.
Instinct and self-interest go hand-in-hand. A baby knows herself based on
satisfaction. You grow into an adult knowing yourself first. You know what
makes you happy and sad. Over time, you get to know about those close
to you, what makes them happy and sad. This is how your rules of thumb
develop, and they help you achieve an optimal outcome pretty well when the
other player is a relative. The less familiar the other players are, however, the
less you know about them. In that case, you’ll have to rely more on set rules
for making your decisions.
Rules of thumb change games via familiarity. We should note, however,
that something unfamiliar is not the same as something different. You can
play games with rules of thumb even when the other player is different if you
have a pretty good idea of “what makes them tick.” If you are a kind, caring,
giving person, but you know the other person in the game is selfish and
uncaring, you won’t make cooperative choices even though you would like
to do so. The selfish person may be familiar, maybe a relative or somebody
known well to you; but you are diametrically opposed in how you operate.
Different and unfamiliar are not the same.
If used incorrectly, rules and rules of thumb can give you inferior outcomes
for no good reason. Such is the case with bad assessment criteria like prej-
udices. Prejudice makes you think you are familiar with a person’s choice
set, but you are often wrong. Information is key in playing a game correctly,
and an incorrect perception of another can lead to bad information and bad
outcomes.
[box]

You Can’t Teach Height


All rules are made up—some in alignment with game theory and others not
so much. In this book, you’ll see examples of rules that exist to take away or
create strategic (game-theoretic) advantages, such as laws that determine who
goes first at a stop sign, who gets to move first in chess, blocks you can or can’t
rezone, or what happens when you take things that don’t belong to you.

One big example of a strategic advantage is inherent skill. Many rules are
in place to limit (or at least control) the impact of skill on a game. Rules
in competitive sports, for example, focus on leveling the playing field (get it?)
regarding athleticism, age, weight, and training time. A player or team with the
most skill is supposed to win most of the time, right? Without rules, probably
not.

Take basketball.

Basketball’s origins date back to the 1890s when a Massachusetts gym teacher
wanted to give people an indoor sport to play during winter months. He put
a fruit basket on the wall and gave players a point for getting a ball into the
basket.

From there, things were formalized (and improved, considering the original
basket didn’t have a hole in the bottom), and eventually we got to modern
professional basketball with a hoop with a net set ten feet above the ground.
Jump ahead (seriously, get it?) a hundred years or so, and you find many of
the successful basketball players are, well, tall. If you ignore the other rules, the
easiest way to get a point is to drop the ball directly in the hoop. All the
training in the world doesn’t matter compared to that ability. Granny says:
“you can’t teach height.”

Without rules, basketball teams would compete to recruit only the tallest indi-
viduals; not the fastest, not the strongest, not the most agile. But there are other
rules. For example, you get more points if you make a basket from far away.
Interestingly, the popularity of the game increased when they changed to “three
points beyond the free-throw line.” Shorter, skilled, agile players can succeed
if they are good at making “three pointers.” The rule made skill, training, and
agility relevant again.

Game theory is concerned with how your choices can change outcomes, and
that will always depend on the rules of the game. If rules are written (more or
less) entirely to benefit a set of inherent skills, you won’t have an opportunity
to use much game theory.

Food for thought: Given the importance of training, who’s probably better
at putting together a successful basketball team—the coach or the general
manager?

[end box]

Choices, Choices, Choices!


In your everyday interactions, choices can be simple (regular or decaf?) or
extremely complex (move the family to Houston?). When we find ourselves
in a situation that calls for the tools of game theory, we depend on our
ability to define a clear set of choices. As humans, we parse decisions out into
a million different ways of actually doing them (e.g., you can roll forward
slowly instead of coming to a full stop), but for ease of analysis, we’ll assume
choices are binary. This or that (e.g., everything that allows the other driver
to go first counts as “waiting” on your part, whether you’re creeping forward
or not).
We acknowledge that there is always an alternative to all of the choices in a
game: you could choose not to play. This and other options will be discussed
later, but it’s important to recognize this is almost always an option in daily
life. Sometimes, Granny advised us to “just walk away.”
Perhaps the most important lesson of growing up is how to effectively navi-
gate multiple choices. Not playing is always an option; but there are several
others (procrastination: choose later; abdication: make someone else choose;
impatience: choose quickly and get it over with) we employ. Life exists some-
where in between a two-choice, neat, straightforward game on a whiteboard
and the processing of millions of pieces of information we do every day.
Our games will be as applicable as we can make them, but there’s always
room to interpret choices for yourself.

Payoffs and Outcomes


We’ve defined players and examined how choices are organized. When it
comes to payoffs, however, the discussion is a bit more complex. Your average
game theory book shows you matrices and tree diagrams with several possible
outcomes. We’ll use a few of those, sure. Sometimes, payoffs are presented in
currency or revenue or cost. Other times, they are communicated in some-
thing less tangible called “utils,” short for units of utility. We can
thank Economics for that one. (Side note: we have yet to meet anyone who
can accurately describe what a “util” really is. We think it’s a type of guinea
pig or maybe a small fruit.)
Payoffs and outcomes, however you specify them, are just numbers that
illustrate how players feel about a particular decision.
Outcomes are tied to the setup of each game. They come from the prefer-
ences, biases, hopes, dreams, and prayers of each player. If you buy a donut
for $2 and you love donuts, your payoff (your happiness) might be 100. If
your friend buys the same donut for the same cost and she doesn’t like donuts
at all, her payoff might be −50 (because she spent money on something she
doesn’t even like). If your brother kind of likes donuts, but not as much as
you, his payoff is a solid 60. Each of you paid $2 for each donut, but your
happiness—your utility—is vastly different. The payoff is not your financial
gain or loss; it is how much happiness you gain/lose based on how the game
plays out.
It might be easier to explain it the other way: if each game’s payoff was
structured only in dollars, you wouldn’t have enough information to make
a real decision. How much do those dollars matter to each player? Is the
person money-driven or motivated by something else? Until we understand
how each player feels about possible outcomes, we can’t figure out how a game
works. Different players—different human beings—have individual choice
sets, so the outcomes depend on your ability to understand and process these
preferences in real time.
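If it helps to see this spelled out, the donut example can be sketched in a few lines of Python. This is purely an illustration: the utility numbers are the chapter’s made-up values, not anything you could measure.

```python
# Same $2 price for everyone, but the payoff is happiness (utility),
# which differs by player. These numbers are illustrative, not measured.
price = 2  # dollars paid, identical for every buyer

utility_from_donut = {
    "you": 100,      # loves donuts
    "friend": -50,   # dislikes donuts and regrets the $2
    "brother": 60,   # kind of likes donuts
}

for player, payoff in utility_from_donut.items():
    feeling = "better off" if payoff > 0 else "worse off"
    print(f"{player}: pays ${price}, payoff {payoff} -> {feeling}")
```

Same dollars in, very different payoffs out; that is why payoffs, not prices, drive the analysis.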

The Payoff Matrix

When we organize our thoughts around Players, Choices, and Outcomes, we
often simplify things in a Payoff Matrix—an easy way to show the structure of
a game. Throughout this book, we’ll employ many matrices to show different
games. Figure 1.2 shows a sample game structure matrix.
First, on the outer edge, we identify the players in each game. For this
foundational text, we’ll usually use two players, but you can always create a
matrix with more.

                                PLAYER 2
                          A                  B

              A    payoff, payoff     payoff, payoff
  PLAYER 1
              B    payoff, payoff     payoff, payoff

Fig. 1.2 Game Structure



Second, choices are presented along the top and side of the matrix. As we’ve said, these are the
only things you can actively decide on in a game.
Finally, payoffs are the outcomes for each player based on potential choices,
presented in the matrix itself.
The matrix form of games (spoiler alert: there are other ways to illus-
trate games!) is one of the easiest and most straightforward ways to visualize
choices and outcomes. Don’t be fooled by the presence of payoff informa-
tion. Remember: even though you have full information, the only thing you
control in the game is your choice. Payoffs inform your choice, but
you can’t directly choose what you get.
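The matrix idea can also be sketched in a few lines of Python. The payoff numbers here are invented purely for illustration; the point is that “solving” a game is just comparing numbers, not doing math.

```python
# A 2x2 game as a dictionary: keys are (Player 1's choice, Player 2's choice),
# values are (Player 1's payoff, Player 2's payoff). Numbers are made up.
game = {
    ("A", "A"): (3, 3),
    ("A", "B"): (1, 4),
    ("B", "A"): (4, 1),
    ("B", "B"): (2, 2),
}

def best_reply(their_choice, player_index):
    """Pick the choice with the bigger payoff: a comparison, not calculus."""
    def my_payoff(choice):
        cell = (choice, their_choice) if player_index == 0 else (their_choice, choice)
        return game[cell][player_index]
    return max(["A", "B"], key=my_payoff)

# If Player 2 picks "A", Player 1 compares 3 (choose A) with 4 (choose B):
print(best_reply("A", player_index=0))  # prints B
```

In these invented numbers, B happens to give the bigger payoff no matter what the other player does; you control only the choice you feed in, never the payoff you get out.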

Learning Versus Learning Game Theory


“Learning by doing” is an expression that has as many limits and exceptions
as it does applications. “Learning the hard way” means you’re learning by way
of painful failure or mistakes. You might learn the hard way that some people
are more selfish than you, that you can’t always be selfish, and that you can’t
make choices with all people the same way. That’s learning about the other
players, choices, and outcomes in your games.
We’re going to focus on learning how to play games, and how to specify the
factors of each situation you’re in. With that knowledge, you should be able
to skip a lot of mistakes that come from misunderstanding the choices before
you. You might find yourself in a predicament one day with someone at work
and a friend will say “they’re playing you.” That’s your friend telling you that
your work colleague better understands the game-theoretical situation than
you do. It’s a good time to apply some game theory.
You might also learn when not to apply game theory. You’ll sometimes hear
sports commentators say that a coach or a player is “being too cute.” This
odd expression actually means they are doing something tricky and outside
of the playbook. It’s unsanctioned, off-strategy—it goes against everything
the payoff matrix tells you to do—but it just might work. Game theory and
Strategy can be thrown out the window sometimes.
For movie buffs, there’s a famous scene in “The Princess Bride” when the
bad guy, Vizzini, is trying to figure out which of two wine goblets is poisoned.
Vizzini uses a lot of game-theoretic reasoning, ultimately making his choice
after believing he tricked the masked man into revealing the answer. When
Vizzini keels over dead, the masked man reveals that both goblets were poisoned,
but he was immune. In fact, the game was won when the masked man convinced
Vizzini it was a game and not a foregone conclusion.

[box]

Practice Makes Perfect


Here’s some circular logic for you: strategy can beat skill, but you have to learn
strategy like a skill. It is important to learn, evaluate, and revise your strategy
over sequential games (and, given a long life, it’s always a sequential game).

Skill is based on repetition, and the goal is to have it become innate, unre-
hearsed, and automatic. Athletes, coaches, and commentators use metaphors
like “muscle memory” to describe the desired outcome of practice, which
isn’t exactly to “make perfect” but rather to automate performances so they’re
not affected by the stress of the game. Similarly, you’ll hear people talk
about “clutch performance,” good performance under stress. Is being clutch
a strategic choice, or simply the result of repetitive practice and innate skill?

A good strategy, on the other hand, is adaptive. The goal of this book is to show
good examples of “think about it, then decide.” Mastering it means entering a
mental state where you are ready to make a decision. In studying, practicing,
and understanding game theory, you’ll make better choices even in stressful
situations. The choices are not automatic, but the process of thinking through
your choices is.

Think everyday decisions aren’t stressful? Think again! Game-theoretic situa-
tions, as we’ll see, can be literal life-or-death. In basketball, musical perfor-
mance, treaty negotiations, and bedtime stalling tactics, routine and practice
make perfect. Game theory can help change your performance and better your
outcomes even against more “skilled” players because you’ve learned how to
evaluate a game.

[end box]

What About the Numbers?


A question that game theory students frequently ask is “does it have a lot of
math?” In its commonly taught form, yes, tons. In its most useful form, no.
You usually don’t take a graphing calculator along to get engaged.
When we start describing and solving games, we use a “payoff matrix”
that has a few numbers. In showing a game’s payoffs, we’ll use numbers of
something (like dollars) that each player will get depending on who does
what. We’ll then try to determine what each player should do by assuming
they will want their best number (usually the biggest number). Saying that
ten is bigger than five is not math. It’s a simple comparison. So, assuming
you know that ten is bigger than five, this book won’t use a lot of math.
You might say “hey, you told me this didn’t have a lot of math!” The matrix
is just an illustration. This game theory book does not have calculations, it
doesn’t take derivatives, and it won’t require anything more than what most
people can do in their head. If you know 3 is less than 4, you’re all set.
There’s a whole other part of game theory that’s full of math. It works out
complicated versions of some of the problems we’ll be looking at and other
problems that are beyond the scope of this book. The math version of game
theory has its uses (proving Granny right in a mathematical sense). Other
times, it proves things that theorists like to prove for the sake of proving
them. Either way, math has its place, but not in these pages.
We’re taking a humanist approach to game theory: learn how to identify
the situation you’re in—the players, the payoffs, and the rules—and make a
strategic, beneficial choice. You already do this daily, and you learned to learn
over time. To get to a more strategic level, we’ll use a few numbers, but not
math.
Even with our approach, though, game theory is a tricky discipline. In
a classroom, we show students a bunch of games with straightforward rules
and known outcomes and illustrate how different approaches work for solving
those games. In doing this, we are practicing. We are abstracting from reality
and doing drills. We are in the batting cage, running scrimmages, practicing
our free throws, on a driving range, and hitting tennis balls shot to us by
a machine. We are doing “wax on, wax off”—practicing the mechanics of
decision-making so we’re ready when a real-world opportunity presents itself.
If that reference made you smile, you already know what a lightbulb
moment is. For the uninitiated, this is a famous quote from the 1984 film
The Karate Kid. An old karate expert tells the teenager that he will teach him
karate, but first, he has to do a lot of chores. Clean the floor, paint the fence,
and wax the cars. In each case, there was a specific way the teenager was told
to do the task: back and forth, up and down, circles to the right then circles
to the left with each hand. Exhausted from days of chores and having seen
no karate lessons, the young man tells the old teacher he’s had enough and
he’s leaving.
The teacher says, “I have taught you karate!” He proceeds to show the boy
how each chore was training his body to do certain things. “Wax on, wax
off,” circles to the right and circles to the left with each hand, is how you
deflect your opponent’s attacks. Aha!
Once the lightbulb was on, the young man did what he was asked to do
knowing it had a point and it was teaching something important that had
nothing to do with waxing or painting. Why can’t we teach every skill and
lesson directly, without drills? To quote a proverb, “the journey of a thousand
miles begins with a single step.” If you told a 6-year-old, relatively new piano
student they were going to play Rachmaninov’s Concerto Number 3 and you
showed her the sheet music, she might quit on the spot. So, you start with
chopsticks and eventually, she plays a concerto.
Such is the case for most game theory examples. We aren’t doing the Pris-
oner’s Dilemma game to teach you what to do when you’re being interrogated
by the police (hopefully, that particular situation won’t ever apply to many
of you). The setup, approach, and ultimate solution to that particular game
shows up many times and in many places, so we’ll refer to it a lot.
Often, learning game theory feels like trying to learn the unlearnable and
knowing the unknowable: what people want, believe, know, and care about. Is
it easier to play games with family members than with strangers? Sometimes.
Is it easier to do games that are one time only as opposed to repeated play
games? Sometimes. Is it easier to get a straight answer to a question from
anyone other than these authors? Okay, well this—sometimes—is what the
book is all about. Game theory is rooted in context. “When” does “what”
work—and “why.”
[box]

A Real-World, Nontraditional Application: Leadership


We characterize a lot of this introduction as games between equals or people
of equal power (and, yes, we are arguing that a parent and a crying baby
have equal power). However, game theory is often used in hierarchical rela-
tionships, with leader(s) and follower(s) and mutual decisions and outcomes.
A lot of what’s involved in successful leadership is best explained using a
little game theory. Recall the alternative name for game theory is strategic
decision-making. That process (really, all decision-making processes) involves
information, delegation, trust, relationships (do people like me?), and many
other factors.

Consider information, the topic of Chapter 4. Followers are often closer to the
front lines and thus have critical information. Suppose that reliable information
is crucial for a leader’s ability to make correct decisions and for a situation—a
game—to reach its best outcome. Now, suppose the leader hasn’t developed
trust or has proven to be too self-interested. Will the followers tell the leader
that crucial information? Suppose that, earlier in the relationship, a follower
did communicate relevant information, and it was ignored or used incorrectly.
Will the follower try again?

Along similar lines, economics uses a concept called Agency Theory (a.k.a. the
Principal–Agent Problem) to discuss challenges in delegating responsibility to
those under you. Leaders should ask themselves: what do your team members
(followers) want to see happen to you? If they don’t like, trust, or believe in
you, will they tell you what you need to know to allow you to succeed? This
is game-theoretic stuff. As we’ll see, there are important connections between
leadership, trust, and game theory.

[end box]

What Isn’t Game Theory


There are many times when you and another person interact to determine
an outcome, and it has nothing to do with game theory. When you run to
the end of the block faster than your 3-year-old, you aren’t using game theory;
you’re using the physical skill that you possess. You’re bigger and stronger. That’s
all. One might argue that you made a bad game-theoretic choice, depending
on how it made your toddler feel or what she did when you left her behind,
but that’s not what we were talking about. We’re saying that some outcomes
between people are determined purely by physical differences or differences
in skills. Those aren’t the subject of game theory.
Game theory also doesn’t justify statistical choices, like buying a lottery
ticket every week from the same store at the same time. You never win the
big jackpot. There’s nothing wrong with your ticket buying “strategy,” it’s just
not the sort of strategy that we’re talking about in this book. You’re basically
being superstitious about something that is actually determined entirely by
statistics and probabilities. There are a lot of books that can help you with
that kind of game, but this isn’t one of them.
Game theory is used when the choices and decisions made by you and
another person determine the outcome together. Knowing that you’re in such
a situation and being aware of how your choices interact is the key to using
game theory.
Of course, we sometimes attribute success to luck when in fact we stum-
bled into a successful game. If you “get lucky” repeatedly, you’re probably
playing a game that’s part skill, part strategy. When you go out to a social
gathering with your friends and you say you hope you “get lucky,” you really
mean you hope you can play the game. Your payoff at the end of the night
has very little to do with luck.

Conclusion
Game theory is everywhere! It is family dynamics, work dynamics, political
negotiations, grocery shopping, bathtime with Junior, and landing on the
moon.
So, we’ll relate many popular expressions to game theory. If your interpre-
tation of timeless wisdom is different from ours, don’t let that prevent you
from seeing the usefulness of our alternative view. We’re certainly not saying
that the only thing Granny knew was game theory.
Game theory and Granny’s wisdom—both—teach you to be considerate.
It’s why people like living where other people are nice to them, and it’s why
we are taught to trust, respect, and care about others. This book also explores
the downside of human interaction, how we treat those who are naïve and
gullible. Strategic decision-making, like all things, can be misused.
Ultimately, game theory is a process. It is an approach to situations that
makes you ask things like “what do you know?”, “what makes people tick?”,
and “what are my choices?” It will explain how, why, and when commitments,
promises, and threats work. It will make you think a little harder about those
words of wisdom that you heard your entire life and get a better sense of their
value. Hopefully, by the end of these chapters, you’ll understand when these
expressions apply and when they do not.

Questions

1.1. Parenting is all about game theory. Most of us have our first experience
with strategic decision-making when we were parented, but we didn’t
know it. Let’s try to look back and figure out our parents’ strategy:
“Do your homework! Practice your piano or your gymnastics or your
free throws! Do something comparatively boring instead of playing that
game on your smartphone.” The 12-year-old thinks that her parents
don’t understand her or love her. The opposite is actually true. How are
these commands part of a game-theoretic situation? What “advantage”
do parents have in helping their kid make better choices?
1.2. Granny gave you two bits of advice: “A fool and his money are soon
parted” and “Do unto others as you would have others do unto you.”
One day, you were swindled out of $100 by a woman who told you
(falsely) that she could tell your future by reading your palm. Which of
Granny’s tidbits of advice was not heeded by the palm reader?

1.3. Congrats—you’re in a long-term relationship with someone you’ve fallen
in love with. Discuss what’s wrong with the popular expression “love at
first sight.” (Hint: what information do you have?).
1.4. You took a class in game theory (or maybe finished reading this book)
and decided to apply your new approach in Las Vegas. Not being experi-
enced in many of the games there, you decide to play the lucky number
game; you put a pile of chips on a number and the woman behind
the table spins a giant wheel. Now, reread this setup carefully: which
word lets you know that game theory—strategic decision-making—is
not going to be useful?
1.5. “The Game of Life” is an old box game, now a computer game, and a
book title. “The Games People Play” and “The Circle Game” are both
song titles. If we were writing a book of lists, you could make a long
one of titles that include the word “game.” Most of these references are
actually to game-theoretic games, not competitive sports games.
As you understand it at this point, what’s the fundamental difference
between the two?
2 No Honor Among Thieves: The Prisoner’s Dilemma

We’re going to learn game theory by way of the familiar: typical expressions,
known life lessons, and easy stories. As we invoke these familiar things, we
add in some unknowns and ambiguities, helping us build an approach to
problem-solving that does two things. First, it will help you recognize game-
theoretic situations—circumstances where you alone cannot determine the
outcome. Second, it will highlight the deeply personal factors that we must
consider if we’re going to succeed at real-world, non-mathematical game
theory.
Let’s hit the ground running with the most famous game of all: The
Prisoner’s Dilemma.
The Prisoner’s Dilemma is a good starter because it gives us a set of
parameters, a structure, that is relatable and applicable to real-world situations.
Structure, as we will see, is the most important part of any game. This setup
is the foundation for understanding returns, expectations, and goals of each
player. Prisoner’s Dilemma will also allow us to explain and contextualize
many oft-heard expressions in future chapters.
Before we get started…
… as suggested in the introduction, we have to get used to unpacking
expressions. The borrowed imagery of the word “unpacking” makes us think
about a travel bag or suitcase tightly stuffed with various items, each used
only after the bag is unpacked. Consider, for example, the expression "no
honor among thieves.” A good expression, like a good travel bag, this one
holds a lot of stuff that needs to be unpacked before we can put it to good
use. We need to think about each word, what goes into it, its context, and
how it helps explain a result.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_2

Honor
Historically, societies have created different approaches to get people to follow
through on things like promises, commitments, and day-to-day life (which
we discuss in detail in Chapter 7). One approach that has endured over time
is a thing we call honor. It’s a small word but with large connotations. Let’s
unpack it.
How people feel about outcomes, what economists usually call the utility
that people get from their choices, is actually a programmed response. We
begin as babies with sensory pleasures—how things taste or feel or smell or
sound or appear—and our reactions are natural and inherent. In subsequent
years, we “develop a taste” for certain things. This is the learned behavior part
of it. Combining our inherent reactions with developed preferences, we figure
out what a certain thing really gives us. Roller coasters might not be fun as a
toddler, but your teen self loves the experience. Your utility—the benefit you
receive from doing something—changes over time. This change is a well-
accepted part of life, and it’s helped along by reinforcing good choices and
good behaviors.
The same learned-taste approach can be used to convince people that they
don’t get pleasure from some things; and, therefore, they shouldn’t do them.
This applies to things like stealing, murder, and speeding through a crosswalk.
Stealing feels wrong, intrinsically. Also, society puts rules in place to reduce
people’s incentive to do it. Even if you can’t possibly be caught, you (hope-
fully) don’t steal. That stealing is “bad” is an inherent reaction, and society’s
rules reinforce it.
People are trained that consuming something that was stolen doesn't give
you the same amount of pleasure that you get when you acquire the same
good in a more appropriate manner (buying it,
making it yourself, bartering for it). One word that we use to explain the
lack of happiness we get from consuming ill-gotten goods is honor. In other
words: acquiring things by stealing or deception is not honorable, so this
approach should not give you pleasure.

Thieves
By definition, a thief is a person who steals things. So, the expression “no
honor among thieves” tracks. Let’s unpack it even more. This expression
might mean different things to different people, based on your pre-conceived
notions and individual biases.

Here’s a question: A 6-year-old boy takes another kid’s lunch money 1 day
in first grade. Does that make him a thief?
Before you answer, consider your own feelings on the following question.
When and how do we learn about the inappropriateness of things? Should all
first graders know right from wrong when it comes to stealing?
Let’s add more information: the boy’s teacher saw him take the lunch
money and told his father, who promptly sat the boy down on the stairs of
their house and had a calm conversation about how and why stealing is bad.
The father taught him the basic lessons of right and wrong, good and bad,
and other things we might ultimately call honor. The boy didn’t steal again
(and went on to coauthor a book on game theory). Most people would agree
the boy is not what they think of as a “thief.” He’s not the person the expres-
sion “no honor among thieves” is referring to. When we unpack Granny’s
expressions and try to use them to understand game theory, we can't be cavalier about the meaning of the words. We cannot assume an all-encompassing
definition of each word. We have to be deliberate about the application of
certain terms and clear about what is and isn’t relevant.
When Granny said, “No honor among thieves,” she was talking about
hardened criminals who also happen to be the basis of one of our best known
and beloved game theory games. On to the Dilemma.

The Prisoner’s Dilemma

If you’re a fan of police dramas, you’ll be familiar with the setup of this famous
game. A bank is robbed, and two people, Butch and Sundance, have been
arrested. There were no eyewitnesses because the perpetrators were wearing
masks. Police need to solve this case.
Hoping to get one of them to confess, detectives put Butch and Sundance
in separate rooms and offer each of them the same deal:

(a) If you confess and your partner does not, you’ll get 1 year in prison and
your partner will get 10 years.
(b) If you both confess, you’ll both get 5 years in jail.
(c) If neither of you confesses, you’ll both get 3 years because you resisted
arrest anyway. (This is a very important part of the game and its outcome!)

As a reader or a student, you're probably saying "well, obviously, neither will
confess. It's best for both!" Here's where game theory is useful. Butch and
Sundance are evaluating the same choice in separate rooms. They don’t know
what each other will do. The thought process goes like this:

“If I don’t confess and my partner does, I’m going to jail for ten years. If he
does confess, I’m a lot better off confessing too. Five is less than ten!
“If he doesn’t confess, I should probably still confess anyway, though. Either
way, I’ll be going to jail for three years or one year. One is a lot less than three,
and they’re both less than ten years. I’ll confess.”

Player one, let’s say Butch, has reasoned that he’s better off confessing regard-
less of what Sundance does. Sundance, in another room, thinks the same.
So, they both confess because it’s best for them, given what the other might
do. The Prisoner’s Dilemma’s famous result is that both players confess and
get 5 years, even though they could have both gotten 3 years. There was an
outcome that made both of them better off, but they didn’t get there.

Game Setup Matters


Butch and Sundance couldn’t talk to each other; they decided in a vacuum.
In this case, the setup of the game makes reaching a superior result difficult.
That’s why game theory is so great. We examine the setup (or structure), walk
through the players and their choices, and see how we arrive at an outcome.
Oftentimes, this outcome doesn't seem logical to the reader—but you're not
a player yet. Sometimes, the way a game ends up isn’t particularly ideal for
anyone involved. It’s just the solution.
Outcomes can be portrayed in words, or a little more formally in a payoff
matrix. Remember: game theory is all about players, choices, and outcomes,
known before we start playing. You can think of game theory as the art of
navigating a payoff matrix. The two choices the detectives gave the gentlemen
are illustrated in Fig. 2.1.
Using this visual representation of the game, we can see how each player
approaches his decision. If one confesses, the other would be better off also
confessing. Both choose to confess, and both get 5 years.
Why? Because neither player can be sure what the other will choose.
The setup prevents collusion. Butch and Sundance both (logically) prefer to
reduce their jail time (notice, the detectives never say they can walk free;
that’s not an option). So, each man will choose to confess.
Look at it another way: Butch cannot look at the outcomes in the payoff
matrix and say, “I’m going to get 3 years.” He and Sundance are making
choices (confess or don’t confess) that can lead to many outcomes, and the
final outcome depends on both players' choices.
Let’s practice using a payoff matrix. Look “up and down” in Fig. 2.1 to see
Sundance’s choices and outcomes. If Butch confesses—the left column in the

                                  BUTCH
                       CONFESS                DO NOT CONFESS

            CONFESS    BUTCH: 5 YEARS         BUTCH: 10 YEARS
 SUNDANCE              SUNDANCE: 5 YEARS      SUNDANCE: 1 YEAR

            DO NOT     BUTCH: 1 YEAR          BUTCH: 3 YEARS
            CONFESS    SUNDANCE: 10 YEARS     SUNDANCE: 3 YEARS

Fig. 2.1 The Prisoner's Dilemma, payoff matrix

matrix—Sundance will either get 5 years or 10 years. He'll choose to confess,
since 5 < 10. If Butch does not confess—the right column—Sundance will
get either 3 years or 1 year. 1 < 3, so he’ll confess again. Regardless of what
Butch does, Sundance is better off confessing. In game theory parlance, we
say Sundance has a dominant strategy to confess.
Run it the other way for Butch, looking right and left. If Sundance
confesses—the top row—Butch will either get 5 years or 10 years. He’ll
choose to confess, because 5 < 10. If Sundance does not confess—the bottom
row—Butch is choosing between 1 and 3 years. He’ll confess again. Butch
also has a dominant strategy to confess. The game solves at Confess/Confess,
with each man serving 5 years in prison.

Confound It!
The Prisoner’s Dilemma is a great way to start learning game theory. It is
well-structured, has a small number of easy-to-understand choices, a small
number of players, and the famous result that both players end up worse off
than they could have been if they both had chosen differently. Based on their
dominant strategies they will both be going to jail for 5 years, but we see from
the payoff matrix that they both could have gone to jail for 3 years. Annoying
result, but true.

Well… why didn’t they just talk about whether to confess before they
committed the crime? Aha, that’s a cooperative solution and we’ll talk about
that in Chapter 5. The implication is, if they could have cooperated, they
would have both chosen not to confess and they both would have gotten
the shorter, 3-year prison sentence… maybe. The problem: the decision to
confess is actually driven by self-interest.
Let’s prove that by loosening the rules a little bit. Suppose Butch knows
that Sundance will not confess. Maybe the police let him see the interrogation
and it’s clear he isn’t “going to break.” Butch will still confess; 1 year in jail
is still better for Butch than 3 years. You’d get the same result the other way
around, if Sundance knew with certainty that Butch was not going to confess.
As it turns out, the driving force for each player to confess is their desire
to minimize their own time in jail. The outcome is dependent on the other
person, but the choice is not.

Empathy and the Prisoner’s Dilemma

You can show the importance of self-interest in determining the outcome of
the Prisoner's Dilemma in another way. Suppose Butch and Sundance made
their choice based entirely on getting the best possible outcome for the other
person. They want to minimize their partners’ time in prison and don’t care
about their own outcome at all. This is tricky; we'll use the payoff matrix
again, but now each man makes the choice that's best for the other player.
Here we go: if Sundance confesses, Butch can either get him—get
Sundance—a sentence of 5 years or 1 year. Butch is still looking side-to-
side at his choices but he’s looking at Sundance’s outcomes. Butch would not
confess in order to get Sundance the 1-year sentence. If Sundance does not
confess, Butch can get Sundance either 10 years or 3 years. Butch would not
confess again and get Sundance 3 years. In this case, Butch—with his singular
focus on Sundance’s well-being—has a dominant strategy to not confess.
Now, let Sundance also be only interested in his partner’s outcome (and
remember, you’re now looking up and down). He too has a dominant strategy
to not confess and—voila!—they both get 3 years! If Butch and Sundance
play the game with an eye toward the other’s outcome, they are both better
off.
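If you want to verify the empathetic version for yourself, the same kind of sketch works with one change: each player now scores a choice by his partner's sentence instead of his own. (Again, our own hypothetical illustration, using the numbers from the story.)

```python
# Same matrix as before: (butch_choice, sundance_choice) ->
# (butch_years, sundance_years).
PAYOFFS = {
    ("confess", "confess"): (5, 5),
    ("confess", "dont"): (1, 10),
    ("dont", "confess"): (10, 1),
    ("dont", "dont"): (3, 3),
}
CHOICES = ("confess", "dont")

def empathetic_best_response(player, others_choice):
    """Each player now minimizes his PARTNER's sentence, ignoring his own."""
    if player == "butch":
        # Butch still looks side to side, but at Sundance's years (index 1).
        return min(CHOICES, key=lambda c: PAYOFFS[(c, others_choice)][1])
    # Sundance still looks up and down, but at Butch's years (index 0).
    return min(CHOICES, key=lambda c: PAYOFFS[(others_choice, c)][0])

for player in ("butch", "sundance"):
    picks = {empathetic_best_response(player, other) for other in CHOICES}
    print(player, picks)  # both players: {'dont'} -- stay quiet, no matter what
```

Flipping one index flips the dominant strategy: both players refuse to confess and land on the 3-year outcome.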
Before we pick apart the game or find limitations to its applicability, let’s
consider how useful the Prisoner’s Dilemma and thus game theory can be in
dealing with day-to-day situations. You make a lot of choices in situations
that involve one other player and a small number of choices. Most romantic
or familial relationships, transactions, and negotiations fit this description.

Here’s an example: on any given day, after dinner, you can choose to sit
and watch TV—your preferred choice if your partner does the dishes—or you
can help clean up. Your partner can make the same choices and has the same
preference. Even if you don’t want to watch the same thing on TV (which
might complicate the decision process), if no one ever cleans the dishes, you’re
going to be watching TV in a pretty nasty smelling house.
We make choices like these multiple times a day, every day—usually with
similar parameters. It’s a big part of adulting. Being completely self-interested
will probably get you into couple’s therapy or in search of a new partner.
While the therapist might help you communicate better, the only real solu-
tion is to find a partner who isn’t always self-interested. Empathy promotes a
solution that is better for all parties—but everybody has to get on board with
that approach for it to be effective.
What about Butch and Sundance? Well, they’re thieves first. The point of
the expression “no honor among thieves” tells us a lot. If you’re in a game
where your choice and the ultimate outcome depends on your belief in the
honor of the other player, their tendency toward empathy matters. Granny
was saying, if you’re playing with a thief, you can safely assume self-interest
will always drive the process.

The Payoff Matrix and Jollies


We’ve already done one important tweak on the Prisoner’s Dilemma game.
When Butch and Sundance chose without regard to their individual
outcomes and focused on minimizing the sentence of their partner, they
achieved the optimal outcome. In essence, their own time in prison didn’t
affect their happiness, but the amount of time their partner spends in prison
did. Each other's time in prison became the payoff that mattered.
The Prisoner’s Dilemma can be a little confusing since higher numbers
(years in prison) are bad. Let’s switch to something where higher numbers
are really good: pizza.
You consume pizza and that makes you happy. Economists try to quantify
that happiness in a measure they call utils. Since nobody knows what a util
is, we created our own measure: jollies. Your consumption of pizza results in
a certain number of jollies, and more is better. We swear this relates to the
Prisoner’s Dilemma. Bear with us.
Imagine you and your partner are going out to dinner at an Italian restau-
rant. You agree to order together and share. Say you (Amanda) really, really
like pizza with a side salad, while your partner (Jay) prefers a big salad with a
half slice of pizza. Your payoff matrix might look something like Fig. 2.2:

                                  JAY
                       PIZZA                  SALAD

            PIZZA      JAY: 25 JOLLIES        JAY: 30 JOLLIES
 AMANDA                AMANDA: 25 JOLLIES     AMANDA: 60 JOLLIES

            SALAD      JAY: 50 JOLLIES        JAY: 20 JOLLIES
                       AMANDA: 40 JOLLIES     AMANDA: 20 JOLLIES

Fig. 2.2 Dinner out, part I
In the top left and bottom right quadrants, you’re both ordering pizza or
salad. So, nobody’s really getting the whole meal they prefer. In the top right
corner, Amanda gets to order her favorite pizza and take a few pieces of lettuce
from Jay. She’s happy. In the bottom left, Jay gets to order the super big salad
and take a small slice of pizza from Amanda. He’s happy.
It turns out, we need some help to solve this game, and it’s right there
in the setup: you can’t order at exactly the same time. This is a sequential
game (which we’ll discuss in Chapter 6). Amanda takes a look and says “If I
want to maximize my jollies, I should choose Pizza (since 60 is the highest
number on the board). If I choose Pizza, Jay is choosing between 25 and 30
jollies. He’ll choose salad (30), and we both get what we want in the top right
corner).
It works the other way too: if Jay goes first, he sees his largest number is
in Pizza (50). He knows, if he chooses Pizza, Amanda will choose salad (40)
over another Pizza (25). The game solves at the bottom left corner.
(Pro note: if you’ve taken a game theory class before, you may be saying
“hey, this game has two Nash Equilibria,” and you’d be correct. We’ll talk
about that in Chapter 5).
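Amanda's first-mover reasoning is a small piece of what game theorists call backward induction, and it can be sketched too. The code below is our own stylization of the dinner game, using the jolly numbers from Fig. 2.2 (keys are Amanda's order, then Jay's order).

```python
# Our sketch of the sequential dinner game. Values are
# (Amanda's jollies, Jay's jollies); higher is better.
JOLLIES = {
    ("pizza", "pizza"): (25, 25),
    ("pizza", "salad"): (60, 30),
    ("salad", "pizza"): (40, 50),
    ("salad", "salad"): (20, 20),
}
CHOICES = ("pizza", "salad")

def solve_with_first_mover(first):
    """The first mover anticipates the second mover's best response."""
    def jay_reply(amanda_choice):
        return max(CHOICES, key=lambda j: JOLLIES[(amanda_choice, j)][1])
    def amanda_reply(jay_choice):
        return max(CHOICES, key=lambda a: JOLLIES[(a, jay_choice)][0])
    if first == "amanda":
        a = max(CHOICES, key=lambda c: JOLLIES[(c, jay_reply(c))][0])
        return a, jay_reply(a)
    j = max(CHOICES, key=lambda c: JOLLIES[(amanda_reply(c), c)][1])
    return amanda_reply(j), j

print(solve_with_first_mover("amanda"))  # ('pizza', 'salad') -- top right
print(solve_with_first_mover("jay"))     # ('salad', 'pizza') -- bottom left
```

Whoever orders first claims their favorite corner, exactly as the story describes.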
Jollies are just a placeholder for how much you prefer one outcome over
another. The numbers can be called anything. They are not prices or revenues
or costs, per se. They are a quantitative measure of perceived benefit in the
outcome.

Having numbers in a payoff matrix actually allows us to represent some


cool things, like preference. In this case, we see that Jay and Amanda would
be okay only having salad or only having pizza, but they clearly prefer a mix
of food. The numbers also indicate empathy: all jollies are positive, which
means there’s no resentment between Amanda and Jay if only one person
gets their way.
The number of jollies aren’t meant to be literal units of happiness. This is
just our approach to show a person’s preferences, or relative happiness, when
deciding.
[box]

“Can’t Buy Me Love”


Granny was a HUGE fan of The Beatles. Paul McCartney and John Lennon
probably didn’t realize when they composed it, but “Can’t Buy Me Love” is
applicable to game theory in a big way. Money doesn’t motivate people to make
choices very well. Beyond a certain point, people will have enough money.
Money isn’t usually represented on payoff matrices because it is only part of
the set of factors that impact happiness. Money isn’t love.

If you want to grasp the depth of this problem, ask yourself this: what is love?
This is going to be, for most people, what we call a Potter Stewart definition.
This former US Supreme Court Justice famously said, when asked to define
pornography, “I’ll know it when I see it.” Such is the case with love. You can’t
really define it, but you know it when you feel it.

Money and many other tangible possessions are at best a proxy for the things
that truly motivate people—the results and outcomes that we deal with in
game theory. Emotional motivators are ultimately more important. Trying
to motivate a player with money will not get you cooperative behavior if
you treat them with disrespect. Work-life balance has emerged as a more
influential factor for recruiting—more than base salary and additional compen-
sation. John and Paul were right, and we’ll come back to this issue frequently
throughout the rest of the book.

[end box]
Back to Prisoner’s Dilemma: Let’s restate the payoff matrix in terms of
empathy; each player now loses one jolly for every year the other has to spend
in jail.
The advantage of reimagining the payoff matrix with empathy (Fig. 2.3)
is that we no longer have to look at the other player’s outcome to make a
better choice. Butch sees: if Sundance confesses, I lose 5 jollies or 1 jolly.

                                  BUTCH
                       CONFESS                DO NOT CONFESS

            CONFESS    BUTCH: -5 JOLLIES      BUTCH: -1 JOLLY
 SUNDANCE              SUNDANCE: -5 JOLLIES   SUNDANCE: -10 JOLLIES

            DO NOT     BUTCH: -10 JOLLIES     BUTCH: -3 JOLLIES
            CONFESS    SUNDANCE: -1 JOLLY     SUNDANCE: -3 JOLLIES

Fig. 2.3 The Prisoner's Dilemma, restated

I’ll choose the lesser of the two evils, not to confess, and lose only 3 jollies. If
Sundance does not confess, I will either lose 10 or 3 jollies. I will not confess
and lose only 3. My dominant strategy is to not confess.
An empathetic Sundance will do the same and the game ends up with no
one confessing. Both players get the optimal, 3-year sentence.
Unfortunately, restating the payoff matrix in terms of jollies moves the
game into a much fuzzier place. It’s easy to say just make the choice that
minimizes your years in prison. It’s much harder to know how that relates
to happiness units. We assumed that every year in prison led to a decrease in
jollies of one unit. That’s just our estimate, but we can’t know that for sure.
Still, most games’ payoff matrices have to be done with jollies, and not
the years in prison, dollars, or whatever else the original game is structured
around. Jollies are what we ultimately get from our choices. Happiness is why
we, as humans, make decisions. If that makes games harder to play, so be it.
Even the most penny-pinching Grannies prioritized happiness once in a while.

Honor, Reputation, and Lunch

We’ve talked a lot about honor in this chapter, and we’ll talk about reputation
over and over throughout the rest of the book. Let’s add something to the
pizza story:

                                  JAY
                       PIZZA                  SALAD

            PIZZA      JAY: 25 JOLLIES        JAY: 30 JOLLIES
 AMANDA                AMANDA: 25 JOLLIES     AMANDA: 60 JOLLIES

            SALAD      JAY: 50 JOLLIES        JAY: 20 JOLLIES
                       AMANDA: 40 JOLLIES     AMANDA: 20 JOLLIES

Fig. 2.4 Dinner out, part II

Suppose Amanda and Jay are on their way to dinner. During the drive,
they have a chance to discuss what they’re going to get. Look at the payoff
matrix in Fig. 2.4.
Again, neither player has a dominant strategy and games like this are harder
to solve. One way that this game could be solved is by direct communi-
cation—someone simply says what they’re going to get. That should work,
right? Well, kind of. What if Amanda changes her mind when they get into
the restaurant?
Let’s add another factor. Amanda has a powerful sense of honor and she’s
always worried about her reputation. Specifically, she loses jollies if she lies
or, in this case, changes her mind. Sadly, this is not true of Jay (at least, not
when food is involved).
Amanda loses 20 jollies any time she lies about anything. She announces,
as they approach the restaurant, that she’s going to get pizza. Let’s incorporate
her honor—her potential lost jollies—into a new payoff matrix in Fig. 2.5:
Jay still doesn’t have a dominant strategy, but now Amanda does. She’s
going to order pizza. Knowing this (presumably knowing about Amanda’s
personal honor code), Jay is getting salad. This would have worked equally
well if Amanda had said she was getting salad. Salad would be her dominant
strategy and Jay would order accordingly.
Honor and reputation have real effects on our decision-making in the
real world. They are forms of commitments because they change the payoff
matrix. We’ll talk more about this in Chapter 6.

                                  JAY
                       PIZZA                  SALAD

            PIZZA      JAY: 25 JOLLIES        JAY: 30 JOLLIES
 AMANDA                AMANDA: 25 JOLLIES     AMANDA: 60 JOLLIES

            SALAD      JAY: 50 JOLLIES        JAY: 20 JOLLIES
                       AMANDA: 20 JOLLIES     AMANDA: 0 JOLLIES

Fig. 2.5 Dinner out, with honor

Honor and reputation are not exactly the same thing. Reputation is usually
applied to situations in which you expect to interact with the same player
again at some point in the future. For Jay and Amanda, that fits nicely: they’re
married. Reputation is a factor in repeated games. Honor is more internal,
more endogenous, and immovable. Reputation asks, "What do people think
of me?" and honor asks, "What do I think of myself?"
Turns out, reputation and honor also affect how the game is played when the
other player changes. Amanda may be more likely to impose the jolly penalty if she
lies to her best friend (reputation is the driving force) rather than a perfect
stranger she’ll never see again (honor is the driving force).

The Golden Rule and the Prisoner’s Dilemma


Granny must’ve said this a thousand times: “Do unto others as you would
have others do unto you.”
That’s a simple life strategy to be sure. The question is, when does it work
and (if it does work), why? This is the basic question we come back to time
and time again in this book. When you’re taught a solution to game-theoretic
situations: why and when do they work?
The Golden Rule actually helps Prisoner’s Dilemma type games. It can help
take a noncooperative, often repeated game with a dominant strategy solution
that is inferior for both players and bring it to a cooperative solution.
What does all that mean?

Suppose you are twelve years old and your sibling is thirteen. You
share a bathroom in your family home. After you take a bath, you leave your
towel and clothes on the floor and the bathroom is a complete mess. Your
grandmother, upon seeing this, says “do unto others as you would have others
do unto you." Being an inquisitive, polite person, you say, "Huh?"
Granny explains: … you know your sibling is going to use the bathroom
next and will have to clean up your mess before they can take a bath. Your
messiness has made the bathroom experience a little better for you, but worse
for your sibling. That's all well and good today, but what about tomorrow?
Your sibling might use the bathroom before you and they will follow your
lead, leaving the bathroom in equal disarray. Tomorrow, your bathroom expe-
rience will be the one that’s diminished (apparently your grandmother talked
like an economist…).

“But Granny,” you point out, “when we were all staying in that hotel in
Orlando a while ago everyone - including you - left the towels on the floor.
Why is that different?”

This would seem to be a “gotcha” moment, but Granny can explain the differ-
ence. In that case, there’s someone who is paid to clean the bathroom. Those
people would have to clean the bathroom thoroughly whether you left it a
mess or folded every towel. That’s not the same game and you don’t play it
the same way.
Let’s think about what else mattered in this messy bathroom game, specifi-
cally the age of the players. Age matters for two reasons: you and your sibling,
both young, are going to be around for a while. Logic tells you that if you act in your
own best interest in the short run, and the other player does as well, you will
both be worse off in the long run. You’re going to be sharing a bathroom with
each other or somebody else—for a long time.
Age also matters on the other end of the spectrum. Suppose the players are
both in their nineties. While you might get the same cooperative result, you
might also get a couple of people who correctly say, “what long run?” The
point: cooperative behavior is most effective (results in more good payoffs) as
a strategy when it’s adopted early in a long-term, repeated, sequential game.

[box]
Put Me In, Coach
In basketball, there are five players on each team, all trying to score more
baskets than the five players on the other team. Team A has five players of

varying skill levels who each shoot the ball to score every time they get posses-
sion. Team B has “role players.” Some are good at passing the ball, others are
good shooters, etc.

It’s not hard to imagine that Team B will score more points and win more often
than Team A. If everyone on a team gets jollies only from winning the game,
then we can see how giving (passing the ball instead of shooting the ball) on
the part of some players can lead to winning and thus to more happiness.

This particular example might feel like a bit of a reach. Game theory isn’t really
about sports, and there are a lot of strategies for success in team sports. But,
the deep logic of the Prisoner’s Dilemma is valuable. Players of different skill
learn that success comes from cooperation rather than self-interest. When the
five play as a team, rather than individual shooters, they reach their optimal
outcome.

This is the goal of coaches and mentors. Is winning everything, or is there a
greater purpose to team sports? Do you want one star player or a five-person
team? These are the questions of coaches the world over.

[end box]

Dog Walking: The Game


When are dog owners more inclined to pick up their pet’s poop: if they own a
home in a small, gated community? Or if they rent an apartment in a massive
complex?
As we’ve seen, you don’t need to specify the structure of the game explic-
itly. This is a Prisoner’s Dilemma-style game where the players have learned
that they are only better off cooperating—cleaning up their dog’s poop—
if everyone else cooperates. Your surroundings reinforce the correct decision
when you specify a payoff matrix with jollies. We don’t even need to provide
numbers: dog owners lose jollies when neighbors look scornfully at them for
letting their dog do its business in their front yard and not cleaning it up.
Homeowners frequently establish their own rules about such things just in
case your jolly calculator isn’t the same as the community norms. Sometimes,
they enforce these rules with fines. Do fines work as well as glares? As with
The Beatles, people are better motivated by their own standards, their own
jollies. In this case, jollies come through a sense of belonging to the commu-
nity, and belonging, like love, can’t be bought. People in a gated community

don’t smile at you or wave to you because you’re the richest person there.
They have to like you. So, you pick up the poop.

Timing
We’ve been playing games with different expectations of the timing of players’
choices. There are two obvious ways to arrange the timing of players' choices.
When players choose at the same time, we call this a simultaneous
game. When they choose in a pre-established order, we call it a sequential
game. As described, the Prisoner’s Dilemma is a simultaneous game. Both
players make their choices before they know what the other player has chosen.
Sequential games are ones like checkers and chess, where someone goes first,
then someone goes second, and that repeats until the game is done. We'll talk
about sequential games in detail in Chapter 6. If Amanda and Jay are going
to get food, that’s a sequential game. One person orders before the other.
Timing is usually part of the structure or rules established when the game is
first created. Interestingly, changing the timing from simultaneous to sequen-
tial (or vice versa) can alter outcomes from the same payoffs. In some cases,
it can help a game turn out better for all players.

Prisoner’s Dilemma as a Sequential Game

What if the Prisoner’s Dilemma was played sequentially? Recall that we were
really specific that Butch and Sundance choose at the same time in separate
rooms. It’s the reason we get to the “both-will-confess” outcome.
Or is it? Let’s play the game sequentially, and with self-interested players.
We haven’t done any sequential games yet, but this one is easy.
Suppose Butch goes first; he will choose before Sundance and his choice
will be known to Sundance. Butch is thinking about choosing “Don’t
Confess,” figuring that Sundance will then do the same and they’ll both
get 3 years. But ultimately, a self-interested Sundance will only look at his
own outcomes. So, if he knows that Butch did not confess, Sundance will
choose to confess and get a sentence of 1 year instead of 3 years. Knowing
that Sundance is likely to confess, Butch would never choose “Don’t Confess”
when he makes the first choice because he’d end up with 10 years, not three.
No honor among thieves.
Same thing happens if Sundance chooses first. (Pro Tip: run the last
paragraph through Fig. 2.6 again to see the numbers).

                                  BUTCH
                       CONFESS                DO NOT CONFESS

            CONFESS    BUTCH: 5 YEARS         BUTCH: 10 YEARS
 SUNDANCE              SUNDANCE: 5 YEARS      SUNDANCE: 1 YEAR

            DO NOT     BUTCH: 1 YEAR          BUTCH: 3 YEARS
            CONFESS    SUNDANCE: 10 YEARS     SUNDANCE: 3 YEARS

Fig. 2.6 The Prisoner's Dilemma, sequentially

If the players are self-interested, the result will be the same whether the
game is simultaneous or sequential. This result is the embodiment of Granny’s
expression “no good deed goes unpunished.” In games where other players
are entirely self-interested, any attempt on your part to reach the cooperative
outcome unilaterally will fail and you will actually end up worse off.
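You can check the sequential logic the same way we checked the simultaneous game: let Sundance best-respond to whatever Butch does, and let Butch anticipate it. (Our own sketch again, with the same choice names and the numbers from Fig. 2.6.)

```python
# Sequential version: Butch moves first, Sundance observes and
# minimizes his OWN sentence; Butch anticipates this reply.
PAYOFFS = {
    ("confess", "confess"): (5, 5),
    ("confess", "dont"): (1, 10),
    ("dont", "confess"): (10, 1),
    ("dont", "dont"): (3, 3),
}
CHOICES = ("confess", "dont")

def sundance_reply(butch_choice):
    """Sundance's self-interested best response to Butch's visible choice."""
    return min(CHOICES, key=lambda s: PAYOFFS[(butch_choice, s)][1])

# Butch picks the choice whose ANTICIPATED final outcome is best for him.
butch = min(CHOICES, key=lambda b: PAYOFFS[(b, sundance_reply(b))][0])
print(butch, sundance_reply(butch))  # confess confess
```

Even with the order of moves spelled out, self-interest drives both men to confess—moving the game from simultaneous to sequential changes nothing here.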

Repetition
Another timing factor that has to be discussed in the setup of a game is
frequency. Is this game going to be played just one time or will
it be repeated many times (perhaps even forever)? As we’ll see in Chapter 8,
this issue matters a lot in many games, and it can even push games like the
Prisoner’s Dilemma toward the cooperative outcome.
Why a whole chapter for repetition? Well, repetition is tricky. Will we play
the game repeatedly with the exact same players or will it be the same game
with new players? If Butch and Sundance literally play this exact same game
repeatedly and they know they will be put in the same jail together, wouldn’t
that incline them to make different choices? To communicate? To collude?
(or, just to steal better and stop getting caught?) We have to amend each
game to show new factors each time it’s played.
Prisoner’s Dilemma is a great framework for understanding the who, how,
and what of a game, and we’ll come back to it a lot in future chapters.
2 No Honor Among Thieves: The Prisoner’s Dilemma 37

Conclusion
As Granny said, a journey of a thousand miles begins with the first step.
Thus, we begin our journey into game theory. We worked through our first
game, the famous Prisoner’s Dilemma. We’ve talked about game structure—
the players, their choices, the payoff matrix, and timing. We learned how to
approach and solve a game and discussed dominant strategies, cooperation,
and honor.
We waded into deeper, harder issues like what’s really in the payoff matrix?
Is it the explicitly stated payoff, jail time in the case of the Prisoner’s
Dilemma, or the real payoff, jollies? Turns out, this game is applicable to
basketball, pizza, and walking your dog. We looked at games with many
choices and, in doing so, decided what’s for dinner.
With this foundation, we can discuss other types of games that feature
heavily in our lives, games that don’t solve out as easily. Most games have a few
possible solutions and thus need additional information to find a solution.
This type of game will allow us to define and employ the Nash Equilibrium—
a way of knowing if a particular outcome is "final." Sequential games have
players' choices made in a prespecified order; repeated games play the same
game again and again. All of these factors increase the fun in future chapters.
Game theory, for most people most of the time, is informal. Every day, you
are put in the position of making choices knowing that others are making
their own choices, and together you determine the outcome. The point of
game theory is to expand the information we seek, see the genius in game
setups, and better evaluate the choices that determine the outcomes in your
life.

Questions

2.1. You had an opportunity to get someone fired at work, which would have
gotten you a promotion and a nice raise. This did not involve lying or
doing anything inappropriate. Still, you chose not to do it. What aspect
of your payoff matrix told you to, correctly, not make the choice that
would have given you more money for firing someone?
2.2. In searching for a dominant strategy in a game, you look at your best
outcomes under all possible choices the other player could make. You
say to yourself, “if Bill does this, my best choice is that; if Bill does
the other, my best choice is whatever….” Which word in your thought
process is, arguably, the cornerstone of game theory?
2.3. You’ve been neighbors with Janet for 12 years and you can now success-
fully navigate most game-theoretic situations with her. Janet has a lot
of dominant strategies and you’ve figured out her payoff matrix. You’ve
been butting heads a bit with the new guy who just moved into the
house on the other side of yours. Why can’t you work it out with him
as easily as you work with Janet? What’s the difference?
2.4. Coordination games exist everywhere—even on a preschool playground.
Three- and four-year-old kids need a lot of guidance and intervention
to enjoy many of the things on a playground. When they are older,
however, kids become able to play what are called “pickup games”—
spontaneous games often with total strangers. Why can you do this as
a teenager but not a toddler? (hint: think about what you need for a
coordination game).
2.5. Too Many Choices. It can be a problem in many situations (not all
game-theoretic). Think about how you approach your decision about
what to have for dinner at a restaurant that has thirty choices. Do you
do anything to narrow your options? Having done that, do you ever
proceed to make it a game-theoretic decision? How did the first step
help the second?

Appendix 1: More Choices


An easy complaint about classroom game theory is that most of the games
only involve two choices and two players. Life’s games, one could argue, are
rarely as simple as the ones we analyze on the marker board (or, frankly, in
most of this book). The fact is games with many choices are much harder to
solve in the classroom and in real life. If the number of choices is too big,
people are tempted to throw up their hands and make decisions haphazardly.
When the choice set is large, people tend to stop evaluating payoffs and look
for other ways to choose (like what’s the fastest, first, clearest, or otherwise
easiest evaluation). This is certainly a strategy, but not a very good one.
Good news: the approaches we’ve developed can be used to solve bigger
games. In fact, we can turn a game with many choices into a game with only
a few legitimate choices that our human minds can understand. Let’s see how
that might work. And yes, we’re talking about food again.
Suppose a couple has agreed that each person will shop for part of dinner
on the way home from work, and they also agreed not to communicate; they
want to surprise each other. So romantic.
Sam is going to buy the vegetables and Dan is going to buy the protein.
Let’s suppose these people know each other well and they know their own
preferences as well as their partner’s preferences for all possible combinations
of protein and vegetables. Their potential payoff matrix is Fig. 2.7.
Ew, numbers, gross.
What’s your instinct when you face a complicated problem in life with
what appears to be too many choices? Well, you might try to eliminate some
of the choices you hate and make your menu smaller. Game theory takes the
same approach.
First, let’s take a look at a dominant strategy: is there a choice a player
would always make? Spoiler alert: nobody has a dominant strategy in this
game (you can run the matrix to double check, if you want). The jollies

(each cell: Dan's jollies \ Sam's jollies)

                   DAN
                   CHICKEN    BEEF      FISH      LIVER
 SAM   BEANS       2 \ -5     3 \ -5    0 \ -5    -1 \ -2
       ASPARAGUS   3 \ 5      2 \ 15    10 \ 10   0 \ -15
       PEAS        4 \ 0      20 \ 0    2 \ 10    1 \ -20
       BROCCOLI    15 \ 5     4 \ 10    3 \ 0     2 \ -10
       BEETS       6 \ -10    5 \ -10   4 \ -10   30 \ -30

Fig. 2.7 Winner, winner; what's for dinner—in jollies


from each meal combination are so intertwined, no single strategy emerges
for either player.
So, we’ll take another approach to solving this game: is there a choice a
player would never make?
Look closely at the bottom row. Sam is never going to choose beets. Look
at Sam’s jollies for any protein choice that Dan makes and there’s never a
case in which Sam would choose beets. In game theory parlance, beets are
dominated by all of the other choices. Functionally, that means we can cross
that row off. Act as if that row isn’t there. Beets are NOT on Sam’s menu.
That leaves us with a somewhat smaller version of the game, shown in
Fig. 2.8.
In this smaller game, we can see that Dan will (now) never choose liver.
The only time that he would have chosen liver was if he could pair it with
beets, but he knows (because he knows Sam’s jollies) that Sam will never buy
beets. Liver is also a dominated choice. We can remove that column. Sam can
also say that beans give no pleasure at all (-5 jollies across the board). So, we

(each cell: Dan's jollies \ Sam's jollies)

                   DAN
                   CHICKEN    BEEF      FISH      LIVER
 SAM   BEANS       2 \ -5     3 \ -5    0 \ -5    -1 \ -2
       ASPARAGUS   3 \ 5      2 \ 15    10 \ 10   0 \ -15
       PEAS        4 \ 0      20 \ 0    2 \ 10    1 \ -20
       BROCCOLI    15 \ 5     4 \ 10    3 \ 0     2 \ -10

Fig. 2.8 No beets for you!


can remove that row as well. The result of these dominated choices is shown
in Fig. 2.9.
So, we’ve been able to remove two rows (beets and beans) and one column
(liver) and make the game look and feel easier to work with. Note, this isn't
a sequential game; they are choosing—shopping—simultaneously, but each player
can certainly do these eliminations in a step-by-step way on their own.
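This pruning process is known as iterated elimination of dominated strategies, and it is mechanical enough to sketch in code. The snippet below is our own illustration using the jollies from Fig. 2.7; the data layout and helper names are ours, not anything standard:

```python
# Iterated elimination of strictly dominated strategies for the dinner game.
SAM_J = {  # Sam's jollies: SAM_J[vegetable][protein]
    "beans":     {"chicken": -5,  "beef": -5,  "fish": -5,  "liver": -2},
    "asparagus": {"chicken": 5,   "beef": 15,  "fish": 10,  "liver": -15},
    "peas":      {"chicken": 0,   "beef": 0,   "fish": 10,  "liver": -20},
    "broccoli":  {"chicken": 5,   "beef": 10,  "fish": 0,   "liver": -10},
    "beets":     {"chicken": -10, "beef": -10, "fish": -10, "liver": -30},
}
DAN_J = {  # Dan's jollies, same layout
    "beans":     {"chicken": 2,  "beef": 3,  "fish": 0, "liver": -1},
    "asparagus": {"chicken": 3,  "beef": 2,  "fish": 10, "liver": 0},
    "peas":      {"chicken": 4,  "beef": 20, "fish": 2,  "liver": 1},
    "broccoli":  {"chicken": 15, "beef": 4,  "fish": 3,  "liver": 2},
    "beets":     {"chicken": 6,  "beef": 5,  "fish": 4,  "liver": 30},
}

def strictly_dominated(choices, others, payoff):
    """Return choices that some other choice strictly beats everywhere."""
    out = set()
    for c in choices:
        for alt in choices:
            if alt != c and all(payoff(alt, o) > payoff(c, o) for o in others):
                out.add(c)
                break
    return out

vegs, proteins = set(SAM_J), {"chicken", "beef", "fish", "liver"}
while True:
    bad_v = strictly_dominated(vegs, proteins, lambda v, p: SAM_J[v][p])
    bad_p = strictly_dominated(proteins, vegs, lambda p, v: DAN_J[v][p])
    if not bad_v and not bad_p:
        break
    vegs -= bad_v
    proteins -= bad_p

print(sorted(vegs), sorted(proteins))
```

The loop removes beets first, which then exposes liver, which then exposes beans: the same order we walked through above, leaving the smaller game of Fig. 2.9.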
Dominated choices only get us so far, though. If we look at the resulting
subgame shown in Fig. 2.9, we see that Dan still doesn’t have a dominant
choice. If Sam chooses asparagus, Dan prefers fish; if Sam chooses peas, Dan
prefers beef; and if Sam chooses broccoli, Dan prefers chicken. So, let’s look
at the vegetables again and see if we can make headway there.
As it turns out, Sam now has a dominant strategy to choose asparagus (and
not a moment too soon). If Dan picks chicken, Sam is indifferent between
broccoli and asparagus (5 jollies in each case). If Dan picks beef, Sam prefers
asparagus outright. If Dan picks fish, Sam is indifferent between peas and
asparagus. Asparagus is either a win or in a tie through all three potential
combinations. Knowing Sam the way he does, Dan will figure out that Sam
is buying asparagus, Dan will buy fish, and dinner is as healthy as can be.
Notice, we were able to solve this game with only one player having a
dominant strategy (Sam) and what we called full information. Even though

(each cell: Dan's jollies \ Sam's jollies)

                   DAN
                   CHICKEN    BEEF      FISH
 SAM   ASPARAGUS   3 \ 5      2 \ 15    10 \ 10
       PEAS        4 \ 0      20 \ 0    2 \ 10
       BROCCOLI    15 \ 5     4 \ 10    3 \ 0

Fig. 2.9 No liver, no beans, no beets, oh my!


Dan doesn’t have a dominant strategy, he knows what Sam’s dominant


strategy is and he simply picks what’s best for him given that knowledge.
This process feels artificial to most readers; no one really does this, right?
Actually, you do it quickly and automatically every day. You reduce the
choices in your daily decisions to a small, solvable game by process of elimina-
tion. Familiarity is the only requirement for doing this. This example works
on an assumption of full information. Sam knows Dan, and Dan knows Sam.
It’s easier to narrow down multiple options when you’re working, playing,
living, and partnering with people you know.
Indifference is the key issue here. Indifference, or a “tie,” is a very real
emotion in daily decisions. We usually deal with this by engaging in compar-
isons to get to a conclusion that “feels” right. We’ve formalized the process
here with numbers in this example, but you do it ten times a day by
evaluating your emotions toward particular choices.
Lest you think this is too analytical to be realistic (game theory teachers
have probably all heard students say “no one really does that”), think about
how you shop differently when you are shopping for your family than you did
when you were shopping only for yourself. You almost certainly incorporate
other people’s jollies into your decision process now. Maybe you put the jar
of Nutella back more than you used to. Maybe you buy more apple slices
instead of Nutella, or Diet Coke instead of Mountain Dew. Indeed, what we
call “compromise” is nothing more than a streamlined, multi-choice game.
[box]

Our Modern World: The Dark Side of “Too Many Choices”


Multiple choices can seem daunting, frustrating, and confusing. Why do there
have to be so many options? Economists will tell you that variety is good,
economically. It gives everyone in a society a chance to find something that
makes them happy (maximizes their jollies), which lowers per-unit prices and
raises the jollies of society overall.

Some retailers have turned “too many choices” into a marketing strategy. The
retailer (or restaurant) has made an active decision to give you so many choices
that occasionally you buy the item they want you to buy. Store brands that
look almost identical to name brands are placed alongside their counterparts
on crowded shelves in the hopes that you grab the “wrong” bottle. We don’t
normally want to consider games in which one player counts on the other
player to regularly make mistakes, but the too-many-choices game might be
an exception.

Other retailers take the opposite approach: they remove choices for you. Large
chains like Costco and Trader Joe's famously have only 1 or 2 options for each
product. From an economics perspective, this means there is limited variety, so
quality must be higher to ensure consumers continue to buy. For the consumer,
though, removing too many choices is beneficial; the choice set returns to
"buy" or "don't buy," rather than a more complicated and stressful matrix.

[end box]

Go Up A Hill…

Having more than two choices doesn’t always mean the problem is difficult.
Consider the game in Fig. 2.10:
Jack and Jill are choosing between three items each: A, B, or C for Jill, and
X, Y, or Z for Jack. When we use the process that we learned to search for a
dominant strategy, we see that Jack does not have a dominant strategy—but
Jill does. Jill will always choose C. She is indifferent between A, B, and C
if Jack picks X; she is indifferent between B and C if Jack picks Y; and she
prefers C if Jack picks Z. The logic is tricky, but she’s always getting the best
possible outcome when she picks C.

(each cell: Jack's jollies \ Jill's jollies)

                   JILL
                   A          B         C
 JACK   X          10 \ 5     0 \ 5     4 \ 5
        Y          5 \ 0      1 \ 10    4 \ 10
        Z          -2 \ -4    20 \ 0    1 \ 4

Fig. 2.10 Jack & Jill


Surely the fact that Jill has a dominant strategy makes this game easy to
solve. In the Shopping Game, we saw that all we need is one player with a
dominant strategy plus full information, and we can find a solution. The
player who does not have a dominant strategy knows what the other player is
going to choose. But this game is a little trickier.
Take a look at Fig. 2.10 again. In this case, knowing that Jill will choose C
leaves Jack indifferent between X and Y. That means the best we can do is
say this game will end up at either 4\5 or 4\10 (Jack\Jill).
We’ve actually made this game trickier still. While Jack does not have a
dominant strategy, he does have a dominated strategy: Y. Regardless of Jill's
choice, Jack would never pick Y. If Jack’s first approach to playing this game
is to see if he has dominant then dominated strategies, he will find that he has
the latter. Then, when he discovers Jill’s dominant strategy, he will choose X.
But, suppose his first approach is to see if Jill has a dominant strategy. Then
the game could have either of two solutions 4\5 or 4\10.
On seeing this, we're tempted to say that Jack will steer the result to 4\10
because he's no worse off and Jill is better off. Two things work against that.
First, if we've specified the game correctly, everything Jack cares about is in
his number, and that means he is completely indifferent. Second, if he did
look for a dominated strategy before looking for Jill's dominant strategy,
he eliminated Y and thus made 4\10 impossible. There is no rule about what
you do first when you try to solve a simultaneous game. Everyone is different.
The point of this example is to show that these formal approaches to
game-theoretic situations—dominant strategies, dominated strategies, etc.—
still have potential potholes. You can learn to do games “right” and still find
games that are hard to solve.
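If you want to check the Jack & Jill claims yourself, here is a rough sketch (our own code, using the Fig. 2.10 jollies) that searches for a weakly dominant strategy: a choice at least as good as every alternative against everything the other player might do, and strictly better against at least one.

```python
# Jack & Jill: search each player's choices for a weakly dominant strategy.
JACK_J = {("X", "A"): 10, ("X", "B"): 0,  ("X", "C"): 4,
          ("Y", "A"): 5,  ("Y", "B"): 1,  ("Y", "C"): 4,
          ("Z", "A"): -2, ("Z", "B"): 20, ("Z", "C"): 1}
JILL_J = {("X", "A"): 5,  ("X", "B"): 5,  ("X", "C"): 5,
          ("Y", "A"): 0,  ("Y", "B"): 10, ("Y", "C"): 10,
          ("Z", "A"): -4, ("Z", "B"): 0,  ("Z", "C"): 4}

def weakly_dominant(mine, others, payoff):
    """A choice at least as good as every alternative against every
    opposing choice, and strictly better against at least one."""
    for c in mine:
        if all(
            all(payoff(c, o) >= payoff(alt, o) for o in others)
            and any(payoff(c, o) > payoff(alt, o) for o in others)
            for alt in mine if alt != c
        ):
            return c
    return None

jill = weakly_dominant("ABC", "XYZ", lambda j, k: JILL_J[(k, j)])
jack = weakly_dominant("XYZ", "ABC", lambda k, j: JACK_J[(k, j)])
print(jill, jack)  # -> C None
```

It finds C for Jill and nothing for Jack, which is exactly why Jack has to reason about Jill's play before he can choose.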
3
Know Thyself: The Players

Traditionally, we teach game theory by formalizing the structure of the
game—the players, the choices, the payoffs first. Then, we talk about
approaches for and possible solutions to various games. We assume that all
of that structure is identifiable and in general, independent. That means that
each player is choosing on their own, without any type of cooperation.
But players are people: human, living, breathing people with biases and
perceptions and attitudes and bad days. Sometimes, you’re in the mood to
play a long, drawn-out game (like a salary negotiation). Other times, you’re
hangry and you want a snack now. These preferences and tolerances directly
affect how a game is set up by changing the available choices and expected
payoffs. Let’s discuss the humanist issues that affect games and outcomes in
a variety of ways.
We started down this path in Chapter 2 when we said that the payoff
matrix really needed to be specified in jollies—happiness units—because
anything else (dollars, prison time, etc.) effectively assumes that all players
like, dislike, or are motivated in exactly the same way. Fundamentally, we
know people’s preferences are different. Economists have long worried that
measuring and modeling happiness is difficult, if not impossible. It’s why they
use utils and jollies to specify happiness. Doing game theory with anything
else in the payoff matrix will certainly have you playing games incorrectly;
you will not make the optimal decision, and you will not get the results that
you expect.
In all human interactions, we make assumptions about other players. We
project a value proposition on them in every activity (and, sometimes, we’re

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_3
wrong). Game-theoretic models are often inapplicable to real-world decision-
making because they've assumed away everything that makes us human. Why
is it easier to interact in a series of games with your friends, family, co-
workers, even members of your church, club, or organization? Well, you share
a source of jollies with these groups, and that gives you more insight into their
humanity.
Players and the payoff matrix are inexorably connected. Traditionally we
separate the two when we teach game theory, but that’s not the way things
work in the real world. Knowing the players in a game means knowing what
they value. Knowing what they value means you can more accurately evaluate
choices and outcomes before you play the game.

What Makes People Tick


In traditional game theory, we say all players are rational.
Well, there’s problem #1 right there.
The problem with the word rational is it overreaches. When we call
somebody “rational,” we think they are perfect, with an ability to make
unemotional, intentional decisions. This is the adopted meaning of the word,
what it has come to describe in the modern lexicon. In game theory, rational
means something very different.
The easiest way to remember what rationality means in game theory is to
recall it is a subdiscipline of mathematics. In math, rational means consistent.
So, let’s just restate the word rationality with something less academic: we
assume players have consistent preferences in their games.
What’s the difference? Well, somebody can make consistently bad choices.
They can always choose the most expensive item, the sugary drink, or the
trip with the most layovers. Somebody can be consistently risk averse or risk
tolerant. He can always act on emotion, rather than cold, hard facts. She
can go with the flow, never showing care or concern. When we say people
(players) are rational, we do not mean they always do the right thing. We just
mean they are consistent and predictable, which helps us better specify their
preferences and payoffs.
What does an irrational person look like? In a word: chaotic. An irrational
player does not have easily identified motivators, a set of heuristics, biases,
or preferences that allow us to construct an accurate payoff matrix. When
dealing with an irrational player, there isn’t enough information for you to
evaluate their jollies. To quote Granny: we just don’t know “what makes them
tick.”

Leaders, therapists, and significant others use this phrase more than
“rationality,” because we don’t expect others to be perfect. It’s a little
less analytical, but it still implies a person has an “order,” or a level of
predictability. When all is said and done, that’s all rationality means: a consis-
tent choice. It means that a person presented with the same choices and the
same circumstances will consistently make the same decisions. It does not
mean that everyone uses the same approaches or “gets jollies” from the same
thing. It does not mean that these choices are consistently bad or good. Only
that there’s a little predictability to the player that allows you to assess the
game.
Are your employees motivated by money, benefits, or mobility, or some-
thing else (like respect)? Is your significant other happy with your choice of
wall art, or is she just choosing not to fight that particular battle? Does your
boss actually like your idea, or is he placating you so you’ll stay? Commu-
nication is the key to many game-theoretic situations. It may not be verbal
(actually, it almost never is), but communication happens between players
before, during, and after a game. This transmission of information is the
lifeblood of game theory. It can reveal, conceal, reinforce, invalidate, and
directly affect the setup of a game.
What makes people tick?
Sometimes, you meet people who are motivated by things you can’t
comprehend. Remember, rational just means predictable. It doesn’t mean you
would choose the same things as another person. Their choices do not need
to seem “good” to you. Even if a person’s preferences are incomprehensible to
you, you can still play the game because you can specify it.
But, what if the person’s choices are all over the place? It’s not just that
your preferences don’t align, you don’t even know what the person wants.
Unpredictability, irrationality, a person who has no discernable sense of order,
is hard to play with. You can’t win if you can’t even figure out what it looks
like when they win.
Along these lines, a popular relationship-based cliché is “opposites attract.”
For magnets, sure. In relationships? Eh, not so much. The cliché is probably
Granny’s mistaken observation. It’s not opposites who are attracted to each
other, but complements. Players who are better described as complements are
arguably trying to reach a cooperative outcome that’s better for both parties.
Each player in the relationship is looking for a partner to fill in their gaps,
but they’re still trying to achieve similar goals.
People who are true opposites might find that a relationship is pretty
challenging. Every game-theoretic decision—and there are many in a rela-
tionship—would find the couple at odds with each other because payoffs are,
well, inverse. Interestingly, they might be able to predict each other’s choices;
exact opposites would assume the other player made the opposite choice from
theirs every time. But, it’s hard to imagine either of them would be happy
with the outcomes, even if they could play the game.

Better Safe than Sorry: Maximin


Until now, we’ve been navigating games based on what a person wants. The
direction: find the best possible outcome. Is this always how we approach a
choice?
Is your priority always to achieve the best outcome?
Sometimes, rather than making choices based on the best outcomes and
your knowledge of the game, you consider things like worst-case scenarios.
Even if one particular outcome is unlikely (or statistically shouldn’t happen),
you might avoid the choice that could lead to it just to make sure it can’t
happen. Instead of finding the best possible outcome, you’re trying to avoid
the worst possible outcome.
We call this strategy maximin; a fancy way of saying that you are making
sure the worst-case scenario cannot happen. (We’ll see this again when we do
sequential games in Chapter 6.)
Consider the game in Fig. 3.1:

(each cell: Player A's dollars \ Player B's dollars)

                   A
                   X                 Y
 B    X            $5 \ $5           -$10,000 \ $6
      Y            $6 \ $8           $10 \ $10

Fig. 3.1 A big loss


Two players, A&B, are choosing simultaneously and with full information,
meaning both players know the entire payoff matrix.
Player B has a dominant strategy to choose Y. Regardless of Player A’s
choice, Player B is better off choosing Y.
Player A doesn’t have a dominant strategy. If Player B chooses X, Player A
would choose X; if Player B chooses Y, Player A chooses Y.
With full information, Player A knows that Player B will (should) choose
Y, so Player A will also choose Y and the game should solve out with both
players getting $10, right?
Maybe.
This nice, neat, game-theoretic process puts Player A in a precarious
position. They are making the choice (Y) that allows the possibility of their
worst-case scenario. If Player B, for whatever reason, chooses X, then Player
A will end up with -$10,000.
Game theory academics say that won’t happen; players always will and
should pick their dominant strategy. Oh, but Player B is human. It could
happen. Player B might make a mistake or not understand the game or have
a different preference for money and think $10 k isn’t that big of a loss. Any
way you slice it, Player A could lose a lot of money, even though she shouldn’t.
Whatever the situation is, you can’t say Player A should always trust Player
B just because he has a dominant strategy. People go against game-theoretic
strategy all the time. Real-world probabilities tell us there’s always a chance
the worst thing can happen.
But, Granny already knew that—and she prepared you for it: “better safe
than sorry.”
She was specifically telling you to make choices that avoid the worst-case
scenario, even if it didn’t bring you or the other person the best return.
Note: it doesn’t have to be about the other player making mistakes playing
the game or wanting to see you suffer. We just can’t know with 100%
certainty what the other player’s payoffs really are. Any outcome—however
statistically improbable—is possible.
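Granny's rule is easy to compute. For each of your choices, find the worst payoff it could produce; then pick the choice whose worst case is least bad. Here is a sketch with Player A's dollar payoffs from Fig. 3.1 (the layout and names are ours):

```python
# Maximin for Player A in the Fig. 3.1 game (dollar payoffs).
# A_PAYOFF[(A's choice, B's choice)] -> A's dollars.
A_PAYOFF = {("X", "X"): 5,       ("X", "Y"): 6,
            ("Y", "X"): -10_000, ("Y", "Y"): 10}

def maximin(my_choices, their_choices, payoff):
    """Pick the choice whose worst-case payoff is largest."""
    worst = {c: min(payoff[(c, o)] for o in their_choices)
             for c in my_choices}
    return max(worst, key=worst.get), worst

choice, worst_cases = maximin("XY", "XY", A_PAYOFF)
print(choice, worst_cases)  # -> X {'X': 5, 'Y': -10000}
```

Choosing X guarantees Player A at least $5, while Y leaves the -$10,000 outcome on the table.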
Maybe we can get a little closer to understanding, though. Money doesn’t
incorporate all of the potential emotions that could lead to an unfavorable
outcome (like spite). But, jollies do. Jollies allow us to incorporate things
like risk aversion, caution, and sensibility. They also allow us to evaluate the
scale of preferences: how drastically different a particular outcome will make
somebody feel.
Let’s respecify the payoff matrix incorporating Player A’s fear of loss. This
will allow us to get the maximin outcome without messing up our game
theory approach.

(each cell: Player A's jollies \ Player B's jollies)

                   A
                   X                 Y
 B    X            5 \ 5             -10,000 \ 6
      Y            6 \ 8             5 \ 10

Fig. 3.2 Maximin Jollies

We’ve made two changes in the payoff matrix in Fig. 3.2. First, we’ve
converted dollars into jollies, and we’ve taken half of the jollies away from
player A in the bottom right corner. Why? If A chooses Y, they’ll face anxiety
(because there’s a possibility of getting − 10,000 jollies). This feeling factors
into a payoff matrix with jollies, since it accounts for the relative happiness
in each outcome.
Player A now has a dominant strategy to choose X, and Player B still has a
dominant strategy to choose Y. The game solves out at the bottom left corner,
the previous maximin solution. Player A gets 6 and Player B gets 8, and this
is an equilibrium (nobody wants to change). Correctly specifying and using
jollies, the things that make people tick, is key to using game theory correctly.
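We can verify the respecified game mechanically. In this sketch (our own code, using the Fig. 3.2 jollies), both players now have a strictly dominant strategy, and the game settles at 6 jollies for A and 8 for B:

```python
# Fig. 3.2 in jollies: (A's choice, B's choice) -> (A's jollies, B's jollies).
JOLLIES = {("X", "X"): (5, 5), ("Y", "X"): (-10_000, 6),
           ("X", "Y"): (6, 8), ("Y", "Y"): (5, 10)}

def strictly_dominant(idx, key, my_choices, their_choices):
    """Strictly dominant choice for the player at tuple index idx,
    where key(my_choice, their_choice) builds the matrix key."""
    for c in my_choices:
        if all(JOLLIES[key(c, o)][idx] > JOLLIES[key(alt, o)][idx]
               for o in their_choices for alt in my_choices if alt != c):
            return c
    return None

a_star = strictly_dominant(0, lambda a, b: (a, b), "XY", "XY")
b_star = strictly_dominant(1, lambda b, a: (a, b), "XY", "XY")
print(a_star, b_star, JOLLIES[(a_star, b_star)])  # -> X Y (6, 8)
```

Player A's dominant choice is now X, Player B's is still Y, and neither wants to change: the maximin outcome is an equilibrium once the payoffs are specified correctly.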

A Method to the Madness: Irrationality


Irrational behavior is present when a player doesn’t seem to have an ability
to make choices that align with any interests in a game. Arguably, this is
the very nature of mental illness. To an extent, any mental illness that renders
one player completely unpredictable makes a game-theoretic approach almost
impossible. For most of what we’re doing in this book, that’s just a limitation
for the application of game theory we take as given.
Societal rules come from well-accepted individual beliefs, and individuals
learn that most aspects of game theory work differently (or don’t work at
all) if a player doesn’t have an ability to make rational choices. Punishment
is intended to dissuade a player from making noncooperative choices. Basically,
if a player makes a choice that intentionally takes jollies away from
another player, society punishes him. A person who has no ability to compre-
hend rational choice, however, won’t likely respond correctly to the threat of
punishment. So, societies state that this difference in rationality should be
treated as a special case with different provisions. In many societies, there
are legal exceptions for those who make self-interested choices as a result of
mental illness.
There’s a lot of alternative social policy here and we’ll leave it to the reader
to think about this area using game theory, but let’s go in a different direc-
tion. The fascinating expression “There’s a method to his madness” invokes
the possibility that the irrational player is, in effect, gaming these provisions.
“Method” implies a predictable pattern. A person who has a method to their
madness might be what's sometimes called an "evil genius": someone who is
very good at being badly unpredictable. Arguably, that player is completely
responsible, just very different from most other players, and much harder to
play against in any game. You never know what he might do.

The Control Freak


Control is an ability to determine, or at least affect, outcomes. It sounds
like something that’s entirely outside the realm of game theory. After all, the
fundament of game theory is to explore situations where you alone cannot
determine the outcome.
There are choices that are unilateral, though. We make them all the time.
In a game, a person makes a choice, another makes a choice, and they
determine the outcome together. Unilateral choices and games usually have
different outcomes, different benefits. That’s the nature of having others
involved. Watching a movie by yourself isn’t the same thing as watching a
(possibly different) movie with someone else.
If the payoff is specified in jollies, in happiness, we have to allow for
the possibility that some people are happier making decisions alone. Some
people get jollies from the feeling of control. That is, interestingly, absent
from the formal realm of game theory most of the time, but it still affects games.
How?
Remember: you can simply decide not to play a game. Alternatively, you
can decide you will only play games in which you have a dominant strategy,
but recall, that does not assure that you control the outcome. The control
freak’s payoff matrix is structured such that, whenever possible, she will only
play games with people who allow her to determine the outcome. Here's the
thing: if you’re the control freak, you don’t usually recognize you’re acting
with this motivation. Control tendencies are usually recognized from the
perspective of other players who cannot seem to affect their outcomes.
If you’re playing against a control freak, you should recognize that your
choices don’t affect the outcome. Knowing this, you might not play. A few
examples: if Mary likes driving the car once in a while, she probably won’t
end up in a relationship with Harry, a notorious control freak. If your friend
never lets you choose where to eat, you might stop going (or stop caring
where you eat). You always have the option to not play the game.
[box]

Patriarchy, Hegemony, etc.


It’s easy to relegate the discussion of a control freak to the category of rare
exceptions or perhaps cautionary tales. Unfortunately, history has taught us
otherwise.

Many countries and cultures actively remove some of their citizens from the
game as a measure of control. Concepts like patriarchy, hegemony, and even
colonization are all situations where a small group of players usurps the
decision-making ability of a large swath of the population in order to maintain
control.

These approaches spring from the well-known concept of parental approval.


In many situations and real-world games, we don’t let children make their
own decisions independently. Repressive control mechanisms are as insulting
as their history. Treating other adults like children in a decision-making context
is wrong in more ways than we can describe. But, in a game-theoretic context,
it is quite literally imposing the decision-maker’s source of jollies on everyone
else.

Laws and rules that don't allow people to vote, work, drive, etc. are
systemic versions of the control freak. While the players who are allowed to
make all of the choices can tell themselves that they are taking care of everyone
else, the fact is, they are not ever allowing people to choose. As we have seen
and will see in repeated games, this policy often leads to inferior outcomes. We
will revisit this issue in Chapter 4 and connect it to information and access to
education.

[end box]
3 Know Thyself: The Players 53

You'll Understand When You're Older: Rationality and Age

An easy example of how rationality is relative can be seen when you think
of… well… your relatives. Your older relatives, to be precise.
We hope to all be there; we all get older and change. When a teenager
looks at mom trying to figure out TikTok, they might say "I'll never be so
incompetent.” Over time, though, there’s no way to know. Saying you “won’t
be like that” is assuming you even know what “that” is. You are who you are:
your age, your tastes, your concerns, your feelings. You probably won’t be
like “that” in the sense that you won’t be like your elders are right now, but
they—and Granny—probably said the same thing once upon a time.
Some would classify the actions of an older loved one as “irrational,”
but now we know that doesn't fit the definition in game theory. Priorities
based on age can be understood, and preferences of any predictable nature
can be worked into a payoff. You might try to understand the perplexing
approaches or thought processes of an older relative,
but remember: they aren’t irrational. They just aren’t…you.
Here’s a fun one: try to figure out a payoff matrix when you go out to eat
with an older relative. The answers might surprise you, but the process will
enlighten you.
If game theory is applicable to seniors, why not to babies? Well, as we saw
in the introduction, babies have not yet figured out how to make decisions.
They are purely reactive beings, responding to positive and negative stimuli
without an agenda. Babies won’t respond to game theory the way a learned
adult might.

Know Thyself
When we discuss sequential games in Chapter 6, we will include games that
you play with yourself. These are not the same as unilateral, controlled games.
Rather, we mean games where your decision today can change your outcome
of a future game. In essence, you’re playing against the needs of your future
self.
When we do this in game theory, we are not simply talking about growing
up, maturing, gaining experience, or other things that happen naturally with
age. We are talking about choices today that allow for different choices in
the future. Game theory for an individual is quite literally the fork in the
road. Every choice you make today can preclude or allow a future choice. As
Granny would say: “when one door closes, another one opens.”
Consider the curious situation that you are in when you play games with
your future self. You think that of all the people on earth, you most assuredly
know your preferences and payoffs. In fact, you do not. You know your-
self today; you do not know yourself 20 or 50 years from now. You don’t
know what will matter to you, your biases, your constraints. You don’t know
what will give you jollies. You don’t really know yourself in the future in any
meaningful way.
The proof of that can be seen if you look back over your life a decade or
two. Why is it easy to give your teenage son advice about education, drugs, or
sex? Because, as Granny might have said, you’ve “been there and done that.”
You can (hopefully) remember your feelings, what gave you jollies, and what
mattered to you when you were his age. You know how your choices at his age
changed outcomes for you, and you can give some sage advice. “Hindsight,”
Granny said, “is 20/20.”
It is easier to predict player preferences and accurately specify a payoff
matrix if you’re already past the time in life your players are experiencing.
Giving your kids advice is easier than dealing with your aging parents. You
simply haven’t gotten there yet.

Too Tired to Think


If acting on emotions is (sort of) the opposite of doing game theory, then
being “too tired to think,” might mean you are too tired to figure out the
structure of the game that you’re involved in and what your best choices are.
It happens. We’re human.
Why does fatigue make decision-making more difficult?
Emotions and tiredness change information processing. In game theory,
they affect your ability to accurately assess a game before you even play it.
Game theory is a rational, logical process as long as we recall that everyone
can have different ways of being rational and logical. It is really a brain-only
thing, and the brain works better at some times than it does at others.
Leadership training often includes recommendations like getting enough
sleep, avoiding multitasking and distractions, and controlling your emotions.
These are all good habits for effective leaders, but they are especially
important when you make strategic decisions. Why? Accurate and measured
information processing is key to evaluating all options and choosing the
optimal one.
When you’re “not thinking straight,” you tend to do things that are
mentally easier. Strategic decision-making depends on analysis and evalua-
tion. So, tiredness is strategically problematic because you might mis-specify
the players, choices, and payoffs. For example, you might think that everyone
you’re dealing with is just like you (has the same payoff matrix). You might
also fail to recognize there are more than two choices or that there’s another
motivation for the other player. A reduced ability to accurately evaluate and
set up a game often leads to bad outcomes. It’s why Granny told you to “sleep
on it.” Don’t try to make difficult decisions when you’re too tired to think!
Physiological factors can affect decision-making in a couple of ways. Pain,
for example, can make people less able to process things—since it reduces
concentration—and thus less able to make correct strategic decisions. Inebri-
ation reduces the ability to process potential consequences in the short- and
long-term, also leading to suboptimal outcomes. When we focus on the
human aspects of game theory, we can’t ignore the effects of physical health.
But, going down this path needs to be done with great care. Hormones are
often associated with things like overactive sex drive and rage. Just because
the science is valid doesn’t mean we can use biological excuses for bad deci-
sions. When the correct outcomes for all players are included in a payoff
matrix, game theory specifically never allows for a “hormonal justification” for
making bad decisions. On the contrary, every punishment in a game exists to
take away jollies from bad choices and reduce the incentive to harm the other
player. Correctly specified jollies de-incentivize bad behavior. Game theory
does not allow for biology as an excuse.

[box]
Revenge
There’s a well-known, rather cryptic, expression (usually ascribed as an Italian
proverb): “revenge is a dish that’s best served cold.” What does this mean?

One interpretation of this proverb is that reactions to other players' choices
should not be rushed. A player made a choice that gave you a bad outcome.
Revenge is a reaction to that choice. The nature of the word revenge implies a
sequence of games with the same player. The expression could mean exacting
your revenge quickly (and probably emotionally) will not be as fulfilling as
waiting until the situation or the payoffs from the game allow you to punish
the other player effectively.

All of this sounds pretty ominous, but it doesn’t have to be. Revenge can be
silly. If you’re building a snowman with your sibling and out of the blue, they
hit you with a snowball and snow goes down your back (usually the worst
outcome of a snowball sneak attack), you aren’t likely to get your best revenge
if you respond immediately; their guard is up. So, you laugh it off and wait
until they relax, maybe after they turn their back on you. Revenge, literally, is
best served cold.

Granny's often misinterpreted expression with the same meaning is "he
who laughs last, laughs best," or simply "getting the last laugh." This has
nothing to do with humor. This one is about repeated games. If you’re making
choices with the same player over and over, the expression tells you to exact
your revenge in the last game. Laughing is the jollies you get by taking revenge
for something, knowing the other player can never get back at you. There’s no
chance for them to take revenge on the revenge. We’ll talk more about this
motivation when we examine repeated games.

[end box]

In Hot Pursuit of an Irrelevant Success


There is an interesting complication of the payoff matrix and the concept
of rationality. Rationality is just consistency, an ordering of outcomes that is
applicable on a regular basis. That’s all well and good if the payoff matrix itself
is well-ordered. However, payoffs are personal things. We often find ourselves
wanting something for a reason that we can’t (or don’t want to) explain.
If your company’s softball team has never won the league championship
and they are in the finals for the 10th time, players might care a lot about
the outcome of that last game. They might care about it an awful lot. They
might care about it more than work or even home. The potential payoff, the
value of the “win,” might seem irrational to people who aren’t on the team
(spouses, other employees, etc.). When team members skip work meetings
or home duties to attend softball practice, they look like they are making bad
decisions. This is best understood as a misunderstanding of the payoffs that
they perceive. To the softball team members, winning the championship is
an affirmation. To others, it is an irrelevant success.
That problem carries over into many more spaces. Rankings, awards, and
rules for promotion can be forms of irrelevant success. Organizations spend
millions of dollars enforcing historic rules that might well be irrelevant for
actual profitability. If you get stuck trying to figure out another player’s ratio-
nality, think about the issue of relevance. What matters to you may not matter
to others (even a large group of others), and vice versa.
Suppose there are two restaurants close to each other with equally good
food. One focuses its resources on talent, ingredients, and ambiance that
earn it a Michelin star. The other spends the same number of resources on
marketing. Is it obvious which one will survive? Do the restaurants have the
same goal? (hint: do they have the same audience?)
Or, how about this one: success, to academics like college professors,
includes getting tenure. Roughly speaking, that means you have a job for life
at a college or university as long as the school and your department survive.
At many schools, getting tenure requires publishing several research papers
in academic journals that are peer-reviewed and highly ranked. The rules
for getting tenure were created many years ago when creative and research
freedoms needed formal protection.
Who’s the customer for a school? Students. Do students care about any of
the metrics that lead to tenure? Do they even know what they are?
What happens when the priorities of two players (in this case, a customer
and a producer) aren’t aligned?
Rules, rationality, and the payoff matrix are all interrelated. You will find
yourself in games where you are told the rules and you have to decide if you
even want to play the game. The choice can be difficult and problematic. For
better or worse, the rules were written by the people who got there first. You
can see something that looks like a lot of fun—looks like it will give you a
lot of jollies (like teaching), but the rules involve educational attainment (a
PhD in many cases) and other seemingly arcane metrics (published research)
that make the goal less fun and reduce your jollies. Knowing this, you might
become a Marketing Director and write books instead.
Historically, rules are set to establish and maintain some level of quality.
Teaching college means teaching concepts and approaches that are more
than your opinion. Aha! Light Bulb Moment: the irrelevance of your success
depends on its impact. If you teach people well and they live a better life, or
if your research solves a problem and it changes the world for the better, or
if the work team wins the Softball Championship, it may be worth it. If you
can trace the outcomes of your accomplishments to something beyond you,
then you did not spend your life in hot pursuit of an irrelevant success.
If mortality is the finish line for accomplishments, then, to paraphrase
Da Vinci: “a life well-spent brings happy death.” When all is said and done
(thanks, Granny), getting the last laugh might be the best outcome.
[box]

The Cover of the Rolling Stone


The hot pursuit of an irrelevant success was actually the point of a popular
song in the 1970s. Dr. Hook and the Medicine Show was a band that achieved
moderate success during the seventies when it released the song “The Cover of
the Rolling Stone.” Written by poet Shel Silverstein, the song laments that the
group won’t feel like they are a success until they have appeared on the cover
of the then-iconic music magazine Rolling Stone.

At the time of the song’s release, that cover was considered the height of success.
But, the long-term relevance of that achievement more or less died with the
demise of print magazines. Rolling Stone is still around, but it’s mostly an online
publication. “Cover of the Rolling Stone” is a great song with diminished
cultural impact.
[end box]

Sincerity
There are humanist things that affect game outcomes that can’t be easily
taught, but they are well-understood. When you’re trying to communicate
your motivations, what makes you tick, to the other player, you’re often put
in the position of saying things like “money doesn’t matter to me” or “I’m
not in this for myself.” Roughly speaking, you’re trying to get the other
player to believe that you want to make cooperative choices—as opposed to
self-interested choices.
When the other player believes you, they likely feel that you are sincere
or genuine. While these words have definitions, their application and real-
world use are usually informal. People "pick up on things" when they talk to
you. Executive training consultants say it’s eye contact or facial expressions
or body language—all of which can be associated with sincerity. (But, those
things can be taught, learned, and therefore mimicked. They may not be the
most reliable indicators of sincerity.)
“How you come across” is a critical part of the outcome of all games that
you play. As we have more interactions, play more games, and see “what works
and what doesn’t,” we are ultimately testing signals and learning. Hopefully,
many subtle signals of things like sincerity “work for us.” And that’s the key:
what works for one person doesn’t always work for others. Communicating
sincerity is often a matter of reinforcing some aspects of your personality and
softening or removing other things that make you seem self-interested.
A Sense of Fairness
Using jollies can help you put a lot of things into a payoff matrix and incor-
porate more of the human element into a game. There are concepts out there
that are more like feelings; they are things that are easy to say but hard
to define. One such concept is fairness. Being fair might be as generic as
“everyone gets to play by the same rules,” or it might be as personal as “things
turned out a lot better for me than they did for you and that makes me feel
bad.” The latter approach can be incorporated in the payoff matrix of a game.
Suppose the payoffs for a simultaneous (everyone chooses at the same
time), noncooperative (players can’t collaborate) game with full information
(everyone can see the payoff matrix) looks like Fig. 3.3:
To make this game easier, let’s assume that these players each get one jolly
for every dollar that they end up with. This game will end up with Person A
getting 3 and Person B getting 10. Person B has a dominant strategy to pick
Up; Person A does not have a dominant strategy, but they can see that Person
B will pick Up, so they pick Left.
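The reasoning above can be checked mechanically. Here is a minimal sketch (ours, not from the text) in Python; the dictionary of payoffs and the helper function are our own illustrative names, with payoffs written as (A's jollies, B's jollies), where B picks the row and A picks the column:

```python
# Fig. 3.3 payoffs as (A's jollies, B's jollies).
# B chooses the row (UP/DOWN); A chooses the column (LEFT/RIGHT).
payoffs = {
    ("UP", "LEFT"): (3, 10),
    ("UP", "RIGHT"): (2, 5),
    ("DOWN", "LEFT"): (3, 5),
    ("DOWN", "RIGHT"): (4, 4),
}

def b_dominant_row(p):
    """Return B's dominant row if one exists, else None."""
    cols = ("LEFT", "RIGHT")
    if all(p[("UP", c)][1] > p[("DOWN", c)][1] for c in cols):
        return "UP"
    if all(p[("DOWN", c)][1] > p[("UP", c)][1] for c in cols):
        return "DOWN"
    return None

row = b_dominant_row(payoffs)  # "UP", because 10 > 5 and 5 > 4
# Seeing that B will pick UP, A picks the column that maximizes A's jollies.
col = max(("LEFT", "RIGHT"), key=lambda c: payoffs[(row, c)][0])
print(row, col, payoffs[(row, col)])  # UP LEFT (3, 10)
```

Running the sketch reproduces the outcome in the text: B plays Up, A responds with Left, and the game ends with A getting 3 and B getting 10.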
Now suppose person B feels really bad—as in, loses jollies—whenever he
ends up a lot better off than person A. Specifically, B loses 6 jollies if his
winnings are more than twice A’s winnings. B is guilty.
That sense of fairness on the part of B will change where the game ends
up. To see that, let’s look at the payoff matrix, now only with jollies. We can

                        A
              LEFT             RIGHT
       UP     A: 3 jollies     A: 2 jollies
              B: 10 jollies    B: 5 jollies
  B
       DOWN   A: 3 jollies     A: 4 jollies
              B: 5 jollies     B: 4 jollies

Fig. 3.3 Joystick Choices


                        A
              LEFT             RIGHT
       UP     A: 3 jollies     A: 2 jollies
              B: 4 jollies     B: -1 jollies
  B
       DOWN   A: 3 jollies     A: 4 jollies
              B: 5 jollies     B: 4 jollies

Fig. 3.4 Joystick choices—with guilt

include this self-imposed punishment in the upper row of the matrix to see
how guilt affects B's jollies, as in Fig. 3.4:
This change doesn’t affect Person A; he still doesn’t have a dominant
strategy (we didn’t change his payoffs). However, it does affect B. He now
has a dominant strategy to choose down, and the game will now end up with
both players getting 4. B’s guilt changed the payoff matrix and therefore the
outcome of the game.
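B's guilt rule can also be written down directly. The sketch below (ours, not the book's; the function name is illustrative) applies the rule from the text, losing 6 jollies whenever B's winnings are more than twice A's, to each cell of the original matrix and recovers the Fig. 3.4 payoffs:

```python
# Original Fig. 3.3 payoffs as (A's jollies, B's jollies).
base = {
    ("UP", "LEFT"): (3, 10),
    ("UP", "RIGHT"): (2, 5),
    ("DOWN", "LEFT"): (3, 5),
    ("DOWN", "RIGHT"): (4, 4),
}

def with_guilt(p, penalty=6):
    """Apply B's self-imposed punishment in every cell where B wins big."""
    adjusted = {}
    for cell, (a, b) in p.items():
        if b > 2 * a:        # B ends up more than twice as well off as A
            b -= penalty     # guilt takes away jollies
        adjusted[cell] = (a, b)
    return adjusted

guilty = with_guilt(base)
print(guilty[("UP", "LEFT")])    # (3, 4): 10 > 2*3, so B loses 6
print(guilty[("UP", "RIGHT")])   # (2, -1): 5 > 2*2, so B loses 6
print(guilty[("DOWN", "LEFT")])  # (3, 5): 5 is not more than 2*3, no penalty
# Down now dominates for B (5 > 4 and 4 > -1), so A responds with
# Right and the game ends with both players getting 4.
```

The penalty only fires in the upper row, which is exactly what flips B's dominant strategy from Up to Down.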
There are people who would say B is illogical or irrational for not acting
in his own self-interest, but there’s nothing particularly wrong with basing
your jollies on things like how the other player might feel. We do this all the
time in healthy relationships. The choice was still self-interested: B chose the
option that didn’t make him feel guilty. When we use happiness, we have to
remember that it isn’t measured the same way as money, revenue, house size,
or other metrics. More/bigger isn’t necessarily “better” in everyone’s mind.

Conclusion
We run the risk of “losing” every game, making incorrect or inappropriate
choices, when we don’t understand what matters to the other players. The
easiest thing to do is assume that everyone is just like you. But game theory
isn’t easy (sorry). The very nature of doing game theory correctly involves
open-mindedness, a worldly perspective, and a lack of prejudice.
(We’ll see in Chapter 7 that prejudice can dramatically alter the outcome
of a game.)
In classroom game theory, there are usually straightforward games (the
traditional version of the Prisoner’s Dilemma), and difficult, complicated,
heavily mathematical games. Straightforward games are perceived as easier.
In the classroom, that’s correct. In the real world, where you have to figure
out “what makes the other player tick,” specify their payoffs, and balance
your own preferences, you realize all game-theoretic situations are tricky.
Players in important games are more than just other ages or other genders.
They are other countries, other cultures, other belief structures. It’s very
easy to be judgmental when you do game theory out in the real world.
Feeling superior because you always do what you perceive as the coopera-
tive thing might give you jollies, but it doesn’t always give you enough (extra)
jollies to compensate for playing a game naively or mis-specifying a game.
Understanding the player is step one toward strategic success.
As it turns out, what you don’t know can hurt you. Ignorance, in game
theory, is rarely bliss.

Questions
3.1. Two people meet while partying in Las Vegas. Over the course of an
alcohol-soaked weekend, they get married at the Chapel of Love (yes,
that’s a real place). One year, 2 months, 3 weeks, and 4 days later, they
file for divorce. Another couple is set up by their families in an arranged
marriage. Despite never dating before their wedding, they are married
for the rest of their lives. What game-theoretic information is different
between the two couples?
3.2. The point of learning game theory is to better understand and navi-
gate situations where you and another person are making choices that
together determine the outcome for each of you. Given that, it’s point-
less to study game theory if you assume everyone is just like you.
Consider this situation: a 65-year-old grandfather is taking care of his
14-year-old grandson. They decide to watch TV together. Grandpa is
only interested in a fishing show. Grandson only wants to watch a newly
released science fiction show. Neither of them has any interest in the
other’s show. How will they be able to decide which of these shows to
watch?
3.3. There's a well-known debate in many corners of society about the
primary determinants of a person's outcome—nature vs. nurture. Are
we the product of biology and genetics or are we the product of our
upbringing? We’re not biologists, so let’s restate the question in a game
theory context: does nature or nurture determine a person’s payoff
matrix? Put differently, are the sources of jollies learned or intrinsic?
3.4. Good long-term relationships (such as long, happy marriages) are
the product of well-played, repeated games. Learning what’s in your
partner’s payoff matrix is a critical part of playing those games well. Mel
and Nel have been living together happily for 30 years even though Mel
loves baseball and watches or listens to many games during the year and
Nel likes to crochet. How then could they have stayed together, happily,
for so long?
3.5. You’re going to break up with your boyfriend and you decide to do it in
a public place. Why? The outcome is the same. Why does “where” you
do it matter?
4 Fool Me Once: The Value of Information

Granny used to say: “a little knowledge is a dangerous thing.” No, she wasn’t
telling you to drop out of school. The key to this cautionary expression is the
word “little.” Granny was saying that too little knowledge about something
might cause you to misstep in the future. You think you know enough to
make an informed decision, but you don’t.
There are probably a dozen expressions extolling the value of knowledge
and information. In game theory, “who knows what” and how information is
used almost always determines the outcome of games. Information removes
uncertainty. Full information brings you closer to a full understanding of
game setup: the other players, your choices, and your outcomes.
Uncertainty happens in the absence of reliable information, and it creates
a problem in any decision-making process. As we saw in the last chapter,
the underlying challenge in game theory is figuring out what someone else’s
payoff matrix really includes—what they really value. In the real world, it
is nearly impossible to understand other people’s jollies, what makes them
tick, fully. Information (in many forms) becomes an important tool in getting
closer to clarity.
Let’s take a look at the types of information that can be helpful, how and
when we use them, and where you can leverage information across multiple
types of games.
Information is a broad term, and as we stated before, we have to be specific
about what we mean. In a game theory context, information is anything that
helps one player better specify the potential outcomes of a game. It could
be related to the preferences or motivations of the other player, the type
and frequency of the game at hand, and the choices available to all players.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_4
Because information is so impactful, game theorists use a lot of in-house
jargon and new terminology when talking about information-based issues
and problems. Because of that, we’ll be spending a fair amount of time in
this chapter restating Granny’s wisdom in terms of game-theory-speak.
In classrooms and game theory books, you are given all information up
front. It’s right there in the setup. There’s the game, up on the board, crystal
clear. Each player can see everything plain as day: payoffs, preferences, players,
and all available choices. In daily interactions, well… it’s not so clear. Most
games we play have some degree of asymmetric information, where one player
has more (and more relevant) information than the other. Information asym-
metry is at the heart of every job interview, used-car transaction, and first
date. Each of these games is an exercise in gleaning (or hiding) relevant infor-
mation from the other player and, in doing so, getting the game to turn out
better from your perspective.

The Fine Art of Information Sharing


"Never tell anyone outside the family what you're thinking," was The
Godfather Vito Corleone's warning to his son Sonny after he blurted out his opinion
to an outsider. Side note: if your Granny said this too, you may want to look
into your family history.
The default assumption in most games is that information known only
to you is more valuable because it can help you attain the best outcomes. If
information is private, it’s perceived as having more impact on the player. The
Godfather’s advice is to share information only with people that you trust,
like family. Otherwise, you run the risk of another player using information
against you.
Information is certainly a tool that can be used to manipulate, and in the
context of The Godfather, it was a matter of life and death. Hopefully, we’re
not engaging in daily dealings on this level, but Vito’s point is well-taken:
once information is out there, it can’t be “taken back.” Each piece of infor-
mation is part of a puzzle to how you, as a player, “tick.” Once it’s available
for all to see, it’s difficult to put it back in the bottle. Granny advises us to
“play it close to the vest,” and keep valuable information from other players
until you are certain of their intentions.
Information can be withheld or revealed depending on how that infor-
mation can be used by another player. The act of revealing information to
someone else is called a signal. If it makes you better off, you can signal
the information you have. If it doesn’t, strategic theory says you should
keep it private. (Spoiler Alert: there's a whole subdiscipline of game theory
called Signaling Theory. Everybody loves a good subdiscipline). Signaling
by a more-informed player can grant a lot of power in negotiation games,
especially when the information can impact the other player’s jollies in a
significant way.
If you’re the less-informed player, signaling has value as well. The less-
informed player will usually try to glean information to help reach their best
outcome. When a less-informed player asks questions or attempts to gather
baseline information to better her position, we call this a screen. You screen
the other, better-informed player to glean useful information. Every time you
fill out an application for a job, loan, or credit card, or you apply for college
or graduate school, or you fill out an application to rent an apartment, you
are being screened by the entity that you are trying to do these dealings
with. You know more than they do about yourself, so they’re trying to fill
in informational holes to ensure their player position is stronger.
Aha, did that confuse you? We bet you thought that the hiring manager
or company was the more informed player, right? Wrong! In hiring deci-
sions, the candidate knows much more than the company. Why? You know
you best. The company tries to screen you to get some information to tell
them if they should hire you. Keep in mind, information affects power, but
the two are not the same. Power is an ability to drastically alter the other
player’s outcome with your single choice. We’re not saying a less-informed
player doesn’t have power to influence the game. Simply, from an informa-
tion standpoint, a job candidate always has more knowledge about what they
bring to the table than the company looking to hire.
Here’s a fun tactic: bluffing. Bluffing is a signal that’s not true. It is false
information intentionally given to alter the game in your favor—and this is
problematic for both players. From the perspective of the less-informed player
receiving the bluff, there’s no way to confirm how true a bluff might be. The
player might not even know the true impact of the bluff, which means they
can’t accurately specify their own jollies anymore. If you think of the signal as
the total amount of information that you’re getting, a bluff is noise. It’s part
of that information that’s hard to verify, and the ability of a player to bluff
implies there's a lot of potential noise in these signals. In game theory and
acoustics, this is sometimes called a low signal-to-noise ratio, where a ratio
near zero means the message is mostly noise, unverifiable, and dangerous to rely
on.
In some circumstances, we are able to walk away from (not play) games
with a low signal-to-noise ratio. If calling the bluff (asking for
evidence) costs you a relationship, for example, you might not play the game
at all. On the other hand, we regularly play games that are based on bluffing,
like poker. Great poker players assume bluffing is happening, and their skill
lies not only in betting on a great hand but also in leveraging bluffing players
against each other. It takes a keen eye to see a pattern (and therefore infor-
mation) in a bluffing signal. Some would say it’s the whole point of the
game.
Signals are given to change the behavior of a player. If a person sending
a signal is not bluffing and she wants the signal to change the behavior of
the recipient, she will have to decrease or eliminate the noise to increase the
validity (trustworthiness) of the signal. This can be done by including some
form of proof to back up the claim. In day-to-day interactions, proof is more
than “I swear” or “I promise.” As Granny said, “talk is cheap.” When you
apply for a loan, you can swear that you’ll repay it, but the lender will prob-
ably look at your credit score or another metric to know with reasonable
certainty whether the loan will be repaid. You provide a signal, and your
credit report verifies it’s not a bluff.
Can bluffs happen on screens? Absolutely, we do it all the time. A bluff can
inject noise into a screen easily, since the less-informed player doesn’t know
whether the screened information is correct. It could be that he doesn’t know
the other player well enough to evaluate the answers for truth. It could also
be a processing problem: did you ask the right question? The person trying
to get information might also face a cognitive bias in his interpretation of the
screened information, based on many factors (do I like how the response is
worded? Is it cold in here? How long have I been screening candidates?)
“You can’t judge a book by its cover” is one of Granny’s cautions about
what your screened information is actually worth. She means you can’t tell
how good a book is by simply looking at the front; the cover doesn’t give
you enough information to accurately assess how good the book is. Granny
is cautioning you against an unreliable screen, lest you buy the book and hate
the content.
If the game is on a board in a classroom: yes, you have all the information
you need to solve a game. In reality, information gathering takes time, skill,
patience, and the ability to see through many human factors.
“Don’t judge a book…” is talking about the misuse of spurious visual
screens—making judgments about people based on their appearance. Because
of this, screening has become rightfully regulated or outlawed in many cases.
You aren’t allowed to use screens like ethnicity or gender to make decisions
about most things. Historically, screening on those sorts of things has been
used inappropriately to make decisions on hiring, housing, compensation
and promotion, mental toughness, and college admissions. Regulations have
4 Fool Me Once: The Value of Information 67

helped stop non-data-driven biases from creeping into these life-changing decisions that require much better information to assess accurately.
Of course, as a human, you can put any screens in place you like. For
example, if you’re an astrophysicist attending an international conference and
your personal screen is to only talk to people who look like serious, tie-
wearing, strait-laced scientists, you might avoid other people… like Dr. Brian
May, award-winning astrophysicist, published author, and lead guitarist of the
rock group Queen.
[box]

Consider the Source


When information is being shared openly and freely in an obvious strategic
situation, the first thing you should do is think about the type of game you’re
playing. For example, if you're in a "winner take all," what we call a zero-sum game, and the other player tells you what they intend to do, you should ignore
them. Why? In this case, the other player has no incentive to help you; it’s zero-
sum, win or lose. They are telling you information to mess with your head,
skew things to their favor, or otherwise influence the game to their advantage.
In zero-sum, any information conveyed by your opponent is a setup to try to
make you lose.

Consider tennis. Suppose your opponent yells, “I’m going to hit the ball to
the left side next time.” Why would they tell you that if it was true? They
wouldn’t. Why would they bother lying to you at all? Because they want you
to deviate from your current strategy that has you ahead 5-0 in the final set.

First and foremost, information sharing only makes sense if the game allows
the information to actually make a difference. In poker, it takes a trained player
to filter bluffs and signals that come their way. In zero-sum, it may take one
or two bad outcomes before you stop trusting all signals altogether. Even in
games that might be win-win, information can be used to move the outcome
in your favor or against you.

[end box]
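The tennis logic in the box can be sketched as a tiny zero-sum game. Here's a rough Python illustration (ours, not part of the original story, and the payoff numbers are made up): if you naively believe your opponent's announcement, they can exploit you, while ignoring it and randomizing guarantees you break even.

```python
# The tennis point as "matching pennies": the receiver wins (+1) if she
# covers the side the server hits, and loses (-1) otherwise.
def receiver_payoff(server_side, receiver_guess):
    return 1 if server_side == receiver_guess else -1

# Strategy 1: believe the announcement "I'm going to hit left."
# A server who knows you believe it will simply hit right.
believing_payoff = receiver_payoff("right", "left")

# Strategy 2: ignore the announcement and randomize 50/50.
# Against ANY server choice, the expected payoff is 0.
ignoring_payoff = 0.5 * receiver_payoff("left", "left") + \
                  0.5 * receiver_payoff("left", "right")

print(believing_payoff)  # -1: the bluff worked
print(ignoring_payoff)   # 0.0: cheap talk can't hurt you
```

The point isn't the arithmetic; it's that in a zero-sum game, free advice from your opponent is worth exactly nothing.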

Separating the Wheat from the Chaff


Granny was all about signaling. She knew that it’s “hard to tell a book by its
cover” and “actions speak louder than words.” She also told you that “you
will be known by the company you keep.” Sure, she might have told you this
to get you to think twice about your dear friends. The expression is great for
game theory too.
Suppose a young, highly talented basketball prospect is trying to choose
between two colleges that are equal in most ways. Each college's Athletic Director (AD) will likely tell her that she'll get a lot of playing time, the
team will promote her in the press, and she’ll start each game. Assume both
teams kept most of their starters from last year, and the colleges finished their
conference in first and second place, respectively.
How can the prospect choose which school to attend? Both coaches are
signaling what they think is useful information, and it’s virtually the same
for each school. Both ADs said the same things about opportunities to get
playing time and the likelihood of wins. The team makeups are almost iden-
tical, records are close, and the facilities available are state-of-the-art. The
schools are equidistant from the player and each program is offering her the
same scholarship.
What other information could the candidate use to break this tie?
What about the personality of the coach? Suppose Coach X has more
charisma, is more energetic, and seems more excited about the prospect of
getting the student on board. He calls all the time and makes a big show of
trying to recruit her.
The other coach, Coach Y, is a close friend and frequent confidant of one
of the most successful professional coaches in the country. This is not some-
thing he talks about openly, but everyone, including the prospect, knows
about it.
If the player believes that friends are a signal, and if she believes Coach Y’s
friendship is indicative of his commitment to basketball, she might choose
Coach Y’s college. The friendship is a signal, a new piece of information, and
it influences her decision and the outcomes of the game. The player is judging
the choice before her by the company each coach keeps.
Signaling and interpreting signals is complicated and personal.
We said that both coaches offered the same amount of playing time, but
an experienced player knows that outcomes—both individual and team—
depend on the amount of effort that all players put forth. Being a good
friend of someone successful is a positive signal, but so is being a charismatic,
inspirational coach. As it turns out, many of us “follow our gut” when it
comes to choices like this. We could have just as easily said the player values
enthusiasm as a force in leadership and chooses Coach X. Interpretation is
extremely personal, dependent on the player’s priorities and preferences.
There are infamous examples of charismatic, inspirational leaders who led
followers down a path to very bad outcomes. Pick your favorite repressive
dictator, for example. This is not a reason to ignore charisma, but it’s a reason
to put these two signals (charisma and friends) together. Most charismatic
leaders have friends—previous followers. Before you follow a charismatic
leader, you might want to talk to, or do a little research on, other people
who have made the choice that you are about to make. Granny’s advice is
even better now: you’ll be known by the company you keep.

What’s the Worst That Could Happen?


You’re back in high school (ew) and you want to ask a classmate to go out
on a date. You’re shy and afraid of rejection. Granny says, “there’s no harm in
asking; the worst thing is they could say no.”
This and other similar expressions sound completely innocuous. “There’s
no harm” makes it seem like nothing bad could happen by simply asking a
question. True?
Let’s try another example: you get pulled over by a police officer because
you were speeding. You’re asked for your license and registration. As you’re
fumbling around trying to find them, you ask in a friendly tone, “have you
ever had to shoot someone?”
Ooh, not the time, friend! Not the time. Why does it feel like such an
ill-advised question? It shouldn’t hurt to ask, right? Well, take a step back and
put yourself in the officer’s shoes. She was just doing a routine traffic stop, but
now she’s playing a different game. The question’s timing and circumstance
changed the perceived risk in the traffic stop. Asking a friend out at school
also changes the game, assuming you have to continue seeing the person in
class—but the risk to personal safety is much less.
What about asking out someone that you like at work (assuming this is
allowed)? There’s a colleague that you get along with and you’re thinking
about asking if he wants to go to dinner. You remember the advice from
long ago, “there’s no harm in asking, all they can say is no.” So, you ask and,
politely, and he says no. Is it business-as-usual on Monday?
Nope.
By asking the question, you changed the relationship forever. Maybe not
in a bad way, but all future interactions, all games, are now re-informed. Both
players know about your feelings and your willingness to act on them. The
question changed the game.
That all future games are changed doesn’t always mean there’s harm in
asking a question. A question is a type of information, and it always helps
players respecify a future game.
How do we address questions in terms of signaling? Maybe you don't ask the question directly; maybe you just "feel out" the situation. You could ask other questions (maybe of other people) to get the information you need. This is a
strategy to screen the other person’s position indirectly—and more cautiously.
When stakes are higher, like in a workplace or getting pulled over, you might
start with a smaller version of the game. In high school, it's "do you want to
go on a date Friday night?” At work, it’s “let’s go grab coffee sometime.” On
the side of the road, it’s “traffic stops are so stressful; here’s my license and
registration.”
In many game-theoretic situations, making the game smaller can often
make the game turn out better for all players. Recall the Maximin Game in
Chapter 3: human nature (jollies) often makes us shy away from choices that
could lead to a very bad outcome—even if the outcome isn’t likely. When you
break a larger game into smaller games, the potential negative payoffs become
smaller. As humans, we do this all the time: in personal relationships, finan-
cial investing, traveling, and gambling. You don’t ask her to marry you on the
first date; you don’t throw millions into an unknown hedge fund, you don’t
make your first trip overseas a 90-day transatlantic cruise, and you don’t go
all-in on your first poker hand. It’s what Granny would have called “pacing
yourself.”
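That "pacing yourself" logic can be sketched in a few lines of Python (our own illustration with made-up payoffs, not the book's): breaking a big game into smaller ones shrinks the worst case that the maximin rule worries about.

```python
# Each choice maps to [worst payoff, best payoff]; the numbers are
# invented purely to illustrate the idea.
big_game = {"propose marriage on date one": [-100, 100]}
small_game = {"ask for a second date": [-5, 10]}

def maximin_value(game):
    # Maximin: look at the worst case of each choice, pick the least bad.
    return max(min(payoffs) for payoffs in game.values())

print(maximin_value(big_game))    # -100
print(maximin_value(small_game))  # -5: the smaller game caps the downside
```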

Diluted Signals
The most straightforward signal is to communicate directly: say it, write it,
create a PowerPoint for it, or otherwise reveal it completely. This ensures the
information is received. However, direct communication doesn’t guarantee
the information is understood. When dealing with signals between humans,
we must be cautious that any signal communicates what we are hoping to
communicate.
Once upon a time, telling someone that you loved them was a very
powerful signal. These days, when “love ya” is something people toss around
mindlessly, the words don't necessarily convey a clear message from person to
person. Along the same lines, when you teach your child manners you teach
them to say please and thank you. These are good habits to be sure, but saying
please and thank you isn’t supposed to be a substitute for good behavior.
Those words do not allow you to ask too much of people. Well-mannered
words are not the same as good manners.
Granny cautioned us well to be wary of verbal signals. She said, “actions
speak louder than words,” and “talk is cheap.” Game-theoretically, these
expressions are absolutely true. Notice: she isn’t saying words don’t mean
anything, just that direct communication of some things can be interpreted
in many ways. People absorb and interpret information through their own,
individual lens, with their own biases, processing capabilities, and previous
experiences. Even direct communication—the best form of signaling and
information sharing—can become diluted and misinterpreted.

[box]
Curiosity: Educational Attainment
Formal education is important to build hard and soft skills for future career and
achievement. However, there are those who argue that educational attainment
is actually a signal of something more than formal skills. Being able to say that
you have an associate's degree, a certificate, or a bachelor's degree doesn't necessarily mean you're smarter than those who do not. It really means you were able to get into,
stay in, and graduate from a college program. The degree provides information
about your privilege, your work ethic, your commitment, or your ability to
learn. A potential employer or client takes the degree as additional screened
information, assuming the values of the college align with what they’re looking
for.

Like other signals, a formal degree has lost much of its informational power
over recent decades. Policies that make it easier to get into college and make it
harder to flunk out have skewed the information a degree provides. Advo-
cates for these policies are (rightfully) trying to improve life outcomes for
people based on data that show college graduates have higher incomes and
better achievement throughout life. There’s nothing wrong with wanting to
give everyone an equal chance. Unfortunately, these policies can also take away
part of the value of the signal, depending on how they are implemented. If a
potential employer no longer believes a degree is a major indicator of potential
and performance, she may not prioritize an applicant with the degree. Perhaps
she moves on to other indicators, like work experience or longevity with a
prior company. In this case, the policy intended to help actually removed some
opportunities, since it diluted what the degree represents. It has the opposite
effect of what policymakers may have wanted.
[end box]

Mixed Signals
“You’re sending mixed signals!” said Granny. In game theory, a mixed signal
is both a big deal and a real problem.
When players face asymmetrical information, a signal conveys information that the signaler wants the recipient to believe and use to solve the game
in the signaler’s favor. We talked about bluffing, purposely signaling some-
thing untrue to try to change the outcome. It’s a noisy signal, where the
interpretation depends on the believability of the signaler.
Mixed signals are a bit different. A mixed signal is a signal that has more
than one possible interpretation. And that’s key; how the recipient interprets
the signal determines whether it is mixed. Here’s an example:
A very smart high-school senior is trying to decide where to go to college.
She was accepted to many schools including the University of Chi-town—a
famously exclusive, academically challenging school, and Hoity-toity Univer-
sity—a famously elite (Ivy League), academically excellent school. The
student asked the opinion of a learned family friend. The student said “I
have to go to Hoity-toity, right? No one turns them down, and if I go there,
I’m set for life.”
The sage, wise, family friend responded, “If you choose Hoity-toity, a
future employer will think one of two things: either you’re smart or you’re
from a well-connected, affluent family. If you choose Chi-town, the only
signal you send is that you’re smart.”
These are just one person’s interpretations of the college-choice signal.
Others could certainly interpret them differently. That is the problem. Mixed
signals are subject to interpretation. The choice of a college has to be made
early in life, but what it signals lasts, more or less, forever and will be subject
to interpretation by people the student doesn’t even know yet.
Mixed signals also permeate the world of high-price, high-quality goods.
When you consider buying an ultra-expensive, high-quality watch, are you
communicating your sense of style or just your ability to afford something
extravagant? When you buy designer clothes, are you buying their style, fit,
and other attributes, or just the right to wear a brand name? Does the brand
convey that you are stylish or snobby? Or both?
Signals are subject to interpretation, and mixed signals depend solely on
the parameters of the receiver. Specifying a game depends on one player’s
ability to accurately identify what matters to another player, so they don’t
send a signal that can be interpreted negatively. When decisions are made
without knowing all future players, mixed signals can be risky.
[box]

K.I.S.S. Strategy

Granny didn’t mince words a lot. Sometimes, she said “come on, keep it
simple.” Some of us had a Granny that embellished it: “keep it simple, stupid.”
She was a peach.

We said the most straightforward signaling strategy is direct communication, where you simply tell the other player what you want them to know. This is the K.I.S.S. (keep it simple, stupid) strategy. If you tell the other player exactly what you want them to know, there's no way it can be subject to interpretation, right?

Look at it from the other side. If a player tells you something in a game-
theoretic setting, why do you believe them? To use information correctly, you
have to know the game and its payoffs. If you’re at a four-way stop perpendic-
ular to a stranger and they wave for you to go first, you don’t usually question
that information. You don’t even think about this very long because, well, why
would they lie? Mind you, there are possibilities; they might work for an auto
body shop, or they might get jollies from the sound of crashing cars, but far
more likely, they just want you to go first.

Now, go back to playing tennis. You’ve been hitting the ball back and forth
with the other player a few times. It’s a competitive game; you both want to
win, and it’s zero-sum. Just before she hits the ball back to you, your opponent
yells “I’m going to hit this one to your left.” Of course you don’t listen! It
might be true, sure, but this time, the other player has every reason to lie.
Your opponent is trying to "get inside your head," to change your strategy by giving you completely useless information. That's the nature of that game.

The K.I.S.S. strategy—direct provision of information—only works when there are no reasons for any player to bend the truth. Weirdly enough, in game
theory and in life, it’s rare to find that type of game. Information is almost
always shared to sway a player. Think of most communication that comes
from a parent of a three-year-old right before bedtime. Information—even
directly—is intended to promote better choices.

[end box]

The Winner’s Curse


You’re wandering the streets of Paris, and you happen upon a little art gallery,
Galerie De Frèd. As you’re perusing the paintings in a bin, you find an
unsigned piece that you think might be a Van Gogh. You offer the proprietor
100 Euros, and he happily accepts your offer and hands you the painting.
Did you buy a Van Gogh? Nope…you probably just bought a “Frèd.”
When a transaction involves asymmetric information, specifically when a
good’s value is known to a seller but not a buyer, a buyer will often overpay
for the good. In the case of the painting, any offer that the gallery owner
accepts will be greater than or equal to the true value of the artwork. The
gallery owner—the seller—is the only one who knows the true value. It’s
usually illegal for the seller to outright lie about the quality of a good, but he
can certainly hold back on key information, forcing the buyer to guess (aka,
lie by omission).
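If you'd like to see the curse in numbers, here's a small simulation of the gallery story (our own illustration; the uniform value distribution and the 100-euro offer are assumptions, not anything Frèd told us). Because the informed seller only accepts offers that profit him, the buyer overpays on average whenever a deal actually happens.

```python
# Simulate many visits to the gallery. Only the seller knows the
# painting's true value; he accepts your offer only when it profits him.
import random

random.seed(42)
offers_accepted, total_overpay = 0, 0.0
for _ in range(100_000):
    true_value = random.uniform(0, 200)  # known to the seller alone
    offer = 100                          # your 100-euro offer
    if offer >= true_value:              # seller accepts iff he profits
        offers_accepted += 1
        total_overpay += offer - true_value

# Conditional on the seller saying yes, you overpay (~50 euros on
# average with these assumed numbers):
print(total_overpay / offers_accepted)
```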
Some states or municipalities have tried to reduce information asymme-
tries by forcing sellers and buyers to disclose information. In California,
for example, all job listings must include the salary range for the position.
Revealing such an important factor ensures that the “seller,” the potential
candidate, does not accept an undercut salary. Incidentally, the policy also
helps the “buyer,” the recruiting firm, screen candidates who want more than
the intended range.
[box]

Asymmetry and Acquisition


In our gallery example, a key piece of information is that the painting was not,
in fact, an original Van Gogh and the seller knew this. How? Well, while it’s
true that the “true value” of art is not quantifiable, there is a bottom-line value
that’s usually easy to assess. A Van Gogh original is going to be worth millions
on the open art market, and it is unlikely that a bin in a corner store in Paris holds such a treasure.

There are other situations where a seller knows what the asset is worth to
him, but not to everyone else. Corporate acquisitions, where one company
buys another, are particularly susceptible to information asymmetries. Even
with mandated financial disclosures and legal discovery periods, the true assets
purchased are unknown until management assumes control.

Let’s say you develop and patent a new form of artificial intelligence (AI). It’s
new, it’s efficient, it’s cutting edge. You nurtured the project from start-up to
innovation to a successful first year, and now Microsoft wants to buy it. You
value your firm at $1 billion. Microsoft offers you $2 billion. You think you’ve
hit the jackpot, you sell, and Microsoft overpays for your company.

Not necessarily.

Your valuation of $1 billion is your assessment. It is based on your firm's ability to use your new technology (directly, for a particular project; or indirectly,
marketing it to other companies). A large firm with better connections, more
applications, and deeper marketing experience might be able to use your tech-
nology better. The value you created with your technology is $1 billion for
you, but it may be in the trillions for Microsoft. Did they really overpay? The
question will surely keep you up at night on your new yacht in the Bahamas.

In practice, companies often do overpay for acquisitions, because the ability to gauge value from the outside is difficult. It's hard to judge how useful
something is before you’ve bought it. Novelty, news, even international pres-
sure can skew the perception of value, pushing the purchase price up. In fact,
for publicly traded firms, stock prices usually go down for a while after they
acquire another company. The market almost always assumes they’ve made a
bad choice and overpaid.

[end box]
Let’s take one more example of mixed signal information that might hit
closer to home. You’re in the market for a car and you decide to peruse
Craigslist for some options. You see a great option, contact the seller, and
offer twice what the seller is asking. The seller is the only person who knows
how the car really runs, and it’s not that great. The seller laughs all the way
to the bank—what a sucker you were!
But wait, there’s more. You’re an expert in auto restoration, the daughter
of a mechanic, and a lifelong lover of cars. You transform that beat-up car
and sell it for five times what you paid. The information asymmetry, in this
case, put the seller at a disadvantage. The buyer knew more about the “true
value” after all.
The winner’s curse is a cautionary tale about the value of information in
a game. Information asymmetries can be used by either player in a game,
and they can result in a variety of outcomes. Remember, payoffs are uniquely
personal and to specify them correctly takes a lot of understanding on the
part of both players. Granny knew this. It’s why she said, even though you
prepare and think you know everything, “be careful what you wish for, you
might get it.”

Negotiations
One unique area of strategic decision-making is negotiations. Every negotiation is a well-choreographed dance of signals and screens. Like many other things we've discussed, the situation and setup of the game is rather specific, and classic: you and another player are trying to divvy up the end result of a cooperative solution. Your choices affect the other player, and theirs affect you.
You and your potential employer are negotiating your salary. Salary repre-
sents two things: for the firm, it’s the value that you will be able to create
if you work for this company; for you, it’s the value of your time spent
away from doing other things. Both your resources (skills) and those of the
company (assets) are required to create this value (some strategists call this
“synergy”). Given that, how much of the value do you get, and how much
does the company get?
A negotiation is a series of choices that help set up a game. The actual
game is your final choice: to accept or decline an offer. Through negotiation,
you reach a choice set where each player sees a specified outcome. If a mutu-
ally acceptable agreement can’t be reached, you won’t take the job and all
of the value disappears. Both parties move on and try to find an acceptable
agreement somewhere else.
From here, things get tricky. If—and this is a big if—both parties agree
on the amount of value that’s being created, deciding who gets what often
comes down to negotiating from what’s called a “position of strength.” That
usually means you both have a Plan B, something you can or will do if you
don’t get an acceptable outcome from the negotiation. Negotiations coaches
would call this a fallback, the best minimum, a BATNA (Best Alternative to
a Negotiated Agreement, for our negotiations friends).
It’s usually easier to negotiate a good starting salary with a new employer
if you’re already employed somewhere else. Your Plan B, your backup plan,
is to stay where you are. This minimizes negative jollies, especially if income
is your major motivator. Similarly, your potential new employer will usually
have more than one candidate for a job opening. They’re minimizing their
potential downside as well. There’s lots of gamesmanship in negotiating, and
some of it needs to be used carefully.
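To see why a Plan B creates a "position of strength," here's a back-of-the-envelope sketch (ours alone; the even surplus split and all the numbers are assumptions for illustration, not a rule of negotiation practice).

```python
# Salary negotiation with outside options. Each side keeps what it could
# get elsewhere (its Plan B), and the remaining surplus is split evenly.
def negotiated_salary(value_created, your_batna, firm_batna):
    # Surplus: value beyond what both sides could get without a deal.
    surplus = value_created - your_batna - firm_batna
    return your_batna + surplus / 2

# Say your work creates 200 (in $k) for the firm, and the firm's
# fallback (hiring its backup candidate) is worth 60 to it.
print(negotiated_salary(200, 50, 60))  # employed elsewhere at 50 -> 95.0
print(negotiated_salary(200, 0, 60))   # no Plan B at all -> 70.0
```

With these made-up numbers, simply having a job to fall back on raises your negotiated salary; the BATNA does the work, not the bargaining banter.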
Suppose you know the other candidate and you know something bad
about her (we’ll let you make the bad thing up). Do you tell the potential
employer? Do you say something in passing like, “I know you’re also talking
to Gladys, and I think it’s wonderful that you’re not holding what happened
at her last firm against her.” That might damage the company’s backup, but
it might damage you as well. You look like a gossip. The employer might not
want to hire either of you.
When a labor union goes on strike because its members want higher
wages, it sometimes forms a picket line around the company entrance saying
“this company is unfair” or “this company abuses its workers.” That might
convince the company to give in to the union’s demands, but it will also
hurt the company’s business if customers don’t want to shop there anymore.
Even if the union is paid more eventually, it might get what's called a Pyrrhic victory: a win that costs so much that everyone ends up losing.
Information sharing in negotiations is particularly tricky, because one form
of mixed signal or bluff can derail the choreography. The important take-
away is this: a negotiation gets two parties to the point of making a decision.
All information shared during negotiation brings players one step closer to
specifying payoffs for everyone involved.

Information, Education, and Patriarchy


In Chapter 3, we said that patriarchy and other discriminatory systems limit a
group’s ability to make a choice, to play a game. It’s the macro-societal version
of the control freak. When societal rules only allow one type of player to
make choices, it is taking what could (and should) have been a game-theoretic
situation and making it into a unilateral choice that benefits one player.
The justification for this often revolves around information or education.
Players in power often say the oppressed group is not capable of deciding
for some reason. The implication, sometimes reinforced by other structural
factors, is that the disempowered people aren’t well-enough informed to be
allowed to make choices.
Socially, personally, politically, emotionally—there are many ways this
approach doesn’t make any sense. But let’s indulge the insanity for a moment.
If a society is removing choice on the basis that the repressed group can’t
possibly know how to make a good choice, this is still not a long-term solu-
tion. Economists sometimes refer to this as a second-best solution, meaning it
does help solve the game, but the outcomes (payoffs) are not as good as they could be if both players acted with full information. The better way—
the way to improve society over time—is to fix the information problem:
allow all players to get an education and become better informed, and then
stop making choices for them. This approach allows everyone to play the
game for themselves and should maximize jollies for all players.
Social structure is usually discussed in areas like political science and history, but things like racism, sexism, and policies on education (information) fit very neatly into game theory. After all, games are just decisions
where two players determine a joint outcome. Most, if not all, of society’s
interactions (within and between cultures) depend on another player of some
sort.
History has shown us that systems with hard and fast rules based on cooperative principles are more likely to have better outcomes, even longer
survival rates. Dictatorships rarely thrive beyond a certain point because a
lack of information stunts payoffs and growth.

[box]
Our Modern Society: Fake News
One of the more problematic aspects of AI is the ambiguity it creates about
the accuracy of information. Maximizing your outcome in a game requires
accurate information, something not skewed by lies, personal interpretation,
or other biases. Easily created—and easily disseminated—false information,
through the use of AI, chat, media manipulation, etc., makes verification diffi-
cult. Even with direct communication, doubt can cause misinterpretation of
available information.

When faced with fake news, decision-makers turn to alternative approaches to help inform their choices. We start questioning facts and discrediting sources
based on various factors. Personality traits like charisma, charm, and other
subjective criteria are used to correct a perception of noise—even if the facts
are proven. Indeed, the reliability of the messenger becomes a substitute for
the reliability of the message. This has long been a tenet of public relations
and politics, but it has emerged as a leading force in everyday life.

Strictly speaking, there’s nothing wrong with making a choice based on person-
ality; we do it all the time. But when personality trumps all other sources of
information, a game may be mis-specified and choices might turn out poorly.

[end box]

The Pitfalls of “Reply All”


We’ve all done it. You hit Reply All, write your message, hit send, and
immediately regret your entire life.
Sometimes, we signal information inadvertently. The convenience of
modern communications is undeniable, but there’s a clear danger of revealing
things to the wrong people at the wrong time. How many people look at an
entire email or text stream before adding to it and sending it to the entire
family or the entire company? In this day and age, do we realize that every
email and text is permanent?
In the next chapter, we’ll contextualize the expression “what you don’t
know can’t hurt you” when we talk about threats. As we know from the
Reply All Conundrum, what someone does know, in particular about your
feelings and opinions, can hurt you. It’s inadvertent because you didn’t mean
to share the information, but you did. It’s out there. And, as we know with
information, you can’t take it back.
Once a sibling or your boss knows what you really think about them, you
can’t play any future game the same way. Keeping some things private is crit-
ical to achieving certain outcomes. This isn’t a lie by omission. It’s keeping
information that would not lead to optimal outcomes out of the game.

Conclusion
Every game-theoretic situation depends on information. You can’t begin to
play unless you know who you’re playing against, what your choices are, and
what the payoffs could be. Furthermore, and usually trickier: you need to
have some sense of the other players’ choices and payoffs so you can leverage
information to your advantage.
The list of expressions that relate to information is long. “You can’t
kid a kidder” means you’ll have a harder time successfully manipulating an
information set if the other player is more experienced at doing that very
thing. “Backstabbing” conjures images of people saying one thing but doing
another. Doing things “behind your back” means the same thing; you can’t or
don’t see it coming. Maybe it’s because of the player, maybe it’s their payoffs,
but these strategies are the result of manipulating an information asymmetry
to your advantage.
The discussion of information in game theory also includes many of the
problems that we associate with prejudice and intolerance. When we make
bad assumptions about people based on appearance, culture, age, gender, or
demographic, we find that outcomes are often worse for everyone involved.
This decision-making problem often has people aligning attributes
with different motivations: if someone doesn’t look like me, they probably
don’t get jollies the same way I do. This can cause you to mis-specify a
game in a bad way. Maybe you and your much-older aunt like donuts the
same way.

If you play the Prisoner’s Dilemma game with a stranger who looks
different from you, your choice might be the self-interested choice because
you’ve assumed that’s what they will choose. You don’t know them, and they
don’t know you. Human nature says to defend yourself first. When it turns
out they weren’t as prejudiced as you were and they made the cooperative
choice, they got the worst possible outcome. Going forward, how do you
think they will play the game the next time they encounter someone like you?

Questions

4.1. Some fables fundamentally change with technological innovation. In
the “Boy who Cried Wolf,” the kid was ultimately eaten because he
summoned help too many times when there was no threat. What could
the boy do with today’s technology to reinforce his cries that there really
is a wolf?
4.2. Let’s look at regulation. Asymmetric information always gives the better-
informed player an ability to steer the outcome of a game in his favor.
In an unregulated world, a really good art forger could get really rich.
She can sell well-done forgeries of famous artists’ works for millions of
dollars. In the real world, how do we stop this from happening?
4.3. “Walk a mile in my shoes” is a good reminder that information changes
outcomes in ways that both players—especially the less-informed
player—should consider. Another popular expression, lifted from foot-
ball, is when a leader says, “we need to be more focused on blocking and
tackling.” Why might that suggestion not bring about the desired outcomes?
(No one likes doing that!)
4.4. Bluffing is part of the game when you’re playing poker. It’s part of the
strategy and part of the fun. Lying in other games is much less acceptable
and definitely not fun. Why, for example, can you lose your job if you lie
about your qualifications? (Hint: think about the surgeon who is about
to remove your appendix).
4.5. Many human attributes are difficult to signal, but knowing could
relieve some information asymmetries. Can you think of establishments,
organizations, or groups that exist to solve this problem?
5
Do Unto Others: Cooperation in Games

We’ve invoked the concept of cooperation quite a bit. Cooperation, in its
truest form, is something that works only in certain games and under certain
circumstances. This will seem strange, but you can get cooperative outcomes
in games that, strictly speaking, we don’t think of as cooperative games. We
know that because we just saw it. There’s a way to get a cooperative result
in the Prisoner’s Dilemma game, and that is most decidedly not set up to be
cooperative.
Zero-sum games, which we described in Chapter 2, are games that have a
winner and a loser. Most competitive sports, board games, job competitions,
and Tic-Tac-Toe fall in that category. Cooperation doesn’t align with the
outcomes of zero-sum games because, well… they’re zero-sum. There has to
be a winner and a loser, and that doesn’t incentivize players to be cooperative.
Cooperative games are ones in which cooperation is possible. Players
have an ability and opportunity to observe everyone’s choices and identify
a mutually beneficial outcome (what we sometimes call “win–win” games).
Cooperation is a term that we should unpack. In the modern lexicon, it
refers to interactions that focus on mutual benefit, maybe even synergistic
benefit (where the cooperative solution is better than anything any one person
could have accomplished on his own). In game theory, we don’t make any
assumptions about the level of the outcomes. We don’t say that a cooperative
game has a better outcome than a noncooperative game, because these are two
completely different types of games. Rather, we say that cooperative games
have outcomes that are non-binary: there can be two sets of positive jollies,
two sets of negative jollies, and/or a mix of each, depending on the setup of
the game.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_5

Learning and understanding game theory is all about figuring out what’s
possible in the realm of human interaction. Arguably, we are born mostly self-
interested. We’ve never met a baby who decided to stop crying just because
mama was a little tired. In practice, we have learned that cooperative behavior
usually leads to better outcomes for everybody. In a game-theoretic sense,
cooperative behavior means players end up better than they would have
playing the game in a self-interested way.

The Game Within the Game


It never failed: you were ready to head out to show off your new bicycle to
your neighbor. She was going to be so jealous! You pack up, head out the
door, only to hear Granny call out, “play nice!” So much for being super
competitive. Looks like bragging will have to wait.
Let’s say there’s a basketball coach attempting to create a “team mentality”
in her players. She’s trying to raise the level of cooperation among teammates.
The goal: to ensure the team collaborates toward the same goal (winning),
that they share responsibility, and that they capitalize on each other’s strengths
to get the win.
But, wait… Basketball is a zero-sum game, right? One team wins and the
other team loses. How can cooperation enter the picture? True, the basketball
game itself is zero-sum, but the team makes choices of how to play, and these
choices vastly affect their payoffs. This is what we call a subgame, or a game
within a game.
Is each player in the right place at the right time? Are they making the
right choices (staying within their role on the team)? Is winning the most
important thing to every player? Is everyone doing their job, or are some
covering for others that are falling short? These are all elements of cooperation
that affect the outcome of the basketball game, even though it’s technically
zero-sum. They are also things that could affect the outcome of future games.
If the team wins this game but team members cooperate badly, it could affect
future outcomes and the overall season.
Suppose there is an exceptionally talented player on the team, and she
wants to be noticed by the people (often called scouts) who are choosing
players for the next level (a college player looking to go pro). She knows
she’ll have to appear to cooperate, or she won’t get any playing time. She also
knows scouts are looking at the individual. Ultimately, she needs to show her
individual skills to fulfill her personal goal of playing at the next level. This
will affect how she “cooperates” with her team members and the outcome of
the game. If she’s looking to be recruited, but her teammates are not, she is
really playing a different subgame.
Cooperation is a trickier concept in game theory than it is when we
invoke the word “cooperate” every day. Making a game-theoretic coopera-
tive choice is making a choice that usually goes against any single player’s
best interest unless other players also make cooperative choices. Put another
way, cooperation only leads to good outcomes if everybody does it. To that
end, cooperative games need a fully observable set of payoffs, so everyone is in
the know. It also helps to have a way to enforce cooperation or de-incentivize
noncooperative behavior in some way.

Cold War Cooperation


A game’s structure has to allow cooperation to exist and work.
Nuclear disarmament is a nice, uplifting example. Let’s start there.
Suppose there are two countries—Us and Them—and both countries have
nuclear weapons sufficient to obliterate the other. If each country knows
about the other’s capabilities, this can lead to a stalemate: neither side can
use its weapons knowing obliteration is likely.
Why does a stalemate matter? Well, in a classroom game, we don’t account
for silly things like costs of maintenance or upgrades. The game’s choice set
becomes “fire, don’t fire,” and the payoffs are usually specified to show a first-
mover advantage. In the real world, maintaining a functional and state-of-the-
art nuclear arsenal is quite costly, even if you never fire a rocket. A stalemate
is rather pointless and very expensive. So, how do you reverse those costs?
Simply disarm! Aha, that’s another game entirely.
Let’s formalize this game with some made up numbers so we can visualize
the situation. Our units are jollies but on a national scale. Remember, jollies
are relative units of happiness for each player; in this case, each nation.
Let’s break down the payoffs shown in Fig. 5.1:

• If one country disarms and the other does not, the remaining country
becomes a world superpower, militarily. The disarmed country’s jollies are
lower, since the dominant country can basically control the rest of the
world.
• If neither country disarms, they are equal superpowers. However, each has
to spend resources to maintain vast nuclear arsenals, and those costs are
reflected in the payoffs.

                                 US
                      DISARM             DON’T DISARM

           DISARM     US: 8 trillion     US: 10 trillion
                      Them: 8 trillion   Them: 1 trillion
   THEM
           DON’T      US: 1 trillion     US: 5 trillion
           DISARM     Them: 10 trillion  Them: 5 trillion

Fig. 5.1 The disarmament game

• If both countries disarm, they again end up with equal jollies. PLUS,
they’re no longer spending on armaments and can put those resources to
greater economic effect. As the payoffs reflect, both nations are better off
than they were when they had a lot of nukes.

If we play this game simultaneously and without cooperation, both countries
will keep their weapons. They both have a dominant strategy to not
disarm (run through the analysis in Fig. 5.1 to confirm). This is a Prisoner’s
Dilemma-style game, and both countries can see that the cooperative solution
is better. Question is: can they get there?
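If you’d like to confirm the analysis mechanically, the dominant-strategy check for Fig. 5.1 takes only a few lines of Python. The payoffs come from the figure; the dictionary and function names are our own:

```python
# Dominant-strategy check for the disarmament game, with payoffs
# (in trillions of jollies) read straight from Fig. 5.1.
CHOICES = ("disarm", "keep")
PAYOFFS = {  # (us_choice, them_choice) -> (us_jollies, them_jollies)
    ("disarm", "disarm"): (8, 8),
    ("disarm", "keep"): (1, 10),
    ("keep", "disarm"): (10, 1),
    ("keep", "keep"): (5, 5),
}

def best_reply(player, other_choice):
    """The choice that maximizes this player's jollies, holding the
    other country's choice fixed."""
    if player == "us":
        return max(CHOICES, key=lambda c: PAYOFFS[(c, other_choice)][0])
    return max(CHOICES, key=lambda c: PAYOFFS[(other_choice, c)][1])

# "Keep" is the best reply no matter what the other side does, so
# neither country disarms on its own: a Prisoner's Dilemma.
for other in CHOICES:
    assert best_reply("us", other) == "keep"
    assert best_reply("them", other) == "keep"
```

Both countries land on 5 trillion each even though 8 trillion each is sitting right there on the table, which is exactly the dilemma a treaty has to solve.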
If Us and Them sign a treaty in which they both agree to disarm, they are
both better off. Signing the treaty is their attempt at cooperating. It’s a good
signal, but is it enough? The treaty itself isn’t enough, even with the punish-
ments and impartial observers and ongoing monitoring. Both countries once
had an incentive to engage in nuclear armament—it’s how they got to the
current position. Assuming that incentive hasn’t gone away, both countries
now have an incentive to cheat and to keep their weapons.
We’re circling around a word here: trust. In order for cooperation to work,
there must be innate trust OR a mechanism built in to ensure trust in some
way. Us and Them need a way to keep each other accountable. Like Butch
and Sundance, it would help if these players could observe each other.
The solution in the 1960s Cold War was spy planes. Famously, for the US,
the high-speed, high-altitude camera with wings known as the U-2 helped
document disarmament by the USSR. (Turns out, they weren’t high-speed
or high-altitude enough and the USSR shot one down. Not exactly coopera-
tive behavior on both sides, but we digress). Perfect observation mechanisms
weren’t technologically possible in the 1960s, and neither was a cooperative
solution to the Cold War.
Until…
The US and USSR needed something that couldn’t be shot down; some-
thing that was not in a country’s protected air space. Something like… a spy
satellite: an orbiting camera with the ability to see anything and everything
that looked like a nuclear weapon within a country, but not be shot down.
More importantly, observation could be done by both sides, but at unknown
times. With satellites, each side would assume it was being watched. By the
1980s, both countries had this capability, treaties were signed, and the Cold
War was done. Observation reinforced the trust necessary for the cooperative
solution—the treaty—to work.
The structure of most games is not carved in stone. Rules and other aspects
of the structure can be changed to allow for things like cooperation if and
when that leads to better outcomes. In the case of the Cold War, the game
was restructured so that bluffing wasn’t possible. Once each side could observe
the other, trust was in the cards.
Recall that bluffing is one way to take advantage of an information asym-
metry, for the player who knows more. For cooperation to be possible,
information needs to be shared, transparent, consistent, and available to all
parties.
The Prisoner’s Dilemma, as normally structured, isn’t a cooperative game.
Either player can say that they’ll choose Don’t Confess, but their self-interest
is better served by choosing to confess and they always choose that way. But,
we said this Cold War game was a Prisoner’s Dilemma style game. What’s the
difference? Why doesn’t the ability to observe get Butch and Sundance to the
cooperative solution?
Both Prisoner’s Dilemma and the Cold War games are structured as simul-
taneous games—both players make their choices at the same time. The nature
of the Cold War game in the real world is different because it is a series of
subgames over time. This creates a timing difference that makes long-term
cooperation possible. There’s nothing permanent about the choices Us and
Them make.
If a cooperative treaty were signed that said both countries would stop
building new weapons, it would require years of cheating before the cheater
could take advantage of their additional (secret) weapons. With modern
surveillance technology, that’s (hopefully) impossible to achieve. Even if a
country did cheat, there would be a next iteration of the game, a next decision to make and
another after that. Because the game is repeated, it allows players to iterate
to a solution that’s better for everyone. This is consistent with the real world.
We might not trust somebody enough when we first play a game, but by the
tenth round, we are willing to hear them out.
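The effect of repetition can be sketched in code. Here is a small simulation of a repeated Prisoner’s Dilemma-style game in Python; the payoff numbers and the well-known “tit for tat” strategy (start cooperative, then copy the other player’s last move) are our own illustration, not anything from this chapter’s figures:

```python
# A sketch of why repetition helps: the same Prisoner's Dilemma-style
# choice, played over and over. "Tit for tat" starts cooperative and
# then simply copies the other player's previous move.
def tit_for_tat(their_history):
    return their_history[-1] if their_history else "cooperate"

def play_repeated(strategy_a, strategy_b, rounds=10):
    payoff = {  # jollies: mutual cooperation beats mutual defection
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"): (0, 5),
        ("defect", "cooperate"): (5, 0),
        ("defect", "defect"): (1, 1),
    }
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)  # each player reacts to the OTHER's past
        b = strategy_b(hist_a)
        pa, pb = payoff[(a, b)]
        total_a += pa
        total_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

# Two tit-for-tat players lock into cooperation from round one.
print(play_repeated(tit_for_tat, tit_for_tat))  # (30, 30)
```

A player who always defects wins round one, then collects the mutual-defection payoff forever after, ending up far below the 30 jollies that sustained cooperation delivers.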
In its usual form, the Prisoner’s Dilemma is a bit more immediate. Imagine
the players are able to observe each other but they still have to choose to
confess or not simultaneously. Not knowing what the other will choose still
leads both of them to confess. The immediacy and finality of the decision
make cooperation difficult.

The Nash Equilibrium


Both life and game theory are easier if at least one player has a dominant
strategy. Ah, to be so lucky. Most times, in many games, finding one optimal
solution isn’t possible. When there’s more than one possible outcome, we
try to find other ways to approach and if possible, solve the game. One
such approach is called a Nash Equilibrium. Named for Nobel Prize-winning
economist (and subject of the movie “A Beautiful Mind”) John Nash, the
Nash Equilibrium is one type of stopping point for a game.
Let’s back up. An equilibrium is a set of choices that solves a game. It is
a form of stasis for that game, where neither player wants to move. In our
games, we’ve reached equilibrium by dominant strategies, dominated strate-
gies, and other methods. In some games, there were even two equilibria, and
we needed a tie breaker. Ultimately, an equilibrium is a solution to a game
where no player wants to move to another choice.
A Nash Equilibrium is one such solution, where a game could end up. In a
Nash, you are doing the best you can given what the other player has chosen.
Put differently, in a Nash, no player can unilaterally make himself better off.
In these games, you don’t have a choice that’s always best for you the way you
did in a dominant strategy; you have a choice that’s best for you given the
other player’s choice.
Let’s see how this works in a familiar situation: the Dropped Call Game.
You and your significant other are talking on your cellphones when suddenly
the call drops. One of you loses cell service and the call is terminated. Who
calls whom back?
Take a look at the payoff matrix in Fig. 5.2. Let’s assume that you were in
the middle of a great conversation, and “Talk More” gives both of you the
most jollies.

                                  CALLER 1
                       CALLS BACK           WAITS

         CALLS BACK    Both get voicemail   Talk more
CALLER 2
         WAITS         Talk more            Don’t talk again

Fig. 5.2 The dropped call game

The game has two Nash Equilibria: one player calls while the other player
waits. It doesn’t matter who waits and who calls as long as they don’t do the
same thing. So, the game has two possible solutions. How do we get to one
of them? How do we get to finish our conversation??
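Enumerating the Nash Equilibria of a small game like this is mechanical enough to hand to a computer. Here’s a sketch in Python; the jolly numbers are our own illustration, chosen so that talking more is best for both and never talking again is worst:

```python
# Enumerating the Nash Equilibria of the dropped-call game.
from itertools import product

CHOICES = ("call", "wait")
PAYOFFS = {  # (caller1, caller2) -> (jollies1, jollies2)
    ("call", "call"): (1, 1),   # both get voicemail
    ("call", "wait"): (3, 3),   # talk more
    ("wait", "call"): (3, 3),   # talk more
    ("wait", "wait"): (0, 0),   # never talk again
}

def nash_equilibria(payoffs):
    """A profile is a Nash Equilibrium when neither player can gain
    by unilaterally switching to the other choice."""
    found = []
    for c1, c2 in product(CHOICES, repeat=2):
        p1, p2 = payoffs[(c1, c2)]
        stable1 = all(payoffs[(alt, c2)][0] <= p1 for alt in CHOICES)
        stable2 = all(payoffs[(c1, alt)][1] <= p2 for alt in CHOICES)
        if stable1 and stable2:
            found.append((c1, c2))
    return found

print(nash_equilibria(PAYOFFS))  # [('call', 'wait'), ('wait', 'call')]
```

Two equilibria, and nothing inside the game to choose between them, which is exactly why a coordination rule is needed.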
When games have multiple Nash Equilibria but no dominant strategies,
you need to do more to solve the game—to get to one of the equilibria.
Games like these are often called coordination games, and the hardest
thing about them is to find a way to get to the mutually beneficial outcome.
They need coordination. In many cases, people will revert to an under-
standing or a rule for solving the game. Something like “the person who
originally made the call should always call back.” That certainly works as
long as everyone knows the rule and everyone follows it.
These rules emerge when people have what game theorists have called a
focal point. This sounds fancy and theoretical, and at some level it is, but
the idea is straightforward. Focal, like focus, means we’ve found a way of
cutting through the noise and randomness of the game. We have found an
informal rule that works for the players most of the time, the rule is easy to
remember, and the rule often springs from what we call common sense. Fancy
strategists will formalize rules into heuristics or other modes of operation. The
bottom line is this: humans draw on their own preferences and ways of doing
things to get out of a Nash situation. It is usually something intangible—
some unstated rule—that allows the conversation to continue.
That’s a phrase we say a lot and don’t really think about, but “common
sense” is two really powerful words. Common, meaning shared by most

people, well-known, or typical, and sense meaning intuitive, sometimes


even automatic. A coordinating mechanism that comes from common sense
should work most of the time and should be easy to apply to multiple players.
We needed a coordination mechanism like “whoever made the call, calls
back” in the dropped-call game because of the specifics of that game. If the
situation allows, the easiest coordination mechanism is direct communica-
tion. Just say what you’re going to do. That always works in these games
because the only real problem is coordination. How many times in life have
we said: “I’m heading into a tunnel/elevator/building; if I lose you, I’ll call
you back.” In one direct statement, you pre-solved the game.
We’ve talked about games that have outcomes in which one player is better
informed than the other and uses that advantage to improve their outcome.
Those games will bring in things like bluffing and lying. Those asymmetric
information strategies are of no value in a coordination game, however, since
players can only coordinate with full information.
When a coordination game has a rule that everyone knows and uses, like
“I’ll call you back,” game theorists call it an assurance. When there are game-
theoretic situations in which players need to coordinate in order to reach a
mutually beneficial outcome, finding and using that assurance guides players
toward the cooperative solution. It helps players know that things will turn out okay.

Punishment
One way to restructure the single-game Prisoner’s Dilemma to make it a
cooperative game is to incorporate a form of punishment. Doing that means
that we have to make explicit changes to the payoff matrix to punish a player
if they confess. Recall the original Prisoner’s Dilemma game in Fig. 5.3:
Suppose both players agree that if one of them confesses and the other
doesn’t, the no-snitch will do something to the snitch in jail that would
effectively ruin his day (break something that takes a long time to heal). The
punishment is worth an extra 5 years to the snitch, because… you know, violence is
bad. The payoff matrix would then become Fig. 5.4.
This changes the game. Now, neither player has a dominant strategy. The
game could end up at either “both confess” or “neither confess.” Both of those
outcomes are Nash Equilibria; neither player can unilaterally make himself
better off.
Is the game now cooperative? Yes… with one caveat.
Suppose Butch says he won’t confess. If he does what he says and Sundance
also doesn’t confess, they both get 3 years. If Sundance confesses despite what

                                  BUTCH
                        CONFESS              DO NOT CONFESS

            CONFESS     Butch: 5 years       Butch: 10 years
                        Sundance: 5 years    Sundance: 1 year
SUNDANCE
            DO NOT      Butch: 1 year        Butch: 3 years
            CONFESS     Sundance: 10 years   Sundance: 3 years

Fig. 5.3 The Prisoner’s Dilemma, redux

                                  BUTCH
                        CONFESS              DO NOT CONFESS

            CONFESS     Butch: 5 years       Butch: 10 years
                        Sundance: 5 years    Sundance: 6 years
SUNDANCE
            DO NOT      Butch: 6 years       Butch: 3 years
            CONFESS     Sundance: 10 years   Sundance: 3 years

Fig. 5.4 The Prisoner’s Dilemma, with punishment

Butch said, Butch is worse off—but more importantly, so is Sundance. If
Sundance ignores what Butch says and does, and confesses, Sundance’s prison
time goes from 5 to 6 years. So, Sundance has no reason not to believe Butch.
The punishment—reflected in the payoffs—seems to be working.
If Sundance does not confess, would Butch have any reason to not follow
through and confess? Nope. Butch will get 6 years if he does not follow
through, instead of 3 years if he follows through.

Both players can now get to the cooperative solution by agreeing, upfront
and explicitly, to choose “don’t confess.” The game doesn’t have the observ-
ability that we normally require for a cooperative game, but we’ve restructured
the payoff matrix to make it a coordination game in which direct communica-
tion (just say what you’re going to do) will get both players to the cooperative
outcome.
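The same Nash check can be run on the punishment version of the game. The years come from Fig. 5.4; because payoffs are prison time, “better off” now means fewer years. The code is our own sketch:

```python
# Nash Equilibria of the Prisoner's Dilemma with the 5-year punishment
# for a lone snitch (Fig. 5.4). Payoffs are prison years, so "better
# off" means FEWER years.
from itertools import product

CHOICES = ("confess", "dont")
YEARS = {  # (butch, sundance) -> (butch_years, sundance_years)
    ("confess", "confess"): (5, 5),
    ("confess", "dont"): (6, 10),
    ("dont", "confess"): (10, 6),
    ("dont", "dont"): (3, 3),
}

def nash_equilibria(years):
    """Profiles where neither player can unilaterally shorten his
    own sentence."""
    found = []
    for b, s in product(CHOICES, repeat=2):
        yb, ys = years[(b, s)]
        if all(years[(alt, s)][0] >= yb for alt in CHOICES) and \
           all(years[(b, alt)][1] >= ys for alt in CHOICES):
            found.append((b, s))
    return found

print(nash_equilibria(YEARS))  # [('confess', 'confess'), ('dont', 'dont')]
```

With the punishment in place, confessing is no longer dominant and “neither confesses” appears as a second equilibrium, which is the opening that lets direct communication work.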

Punishment, Mafia Style


There are game-theoretic setups in which punishment is meted out by
someone other than the players in a game. Members of organized crime
families and other, less nefarious organizations, like the military, often have
an honor code that requires all members to monitor and enforce cooper-
ative choices for the entire group. If that were the case, the payoff matrix
would have both Butch and Sundance being punished for confessing by a
third party. Let’s call him Mickie.
We’ll make two changes to the payoff matrix, if Mickie is in the picture.
First, the “punishment” for confessing is eight extra years (Mickie’s, like, 6'5"
and used to be a prize fighter) and second, both of them would get punished
if both confessed (Fig. 5.5).
The presence of a third party doing the punishing means both or neither
man could be punished. This gives even more incentive to cooperate, since

                                  BUTCH
                        CONFESS              DO NOT CONFESS

            CONFESS     Butch: 13 years      Butch: 10 years
                        Sundance: 13 years   Sundance: 9 years
SUNDANCE
            DO NOT      Butch: 9 years       Butch: 3 years
            CONFESS     Sundance: 10 years   Sundance: 3 years

Fig. 5.5 The Prisoner’s Dilemma, mafia style



there’s an opportunity to both “win,” or at least survive. Mickie gives both
players a dominant strategy to not confess, and the two get the game to the
cooperative solution.
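Mickie’s effect can be verified the same way: with the Fig. 5.5 years, staying quiet never costs either player extra time, no matter what the other does. A quick sketch (the helper names are ours):

```python
# Check that Mickie's third-party punishment (Fig. 5.5) gives BOTH
# players a dominant strategy to stay quiet. Payoffs are prison years,
# so a choice is dominant when it never yields MORE years than the
# alternative, whatever the other player does.
CHOICES = ("confess", "dont")
YEARS = {  # (butch, sundance) -> (butch_years, sundance_years)
    ("confess", "confess"): (13, 13),
    ("confess", "dont"): (9, 10),
    ("dont", "confess"): (10, 9),
    ("dont", "dont"): (3, 3),
}

def years_for(player, mine, theirs):
    """Prison years for `player` (0 = Butch, 1 = Sundance) when he
    picks `mine` and the other player picks `theirs`."""
    profile = (mine, theirs) if player == 0 else (theirs, mine)
    return YEARS[profile][player]

def is_dominant(player, choice):
    return all(
        years_for(player, choice, theirs) <= years_for(player, alt, theirs)
        for theirs in CHOICES
        for alt in CHOICES
    )

assert is_dominant(0, "dont") and is_dominant(1, "dont")
assert not is_dominant(0, "confess") and not is_dominant(1, "confess")
```

Both players now have “don’t confess” as a dominant strategy, just as the text argues.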
We can’t assume that this will work consistently. Those third-party
enforcers have to be “described.” How do they work? How are they getting
jollies? Why should they be trusted to do the enforcing? What if Mickie’s
bluffing?
Let’s leap from mobsters to international relations to show how the
approach is imperfect. Both the League of Nations (circa 1920) and the
United Nations (circa 1945) were intended to eliminate aggression between
countries. They were supposed to make sure war never happened again. They
were created to get the world to an international cooperative solution.
While the latter organization has, so far, prevented World War III, it has
not in any way prevented all war. The governing structure of the UN shows
us why. While most countries are members (part of what’s called the General
Assembly), there are 15 countries on the Security Council, five of which are
permanent members. Any of the permanent members can unilaterally veto a
UN choice.
That veto power means that those five countries can, more or less, do
anything that they want to do. Unfortunately, that often extends to coun-
tries that are “friends” with the five permanent members. These organizations
were certainly founded with a meaningful charter, but the structure limits its
ability to serve its charter. As Granny said, “the road to hell is paved with the
best intentions.”
[box]

Game Theory Goes Back to the Movies


There are many movie quotes and situations that are game-theoretic. Let’s talk
about one that comes from The Godfather Part II. Now head of the family,
Michael Corleone tells a top lieutenant: “keep your friends close, but your
enemies closer.” Michael is providing a rule to address the problem of decision-
making when the other player is not likely to act in your best interest.

If the downside of a situation is dire—possibly fatal—then you need to know


if, when, and why the other player might make the choices that will lead to
your demise. The implication of the expression is that your enemies are much
more likely to make those choices than your friends. Keeping them close = you
can keep an eye on them. Keeping them close = you have more information
about their intentions at any given time.

[end box]

Automatic Punishment
Punishment is important in reaching and maintaining the cooperative
outcome, but dealing out said punishment can be hard to do. Causing pain,
for most people, is itself painful and thus unpleasant. Did Granny say it?
“This is going to hurt me more than it hurts you” implies punishment
negatively affects a person’s jollies, even if you’re not the one on the receiving end.
Given this reality, we sometimes try to make the punishment automatic.
We remove the hesitation from the player who needs to punish because
the so-called “human element” might interfere with follow-through. Making
punishment automatic means it is no longer a choice being made by a person
who might be unwilling to “do what has to be done.” It also removes punish-
ment from the punisher’s jollies, since they are absolved from having to
personally make that decision.
In the classic movie Dr. Strangelove, the Soviet Union builds a computer-
controlled, unstoppable “doomsday machine” that would destroy the entire
world if anyone (meaning, at the time, the US) ever launched a nuclear attack
against it. The idea: this device would end the arms race because a preemptive
nuclear attack would kill both sides with certainty.
SPOILER: unfortunately for the world, Soviet leaders didn’t tell anyone
about the doomsday machine. Nobody knew until a deranged US general
launched an attack and, as it turns out, the machine automatically retali-
ates and destroys the world (apologies to anyone who hasn’t seen this truly great
comedy—definitely go see it if you can).
In effect, automatic punishment is what some dog owners might think
of as a shock collar; a device that gives a dog an electric shock every time
the critter barks. The idea is that the dog will learn not to bark as a way
of avoiding the automatic shock. Veterinarians and animal trainers generally
agree that these devices don’t work and might be counterproductive; instead
of controlling a dog, they might make the dog more aggressive. That and Dr.
Strangelove should be cautionary tales; automatic punishments are difficult
elements of a game since they directly affect the outcome but aren’t in the
control of either player.
Removing the human decision-maker from the punishment phase of a
game can backfire. Use of a gun in the commission of a crime (robbing people
at gunpoint) adds mandatory extra prison time in many places. That makes
sense if you’re trying to deter the use of guns, but some people believe that
if that extra time is excessive, it creates an incentive to actually use the
gun: the criminal might be able to eliminate a lone witness and avoid being
caught.

While we won’t wade into this gun control controversy, it’s easy to see how
automatic enforcement is imperfect. Here’s another thought: some countries
have automatic penalties for crimes like murder to avoid even the appearance
of selectively applying harsher punishments to different groups of people.
That is another aspect of humanity that makes punishment harder than it
appears. For punishment to work, to change the decisions of all players and
reach the cooperative outcome, punishment cannot be applied selectively.

Let the Punishment Fit the Crime


We’ve talked about punishment, but what about forgiveness? De-
incentivizing noncooperative actions might not require punishment per se.
Sometimes, pressure or negative perceptions that could take effect in the next
round of games is enough to deter that kind of behavior. In repeated games,
where we play in an iterative way, can we bring a noncooperative player back
to the cooperative solution?
Punishment is a funny concept. It can be formal, as in jail time. It can
also be more esoteric, as in giving somebody the cold shoulder or outright
ghosting them. In legal terms, most societies have put limits on punitive judg-
ments. Capital punishment is just not an option if the crime is, for example,
grand larceny. In game theory terms, this means jollies need to be reason-
able and punishment should be in line with the jollies that the bad behavior
created. Anything out of line with that metric would give players the sense
that the person doling out the punishment is doing so with other payouts in
mind (perhaps they have an agenda, a vendetta against you, or some other
reason for blowing things out of proportion).
What can go wrong in a game-theoretic situation if the punishment
is excessive? Among other things, people won’t want to play with you.
Overbearing punishments often direct people to a maximin strategy. If the
difference is large enough, they might decide not to play, or not to confess
mistakes or indiscretions at all. With large punishments, we often see a reduc-
tion of information, and therefore fewer games that land on an optimal
solution for all players.
Punishments are an enormous problem in many organizations. If the threat
of punishment is severe, meaning it significantly alters jollies beyond a certain
amount, the incentive to play honestly is removed. Now, jollies are based
on avoiding punishment instead of cooperating to find the best outcome.
Mistakes happen in any repeated game. So, if the intention of the threat of
severe punishment is to stop people from making bad choices, the result could
be more cover ups and inefficiency rather than higher performance.
Bad behavior sometimes has implications that the decision-maker doesn’t
understand. The tricky thing about jollies is that they come from a lot of
different parts of our psyche. “An eye for an eye,” which dates back thousands
of years (Hammurabi 1750 BC), is an early attempt to align the punishment
with the crime. But, as subsequent scholars have said, you end up with a
lot of people without eyes. Explicit and pre-stated punishment is interest-
ingly missing from many of the most famous rules of societal games. The Ten
Commandments and the US Constitution have lots of do’s and don’ts—but
no specific punishments. That’s left to the players to decide.
Realistically, punishment and forgiveness are sometimes not possible, and
making some bad choices even once changes all future aspects of a relationship.
Let's say a professional basketball player snaps at practice one day and
attacks the head coach. The League may punish him with a suspension or a
fine, but that doesn’t mend the relationship. He will probably be traded the
next season.
[box]

In the News: Nuclear Disarmament vs. Climate Policy


When the US and (then) USSR agreed to reverse their build-up of nuclear
missiles, it marked a dramatic, positive change in the direction of international
relations. But as much as we’d like to think this was the result of pragmatic,
practical leaders seeing the advantages for the world in cooperative behavior,
it was facilitated by innovations such as surveillance satellites and other
technology that let both countries see each other's missile silos.

Let’s pivot to another crisis: climate change.

Policymakers believe that we need cooperative agreements among the largest
countries in the world to control greenhouse gas emissions caused by carbon-based
fuels used to power most cars, factories, and electricity-generating plants.
Once these international agreements are signed, there will be a reduction in
emissions, right?

Well, not automatically.

Agreements are in place to control output and impose other restrictions that
cost short-run money to improve long-run outcomes. No problem there,
except countries are short-sighted. Large economies will take a larger hit, so
the agreement is there to make sure everyone is incentivized to participate.
5 Do Unto Others: Cooperation in Games 95

Like nuclear disarmament, though, observation is key. Without a reliable and
transparent way to monitor emissions (and control cheating), an agreement is
just a piece of paper.

Do we also need punishment? This is where nuclear war and greenhouse emis-
sions are a little different. In the long run, sure, they’re both pretty bad,
don’t get us wrong. But, cheating on building nuclear weapons is different
than maintaining fossil-fuel consumption. The punishment for going against
a nuclear treaty might be nuclear war. This is a big deterrent. Punishments
for going against emission standards should be equally powerful: significant
economic sanctions, fines, or other costs. They also need to be larger than the
possible gains from deviating from the treaty (lest the payoff matrix incentivize
noncooperative behavior).

A viable greenhouse gas limitation treaty needs both observation and punish-
ment to succeed.

[end box]
While cooperation is a useful area of discussion in game theory and one of
the most real-world forces we see, one could argue that cooperation defeats
the purpose of game theory. Most games, most of the time, are noncoop-
erative. Having to make a choice without knowing what the other player is
doing is the whole point of game theory, right?
It turns out, some of the biggest, most impactful games need bilateral
cooperation to ensure a good outcome.
Noncooperative games are not malicious. As we have seen, they are not
the same as self-interested, selfish behavior, nor are they unilateral. You’re still
playing with somebody else. So, we think of cooperation as a restructuring
of a game, something with commitments, promises, threats, or other things
that help us approach a cooperative solution in the real world. We’ll examine
these three things more in Chapter 7.

Too Many Cooks Spoil the Broth


A common complaint about academic game theory is that most of the mathe-
matical models and examples have only two players. Even worse, most games
are what’s called 2 × 2—two players and two choices. In the real world, there
are usually more players and more choices. When we teach and learn game
theory, we do these smaller games because they are easier to solve, understand,
and visualize. Remember, we aren't trying to be literal in most cases; we're
trying to learn technique and grow an approach to analyzing an environment
and specifying a game.
Granny has a zinger for this one too: “too many cooks spoil the broth.”
If we have a game with a small number of players and choices, the game is
easier to solve. It’s also easier to find ways to cooperate and thus reach the
good-for-everyone outcome that we have called the cooperative solution.
When you have too many cooks (or players), the cooperative solution
becomes less available. Beyond a certain number of players, observation
becomes more difficult, and we revert to more self-interested behavior. If
and when this happens and we end up at the noncooperative solution, most
people think that the broth has been spoiled. This is why we teach socializa-
tion early in childhood education. Learning how to navigate more complex
choices with more players and still reach a cooperative solution is part of
growing up.

The Golden Rule: Words Well Chosen


“Do unto others as you would have others do unto you.”
Granny didn’t say this one, but she might have quoted it a lot.
While a good expression, as we’ve pointed out, is often catchy and cute,
you have to think about the specific words that were chosen. In this case
(sorry, everyone), grammar matters.
Look at the tense of the verbs.
If you have a repeated Prisoner’s Dilemma game (or any noncooperative
game), one heuristic, MO, or way of deciding is a strategy known as “tit-
for-tat.” I do to you next time what you did to me this time. Tit-for-tat is
a great deterrent to noncooperative behavior, and it’s especially humbling in
each game. After all, you don’t want something to come back around in the
next game that could have been avoided, and you control that choice.
The expression that would fit that solution would be “do unto others as
others did unto you.” Tit-for-tat is a response rule. When everyone follows
that rule, and everyone has full information, it should lead to the cooperative
solution.
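To make the response rule concrete, here is a minimal sketch of tit-for-tat in a repeated Prisoner's Dilemma. The payoff numbers are our own illustrative assumptions (a standard Prisoner's Dilemma ordering), not figures from this book.

```python
# A minimal sketch of tit-for-tat as a response rule in a repeated
# Prisoner's Dilemma. Payoff numbers are illustrative assumptions.

PAYOFFS = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first; afterward, do what the opponent did last round."""
    return "C" if not history else history[-1]

def play(rounds, strategy_a, strategy_b):
    """Play two response rules against each other; return total payoffs."""
    a_hist, b_hist = [], []  # each side's record of the *opponent's* moves
    a_total = b_total = 0
    for _ in range(rounds):
        a_move = strategy_a(a_hist)
        b_move = strategy_b(b_hist)
        a_total += PAYOFFS[(a_move, b_move)]
        b_total += PAYOFFS[(b_move, a_move)]
        a_hist.append(b_move)  # A remembers what B just did
        b_hist.append(a_move)
    return a_total, b_total

# Two tit-for-tat players lock into cooperation from round one.
print(play(10, tit_for_tat, tit_for_tat))    # (30, 30)
# Against an always-defector, tit-for-tat loses once, then punishes forever.
always_defect = lambda history: "D"
print(play(10, tit_for_tat, always_defect))  # (9, 14)
```

With full information, two tit-for-tat players sustain the cooperative payoff every round, which is the point of the response rule.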
The Golden Rule, on the other hand, is more prospective and forward
thinking. “Do unto others as you would have others do unto you.” Note
the choice of the verb “would have.” For this rule to help you reach the
cooperative solution, you need more detail and more structure. In particular,
everyone would have to be “golden rulers.” The downside of cooperating with
a noncooperator is large, and you will abandon that strategy the first time
somebody acts in a self-interested way.
Mind you, you wouldn’t necessarily become completely self-interested. You
would try to sort the “others” in an attempt to discern who you could form an
alliance with. This might explain the human tendency to join clubs, churches,
and other groups with shared values. Familiarity becomes your method of
sorting and determining who you should trust to act cooperatively.
[box]

The Simple Logic of Rumspringa


Business leaders sometime call it “buying in” to the corporate culture. The
Marine Corps calls it “Semper Fi.” Granny called it “rowing in the same direc-
tion.” One requirement for any successful, cooperative social structure: people
have to want to be part of it. Outcomes depend on a voluntary and complete
“buy in” to get to cooperative solutions time and time again.

The more rigid the cooperative structure, the more this is true. The Amish
lifestyle, for example, is based on cooperative tenets, where community super-
sedes individualism. To choose this lifestyle, members must believe that this
approach will make them at least as happy as other lifestyle choices. It’s one
thing to be told the lifestyle is beneficial; it’s another to make the choice
voluntarily given available information.

Many Amish sects encourage individuals to remove themselves from the local
community and experience the non-Amish world prior to committing to the
way of life. Called the Rumspringa, the experience allows younger members
of the community to take residence elsewhere and live under different rules. If
the individual returns, they are actively choosing to be a part of the institution,
with full information about other available choices.

That the Amish religion is still around speaks to the validity of this approach.
Very few religions take this approach. Most have a rite of passage, formal accep-
tance, or ceremony of indoctrination—sometimes taken at a young age before
informed choice is even possible. They don’t encourage young members to test
drive other religions.

Cooperation, sacrifice, and other words that mean selflessness, trust, and even
faith are associated with group outcomes. Indeed, a lot of our upbringing—
across almost all modern religions and societies—is intended to help us avoid
the noncooperative outcomes we observe in the Prisoner’s Dilemma.

[end box]
Conclusion
Learning to share, to wait your turn, and play nice with others are all
preschool lessons that train you to act in a way in which society is better
off. These skills are meant to improve the collective; they are for the benefit
of the group, not only the individual. Sometimes, the downside of noncoop-
erative behavior doesn’t occur to people until they are on the wrong side of
that behavior.
When someone butts in front of you in line, or hogs all of the toys, or
hits and pushes during playtime, you (hopefully) realize that when everyone
does these things, no one is better off. Good parenting is trying to get that
lesson ingrained without the (probably repeated) bad behavior punishments.
Bad behavior in the real world does sometimes win; bullies and crooks don’t
always get punished. But, are those individuals supported by society?
In legal parlance, the goal is to maintain individuals as “functioning
members of society.” Lessons are small, constant, and repeated by many
people. They are also reinforced by many social structures (corporate codes
of conduct, school bylaws, organized religion, legal codes). Reinforcement is
both the presence of a rule and the punishment for not following it. How
well they work, however, depends on the individual.
Granny always had something to say, especially when people evaded
capture or punishment: “you’ll get what’s coming to you.” When we see
noncooperative behavior, sometimes we wish authority were there to mete
out punishment (where's a cop when you need one?). Granny was convinced
judgment would happen eventually, and this holds up in a game-theoretic
sense. In our heads, the bad guys can't win in the long run. Without controls,
we iterate to smaller and smaller payoffs. Society becomes unpleasant and
unstable, and we are guided to suboptimal solutions.

Questions

5.1. We made it five chapters before bringing up COVID-19. Yay! Countries
attempted to slow its spread with cooperative behavior, specifically
masks and accelerated (publicly sponsored) mass-vaccination programs.
Mitigating communicable diseases is a good example of the upside of
cooperation. Our recent experience also shows us how difficult that is.
Discuss the recent debate between mask mandates and personal choice
using a game-theoretic approach. What is different between masks and
vaccinations? Why might one reinforce cooperative behavior better than
the other?
5.2. Many behaviors that we would think of as cooperative also fall into the
category of politeness. There can be, unfortunately, a bit of a culture
clash when we think about polite behavior. Culture clash is another
word for personal interpretation. Take opening a door, for example.
A man holds a door open for a woman. This was taught as polite
behavior once upon a time, but it might now be considered a bit
dated—especially if he only holds doors open for individuals that he
perceives are female. What if someone who identifies as male wants
to maintain the custom for politeness, without judgment? What could
he do? Think about the information problem(s) with this action,
depending on the perspective of the person he’s holding the door for.
5.3. The ability to observe the actions and decisions of all players is an impor-
tant factor in getting cooperative outcomes. Today, observability is at
an all-time high. Cars, houses (doorbells), stoplights, and retailers are
among the many places where cameras see everything. This new ability
to observe all behavior, cooperative and noncooperative, hasn’t become
sufficient to decrease self-interested or illegal behavior, though. What
else—besides the ability to observe players’ choices—is required to stop
noncooperative social behavior?
5.4. Waiting in line is a classic, yet frustrating cooperative behavior. At
places where wait times are notoriously long (e.g., Disneyland, a ballpark
restroom, the school pickup line), some organizations have devised ways
to have customers pay to skip the line. How is that extra fee similar to a
fine for noncooperative behavior?
5.5. Cooperative behavior is taught from an early age. Sharing and patience
are not innate; they are learned. Are there limits to cooperative behavior?
Are there games in which cooperation does not help the game reach the
best outcome?
6
A Stitch in Time…: Sequential Games

Granny must have grown up on a farm; she had the best expressions ever.
Take the short, simple saying “you reap what you sow.” Granny was saying
that your future crop is connected to the seeds you plant today.
Well… duh, of course. It turns out this classic bit of cautionary advice
defines sequential games. Good outcome or bad, what you get in the future
will depend on your choices and actions today.
Sequential games have the same features as simultaneous games: two
people are making choices that, together, determine their outcomes.
Sequential games simply have a different structure: rather than choose at the
same time, one player chooses before the other. Checkers, chess, Tic-Tac-
Toe, poker, and many other games are sequential. Unlike rock-paper-scissors
or the Prisoner's Dilemma, sequential games are fully observable: the second
player can see the first player's decision and then make his own. As we'll see, this
temporal nature of sequential games can give some players a unique advantage
or disadvantage.
Sequential games occur as part of the natural order. You spend your entire
life playing games with your future self. The “present you” makes choices that
the “future you” has to reckon with. While we don’t want to make it seem like
every choice you make is in need of a game theory deep-dive, having the tools
and approaches of game setup could help you sort out complex decisions.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_6
Player 1: Choice X
  → Player 2: Choice A → Outcome
  → Player 2: Choice B → Outcome
Player 1: Choice Y
  → Player 2: Choice A → Outcome
  → Player 2: Choice B → Outcome

Fig. 6.1 A tree!

Sequential Games: The Setup and the Approach


The easiest way to see how these games differ from ones we’ve already done is
by rethinking the payoff matrix. You can certainly try to keep our payoff
matrix and impose timing on it, but there’s an easier way. For sequential
games, we use what’s called a decision tree, or as game theorists call it, the
extensive form (Fig. 6.1):
Player 1 makes her choice. Then, Player 2 chooses. Both choices have an
effect on the outcome, so Player 1 has to look ahead to Player 2’s possible
choices to see what might happen. Sound complicated? It can be, but we do
it all the time.

The Nancy Reagan Game


This is a book on game theory, not statistics and probabilities, so we aren’t
going to spend any time talking about the likelihood of future outcomes.
Rather, for sequential games, we want to focus on the choice process and the
impact of today’s choices on future outcomes.
If you decide, when you are eighteen years old, to try smoking cigarettes,
you might become addicted to them and smoke for the rest of your life
(a possibly shorter life). If you never try cigarettes, you really can’t become
addicted to them. The choices of the two players, the young you and the
grown-up you, are inextricably connected. You could start talking about
probabilities—you'll probably be able to quit, don't have an addictive personality,
etc. Yet, even from a probabilistic standpoint, one thing is true: if you don’t
start smoking, you can’t become addicted to cigarettes.
With that in mind, we can look at a different extensive-form game where
both players are you. As simplistic as it sounds, looking into your own future
and seeing outcomes that you really don’t want to happen, you can give your-
self a straightforward approach to decisions you have to make today. Let’s see
smoking in extensive form (Fig. 6.2):
Today You: Try smoking
  → Future You: Quit → sunk cost + health problems
  → Future You: Don't quit → ongoing cost + health problems
Today You: Don't try smoking
  → Stay cigarette free

Fig. 6.2 Smoking or non?

Former First Lady Nancy Reagan's famous, oft-mocked advice about using
drugs, "just say no," applies very well to this game. The idea is simple: you
really can't become addicted to anything if you never try it in the first place.
Note, the outcomes for both versions of "try smoking" are worse than the
only outcome possible from not trying. This is similar to other game setups
where we show one or more very bad outcomes that we simply want to try
to avoid.
In the case of the game above, we're not including any positive aspects of
smoking. Smoking might make you feel good (chemically), or sophisticated,
or less stressed, and these feelings increase your jollies. This isn't a simultaneous
game, so we have to remember there's a temporal element. These
feelings increase your jollies in the short run. In sequence, we are able to
specify a portion of the payoffs (in this case health outcomes and money)
and still navigate the game.
Sequential games are both useful and a bit misleading. Suppose we’re doing
the game that Nancy Reagan was talking about, but instead of smoking legal
substances, we’re choosing whether to start illegal drugs like heroin or crack
cocaine. We can specify the entire sequential game that you play with your
future self when you consider trying drugs, in terms of another outcome:
careers (Fig. 6.3).
Today You: Try drugs
  → Future You: Quit → poetry professor
  → Future You: Don't quit → drug addict
Today You: Don't try drugs
  → College grad → economics professor
  → Trade school → electrician (highest paid)

Fig. 6.3 Future careers

There are better and worse futures in either case. However, there is still
one, unambiguous bad outcome—being a drug addict—that only happens if
you try drugs today. At this point, we aren't interested in the likelihood or
probability of these outcomes. The point of game theory is to look at payoffs
and, in this case, walk back a path to avoid the worst possible outcome.
From this maximin perspective, it makes sense to avoid the worst possible
outcome by never starting drugs. This reality is loosened by those who
consider the first step "experimenting." Whatever the rationale behind the
choice to try the drug, it is a choice that changes all possible future outcomes.
An outcome that you almost certainly do not want is now possible. If you
never "experiment" with cocaine, heroin, or crack, you can't possibly die of
an overdose from the addictive use of those drugs. From a sequential game
perspective, "just say no" is good advice, simplistic though it is. It removes
the possibility of the worst possible outcome, showing a maximin strategy in
an extensive form.
[box]

The First Credit Card


One of the hardest things for a person to learn is the difference between a
loan and a gift. In some countries, kids grow up getting many gifts and not
enough loans. Later in life, the ability to do things like buy a car, rent an
apartment, and buy a home will depend on credit history—a signal to lenders
that you know the difference between a gift and a loan and that you have the
wherewithal to pay something back.

As a parent, you want to help your offspring navigate these tricky waters. That’s
easier said than done.

Your teenager wants his own credit card, and you agree to co-sign—become
the default payer of the card—which allows your son to get his first credit
card. This thing is magic. He can swipe or tap to his heart’s content and
quickly spend more than his allowance or after-school job is capable of paying.
What do you, the parent and co-signer, do next? If you let your kid start
missing payments, his credit history will be damaged, and he might be living
in your basement a lot longer than you had planned. If you pay off the card,
he might not learn the difference between a loan and a gift, and he’ll overspend
repeatedly.

So, his choice and yours can affect his future outcomes. The fun part of this
game—and all games in the parenting space—is that you’ll both have different
outcomes, and you have to navigate the difference between money and jollies.
Spoiler Alert: we're not solving this game for you! Everyone plays this game
differently, and there isn't a "right" answer. Indeed, later in life, your kid will be
playing games like this with you. Adult children trying to deal with adult
parents have just as many complex conflicts. Maybe Granny didn’t say it, but
“it’s hard to raise your parents.”

[end box]
Rollback Solutions and First-Mover Advantages


Sequential games with your future self highlight the importance of
sequencing in decision-making. Throughout your life, you make contempo-
raneous choices that affect all future choices and outcomes. Game theory has
approaches and perspectives that are different for sequential games because of
this temporal element. When outcomes are a factor of choices and time, there
are specific considerations we, as humans, may use to make sense of choices.
Playing a sequential game well means looking ahead at how another player
will respond to your move. You’ll never win at checkers or chess if you can’t
look ahead and figure out your move based on the other player’s response.
Chess grandmasters (champions) often see 10–30 moves ahead in any given
game (and some can play multiple boards at a time)!
The solitaire game Pyramid is an excellent, one-person sequential game.
You are dealt a deck of cards in a large triangle starting with one card, then
two, then three, etc. until you have seven rows; the cards should overlap a bit
to “block” the cards in rows above. You will have twenty-eight cards when
you are finished. The remaining twenty-four cards form a draw pile.
You match up two cards that add up to 13 to eliminate them and reveal the
cards underneath. Each card is worth its rank: kings = 13, queens
= 12, jacks = 11, and aces = 1. If you have no available combinations that
equal 13, you draw a card and look for a combination with that card.
When played according to the rules, you can combine 6 and 7 (ok, here’s
some math: 6 + 7 = 13). Let’s say you deal your cards, and you have two
possible combinations of 6 and 7: one doesn’t free any new cards and the
game will be over; the other allows for more choices and the game keeps
going.
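The matching rule is simple arithmetic, and a short sketch can enumerate the legal moves available at any point. The rank labels and the `matches` helper are our own names; the sketch assumes only the two-card rule described above.

```python
# A toy sketch of the Pyramid matching rule: list every pair of currently
# uncovered cards that adds to 13. Helper names are hypothetical.
from itertools import combinations

FACE_VALUES = {"A": 1, "J": 11, "Q": 12, "K": 13}

def card_value(rank):
    """Rank is 'A', '2'..'10', 'J', 'Q', or 'K'."""
    return FACE_VALUES.get(rank) or int(rank)

def matches(available):
    """All pairs of uncovered cards that sum to 13 (the legal moves)."""
    return [(a, b) for a, b in combinations(available, 2)
            if card_value(a) + card_value(b) == 13]

print(matches(["6", "7", "K", "Q", "A", "4"]))
# [('6', '7'), ('Q', 'A')]
```

Looking ahead means asking, for each pair in this list, which cards its removal would uncover on the next turn.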
Looking ahead before you make your next choice is critical for winning this
game. It also means you’re looking at future choices you will want to make
but can’t make yet. This approach to sequential games is called a rollback
solution. A player determines an ultimate outcome of winning the game and
works backward through choices to get there.
Like simultaneous games, you can also use the temporal aspect of sequen-
tial games to determine whether or not you should actually play. You could
look at the cards and see a layout that makes it impossible to win at all.
That means there’s no rollback solution available to you. What do you do in
that case? End that game immediately and play a new game. Granny might
have been a Kenny Rogers fan, singing “you’ve got to know when to hold
‘em/know when to fold ‘em…” It is a poker version of the Pyramid game,
but he lays out a clear choice. There are times we have to walk away from a
no-win situation.
Sequential games do get trickier when you’re playing with another person,
in part because the final outcome is not as simple as you win or you lose. In
sequential games, potential outcomes are dependent on both players’ choices.
Depending on the circumstances, it may depend more on who goes first.
Playing Tic-Tac-Toe? We prefer to go first. This is less strategic than it is
numerical. Turns out, the source of the first-mover advantage in Tic-Tac-Toe
is actually mathematical. There are nine squares, and the first mover will get
to put their mark in five of them. We’ve heard it said that you have to be
really bad at sequential games to go first in Tic-Tac-Toe and lose. Try it, let’s
see.
If there’s a distinct advantage to go first in a sequential game, it has a first-
mover advantage. If there’s benefit in waiting and seeing what happens first,
there’s a second-mover advantage. Let’s take a look at a few examples.

First-Mover Advantage: The Coke and Pepsi Game
Suppose The Coca-Cola Company and PepsiCo, Inc. are considering adding
a new drink to their respective product lines. Working with their research
and development departments, each company has narrowed the choice to a
new diet cola or a new flavored soda. The companies have done some market
research, and they know that there won’t be enough new markets for them
to both choose the same thing. Ideally, Pepsi would launch one and Coke
the other. Their marketing departments do a lot of research and determine
the potential profits under all scenarios. The payoff matrix they develop, in
revenue, is in Fig. 6.4.
                        The Coca-Cola Company
                        Diet Soda               Flavored Soda
PepsiCo   Diet Soda     Coke -$1M, Pepsi -$3M   Coke $10M, Pepsi $6M
          Flavored Soda Coke $15M, Pepsi $5M    Coke -$2M, Pepsi -$4M

Fig. 6.4 A sweet choice (in revenue)

Ok, let's go through the usual ideas. Neither player has a dominant
strategy. If Coke chooses Diet, Pepsi wants Flavored, and vice versa. These
are the two Nash Equilibria: outcomes where, if they can get there, neither
company can unilaterally make itself better off.
How do we get to one of these results? We didn't specify anything about
how this game would be played—simultaneously or sequentially. If it's simultaneous,
the best we can say in game theory is that a heuristic or other
form of advantage will push the game to a particular solution. This is largely
unhelpful and not reflective of the real world. It's appropriate in this case
to use a sequential game to see what happens when one company goes first.

Company 1 chooses one innovation, then Company 2 chooses the other, and
voila, they get to one of the solutions.
Here’s the problem with that simple approach: Coke and Pepsi both want
to go first. This game has what’s called a first-mover advantage—and you
can see it in the payoff matrix. Both companies prefer diet soda over a new
flavored soda, as long as the other company chooses the opposite. Likely, it’s
because the market for diet sodas is much larger.
Let’s show the first-move advantage with two sequential decision trees. If
Coke goes first, it looks like Fig. 6.5.
If Pepsi goes first, it looks like Fig. 6.6.
Coke: Flavored Soda
  → Pepsi: Flavored Soda → Coke -$2M, Pepsi -$4M
  → Pepsi: Diet Soda → Coke $10M, Pepsi $6M
Coke: Diet Soda
  → Pepsi: Flavored Soda → Coke $15M, Pepsi $5M
  → Pepsi: Diet Soda → Coke -$1M, Pepsi -$3M

Fig. 6.5 Sweet choice, Coke as first mover

Pepsi: Flavored Soda
  → Coke: Flavored Soda → Coke -$2M, Pepsi -$4M
  → Coke: Diet Soda → Coke $15M, Pepsi $5M
Pepsi: Diet Soda
  → Coke: Flavored Soda → Coke $10M, Pepsi $6M
  → Coke: Diet Soda → Coke -$1M, Pepsi -$3M

Fig. 6.6 Sweet choice, Pepsi as first mover

But… the outcomes are the same, right? Not really. Remember, a sequential
game's first-mover advantage has each player looking at the final outcomes
and then the possible paths to get there.
If Coke goes first, it can see its best outcome is $15M, and this is achievable
if Coke chooses Diet first. Pepsi sees that its only possible outcomes
given Coke's choice are $5M or -$3M. It will choose Flavored, and that
gets Coke its $15M.
If Pepsi goes first, its best possible outcome is $6M, which would be
possible only if it chooses Diet. After that first move, Coke's only possible
outcomes would be $10M or -$1M. Coke would choose Flavored, and
Pepsi gets its $6M.
If Coke goes first, it ends up with $15M and Pepsi ends up with $5M.
If Pepsi goes first, it ends up with $6M and Coke gets $10M. In this
game, with these payoffs, the player that goes first can steer the outcome
to its advantage. Coincidentally, this lowers the payoff of the other company,
which is consistent with competition in the soft drink market.
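This rollback reasoning can be sketched in a few lines: the first mover evaluates each branch by assuming the second mover will best-respond to it. The function names are our own; the revenue figures are the ones from the payoff matrix above.

```python
# A sketch of rollback (backward induction) for the Coke/Pepsi game,
# using the revenue figures ($M) from the payoff matrix above.

# payoffs[(coke_choice, pepsi_choice)] = (coke_revenue, pepsi_revenue)
payoffs = {
    ("diet", "diet"): (-1, -3),
    ("diet", "flavored"): (15, 5),
    ("flavored", "diet"): (10, 6),
    ("flavored", "flavored"): (-2, -4),
}

def rollback(first_mover):
    """For each possible first move, find the second mover's best response,
    then let the first mover pick the branch that leaves it best off."""
    options = ("diet", "flavored")
    best = None
    for first in options:
        if first_mover == "coke":
            # Pepsi best-responds to Coke's move (maximizes Pepsi's payoff).
            second = max(options, key=lambda s: payoffs[(first, s)][1])
            outcome = payoffs[(first, second)]
            my_payoff = outcome[0]
        else:
            # Coke best-responds to Pepsi's move (maximizes Coke's payoff).
            second = max(options, key=lambda s: payoffs[(s, first)][0])
            outcome = payoffs[(second, first)]
            my_payoff = outcome[1]
        if best is None or my_payoff > best[0]:
            best = (my_payoff, first, second, outcome)
    return best

print(rollback("coke"))   # Coke: diet -> Pepsi: flavored -> (15, 5)
print(rollback("pepsi"))  # Pepsi: diet -> Coke: flavored -> (10, 6)
```

Running both cases reproduces the first-mover advantage: whoever moves first claims the diet branch and the larger payoff.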
When the rules of a simultaneous game are not pre-established or very
clear, we often experiment with timing. “What do you want for dinner?”…
“You go first!” We try to play the game sequentially instead. In some cases, it
takes another rule or requirement, like who goes first, to make this a useful
way to get to a solution.

Second-Mover Advantages
It doesn’t surprise people, based on experience with Tic-Tac-Toe or many real-
world situations, that there are first-mover advantages in sequential games.
Are there times when it’s better not to go first? We could simply make up a
sequential game that has a second-mover advantage and prove it theoretically,
but the point of this book is to find real-world examples of game theory.
It’s 1989. You live in the Midwest. It’s a SNOW DAY, and you’re oh so
happy. Reading Rainbow is done, and now you’re in front of the TV for
some midday programming. A familiar face appears and says to the audience:
“come on down!” Life is good.
A familiar second-mover advantage (in this case, a last-mover advantage)
comes from the US television game show "The Price Is Right." Contestants
win a prize (a new jet ski! Because that's so useful!) if they best guess
the price of the showcase prize. Whoever is closest without going over wins.
There are three contestants. You can hardly contain your excitement as you
pour another bowl of Fruity Pebbles.
The first person guesses $500, which you’re sure is too low. The second
person guesses $900, and you’re sure that’s also too low, but getting closer.
Here comes the third person’s guess, and it’s… yep, $901. The last contestant
has an advantage because they can simply guess $1 higher than the highest
previous guess. They get all the benefit of the second person’s guess, but now
contestant 2 can’t possibly win.
A few things about this game. First, it’s zero-sum. There is one winner
and two losers. When there is uncertainty in a game, going last allows you
to decide with a little more information than the first person had available
to them. The first mover, roughly speaking, has no idea what range to set,
how high to guess, or even the relative value of a Jet Ski. The last mover now
has a clear range, and he knows exactly how to beat at least one of the other
players.
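The last mover's rule of thumb can be sketched directly, assuming the closest-without-going-over rule described above; the helper names are our own.

```python
# A toy sketch of the last mover's edge in the showcase guessing game:
# bidding $1 above the current high bid captures that bid's entire
# winning range (closest-without-going-over rule assumed).

def winner(guesses, true_price):
    """Index of the winning guess: closest without going over, else None."""
    valid = [(g, i) for i, g in enumerate(guesses) if g <= true_price]
    return max(valid)[1] if valid else None

def last_mover_guess(prior_guesses):
    """Bid $1 over the highest previous guess."""
    return max(prior_guesses) + 1

prior = [500, 900]
mine = last_mover_guess(prior)        # 901
print(winner(prior + [mine], 1200))   # 2 -- the last mover wins
print(winner(prior + [mine], 700))    # 0 -- only the $500 bid survives
```

Notice that the $1-over bid makes it impossible for the second contestant to win: every price that would have crowned the $900 guess now crowns $901 instead.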
Business strategists, especially in innovation and marketing, have explored
the issue of first- vs. second-mover advantages for decades. The consensus:
a first mover may reap initial benefits, but it also makes the first flaws and
mistakes. Second movers, and any subsequent followers, have a chance to see
these mistakes and improve on the product before launch. In this iterative,
sequential game, it is often the second mover who ends up with the most
market share.
[box]

Second Mover Advantage: The Joke


Two people walking through the woods are suddenly set upon by an angry
bear. As they try to run away one of them says “we’ll never be able to run
faster than that bear!” The other responds “I don’t have to run faster than the
bear. I only have to run faster than you!”

[end box]

Home Field: A First-Mover Advantage?


There is a well-accepted perception that playing a team sport in one’s home
stadium gives the home team an advantage. Baseball is zero-sum, and it has
a sequential aspect to it; the visiting team bats first (has the first opportunity
to score), and the home team plays the field. The teams reverse roles after the
batting team gets three outs. That pattern repeats in every inning for nine
innings, or until there’s a clear winner. So, in every inning, the home team
knows how many runs the visiting team has scored when they come up to
bat.
This structure gives the home team an advantage; they always know how
many runs they need to score to tie the game, take the lead, or win. They can
devise a strategy with more information than the visitors.
Home field advantage isn’t just the ability to observe and react in the game.
It also includes human factors: players can sleep in their own beds, eat in their
own kitchens, and use their own locker rooms. Fans in the stands are more
supportive—perhaps quieter—when the home team is trying to score (and
they can be disruptive when the visiting team is trying to score).
Home field advantage has been shown, statistically, to be a real thing. Is it
game theory? After all, sports are supposed to be outside of the traditional
game structure, right? Well, some games—including some sports—have a
distinct advantage based on location or timing.
112 J. Prag and A. I. Prag

We know that our choices and decisions require some thought process; an
ability to work through all of the choices, payoffs, and iterations that lead to
our best possible outcome. If you're making decisions in your home, generally
relaxed, you probably make better choices. It's not just that you make more
cooperative or better-informed choices with more information, it’s that you
process things better in a comfortable situation.
When you’re in public, at a gathering of professionals who are mostly
strangers, or in an art museum, or at a concert, you are, quite literally, a
different person. You are subject to peer pressure, noise, discomfort, and other
external forces. You make choices that often, in retrospect (or “when you get
home”) can be seen as incorrect or inconsistent. These choices might not
have been optimal, or they didn’t fit the model of your typical choice. Peer
pressure, in particular, might be thought of as a jolly killer. You find your-
self doing things because the people around you are doing them, not because
you actually want to do them. This is best thought of as a “jollies conflict.”
The desire to be, or feel, accepted is real, but it’s also often in conflict with
other sources of happiness. One could imagine that a person makes different,
perhaps even better choices when they are able to make their choices on
“home turf.”
A good example of this difficult situation can be seen when a person goes
away—as in, away from their hometown—to college. The short-run desire
to feel accepted frequently blurs the long-run consequences of a student’s
choices. Sports fans think of home field advantage as the impact of home
comfort on players. Parents think of home field advantage as a reason to keep
kids close to home: things are more familiar and easier to control, with more
information and a shorter processing time.
[box]

Order Matters
The sequential reality of life, what we might think of as historical realities,
makes us realize that order matters. Here's a colorful example: The Beatles' first
number one hit in the US was “I Want to Hold your Hand,” released in 1964.
It’s a bouncy, catchy, non-threatening song in every way and arguably, it helped
to cement rock and roll as a musical form. Jump ahead thirty years to Sir Mix-
a-Lot’s 1992 hip hop hit “Baby Got Back,” and you’re tempted to say popular
music has come a long way. Granny might have some strong opinions as well.

The bands that came before and pushed the limits of censorship paved the way
for today’s artists. Question: would it be possible to reverse the order? Imagine
the moral outrage, the parental protests, radio stations being shut down, etc.
if “Baby Got Back” had come first. As it is, there were protests against some
songs and some performers, but “I Want to Hold your Hand” being sung on
the Ed Sullivan Show allowed us to, three decades later, hear “Baby Got Back.”
In sequential games, even esoteric ones, order matters.

[end box]

The Farm System


All games have an inherent uncertainty in them. You can structure the payoff
matrix any way you like—dollars, jollies, time in jail, whatever—but you can't
be 100% certain you'll receive what the payoff matrix says you should. In the
introduction, we said that most of those issues are for another
book—a book on probabilities and statistics.
Sequential games give us another way to link uncertainty and game theory.
It’s easy to tell a child “stay in school” and “don’t use drugs,” with the justifi-
cation that they’ll be a lot better off in the long run. What if the child has no
examples of the long run? What if they don’t see a lot of people around them
with a happily-ever-after? Long life is something a young child can’t fathom,
and therefore it may not be a major factor in sequential decisions.
Let’s go back to baseball. Baseball, and many other professional sports,
use a farm system, where the major league club gets some of its players
from lower-level teams—often called the minor leagues. The minor leagues
are where players are honing their skills, training for the big time. They
grow, thrive, and eventually move up, if asked. While this personal growth
is happening, the players aren’t paid as much as they will be if/when they
become stars. So, making it to the majors is a major achievement. For the
major league teams, the process seems a lot like farming.
There’s a catch (ba-dum-tiss): major league teams have different financial
resources. Clubs located in big cities (e.g., New York City and Los Angeles)
have a lot of fans and thus a lot of revenue with which to incentivize star
players to join. Teams in smaller cities will often lose their star players in
what’s called free agency, a time when a player can sell their services to the
highest bidder.
A team in a small city usually builds its pool of talent by building a good
farm system, but they have to decide when to bring their up-and-coming
stars to the major league level. The choice: when to bring a young talent to
the big league. The outcomes: winning a championship and building for the
future.
Inject some human uncertainty into the problem. A team’s decision-maker
thinks “if we call up all of our best prospects and try to win today, some of
them could get hurt, our strategy will fail, and we will have lost the season in
the short and the long run.” Another could say “bring up our best prospects
today, train them up, and have them ready for the World Series.” A third
coach could say “bring the prospects in today, but don’t use them in the
championship.” Whatever the approach, there is a connection between the
choices today and the long-term success of the team.
Economist John Maynard Keynes once said, “In the long run, we’re all
dead.” He was saying that economic policies whose payback was expected in
the very long run wouldn’t be beneficial, because it was just too far in the
future to matter to everyday people. His statement was made in the context
of government policy during the Great Depression, but the idea is important
for all sequential games.
There is a distinction between repeated games, which could go on forever,
and sequential games, which we think of as having a last period. The
approach that we used for solving a sequential game assumed an endpoint.
Chess, checkers, Tic-Tac-Toe, and other sequential games end. You can say
that the end is far in the future, or even uncertain. Still, there is an end—and
its presence changes how you play the game.

Unpacking: A Stitch in Time


Granny’s advice tended to be short and sweet. It didn’t include a lot of
explanation and middle ground. Here’s a fun one: “a stitch in time saves
nine.”
Sewing is something that was far more common when Granny learned this
expression from her Granny. If you mend something small now, you won’t
have to mend something big later. A tear in your pants or shirt or dress tends
to get bigger over time if you don’t mend it.
There’s a game in there (Fig. 6.7):
Similar to the expression “never put off until tomorrow something you
can do today,” “a stitch in time” warns you that little problems usually get
bigger. When we try to put real life into formal games, we have to make a
few assumptions:

• The only reference to the future is the number “nine.” The rest we have to
infer.
• The only way to measure the magnitude of the downside is to compare the
number of stitches—one versus nine.
YOU (today)
 |-- FIX SMALL TEAR TODAY --> NO TEAR TO REPAIR TODAY
 |-- LEAVE IT ALONE --------> TEAR NEEDS 9 STITCHES
                              (or: TEAR NEEDS FEWER THAN 9 STITCHES)

Fig. 6.7 A stitch in time

• How long is “long?” The expression doesn’t give us a way to know how
long this process takes. Will the larger tear happen in hours, days, months,
or longer?

The quote is consistent with simple, sequential, game-theoretic advice. No
one knows the future, but we can connect potential future outcomes with
the actions we take today.
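Rollback makes the proverb's logic explicit. A minimal sketch with made-up costs, assuming each stitch costs one jolly (our assumption, not Granny's):

```python
# Rollback on the "stitch in time" tree: each choice today leads to a
# future repair cost, measured here in stitches (a stand-in for jollies lost).
choices = {
    "fix small tear today": 1,   # one stitch now, no tear left to repair
    "leave it alone": 9,         # the proverb's worst case: nine stitches later
}

# Rollback: pick the action whose total future cost is smallest.
best = min(choices, key=choices.get)
print(best)  # -> fix small tear today
```

Change the "9" to anything greater than one stitch and the advice holds; the number itself, as discussed below, is mostly there for the rhyme.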
By now, you’ve realized that a lot of Granny’s expressions are short and
memorable. The “stitch in time” expression leaves out a lot of detail so that it
can be easy to remember and easy to repeat. What’s the significance of “nine?”
It rhymes. That’s it. Memorable expressions often have a rhyme, cadence, or
are catchy in some other way. In that sense, these are usually best thought of
as mnemonics; they aren’t meant to be literal.
One more thing: we noted that the outcome of every game, even with
your future self, is jollies. Jollies are the happiness you get from a decision.
The “Stitch in Time” game assumes that having a much bigger tear in the
future is bad, and that it costs you a lot of jollies. If the person involved is
a fan of tears and distressed clothing, the advice doesn’t really apply. If the
person runs a sewing business, it might not apply to them either. Jollies, that
is, preferences, can change over time, making the other player (even your
future self ) harder to understand.
Moving a Big Ship/Building the Plane While Flying It

Granny might have been a pilot. When we make decisions that are parts of
sequential games, we often find ourselves immersed in a process with many
steps, many decision-makers, and a great many unknowns. Uncertainty—
outcomes that are based on probabilities—is not the focus of this book. In
that sense, we are separating game theory from statistics, since we want to
focus on game setup and patterns of choice.
In the real world, though, probabilities of choices weigh heavily in daily
decisions. Specifically, we make decisions at a point in time when we have
made past choices and will make future choices (what we call, well, “life”).
Real games never happen in a vacuum, so we need to be aware of the realities
of change over time.
Legally, organizations and their officers can be held liable for decisions that
lead to bad outcomes. Not all bad outcomes, though. The legal system has
determined that you can hold a firm accountable for things that could have
been avoided, or predictable bad outcomes.
Consider the literal application of “building the plane while flying it.” If
the people “flying it” understand the risks and are paid to take the risks (like
test pilots), and the incomplete plane crashes, the company involved prob-
ably hasn’t exposed itself to undue liability. If, on the other hand, you are
still perfecting the plane when you fill it with passengers, you’ve got a very
different situation.
Here’s where game theory and statistics shouldn’t (can’t) be separated.
When a player makes choices in a sequential game that he knows could lead
to very bad outcomes for others and “hopes for the best,” other players can
blame him (even sue him) for exposing them to high risk. Risk is just another
word for probability.
Legal systems, to varying degrees, don’t allow people to make unfettered
use of uncertainty when making decisions. When something bad happens,
you could sue the decision-maker if the uncertainty posed too high a risk.
Legally, the affected party has to show actual harm caused by the first player.
If an unbuilt plane’s cargo hold opens and millions of dollars in luggage and
goods are lost, that harm is easy to see (and quantify, financially).
Some situations aren’t so direct. Let’s take the expression “moving a big
ship” or “righting a big ship.” Companies use this to illustrate (to shareholders
and employees or other stakeholders) why change is taking such a long time.
The implication is that although small changes may be imperceptible, they
should add up to big changes in the long run. The big ship will course-correct
slowly, but effectively.
Because the future is so long run, and because the changes are unbelievably
small, it is hard to observe whether the decisions are right or wrong. Some of
them may actually be perceived as “the wrong thing to do” in the short run.
Because decisions are happening frequently and their results compound, the
process of “righting a big ship” is inherently uncertain and risky. In the case
of corporate development, it’s the reason stakeholders want to see a strategic
plan. The plan helps everyone see the final direction of the ship, removing
some uncertainty and anxiety around incremental decisions.
In the US, legal codes include regulations that limit a decision-maker’s
ability to cause harm with uncertainty. The obvious reason is to save lives
and prevent injuries. The less obvious one is that suing a company, or even
a person, for causing harm that they should have been able to avoid assumes
that the victims (or their survivors) will get monetary compensation for the
harm. Companies and people have an escape clause from paying damages:
they can declare bankruptcy. Even without that, the decision-maker that
caused the harm might not have enough money to pay for the damages they
caused. As with other rules, societies can attempt to control decision-making.
Without adequate enforcement, these rules aren’t always as effective as people
choosing to be responsible.

We’ll Cross That Bridge When We Come To It


Chess Grand Masters, the best of the best, are often thought of as some of
the smartest people in the world. There are a lot of ways a person can be
considered smart. In the case of chess, the perception is that a Grand Master
can process a large volume of information—all available decisions on a chess
board—at a single time and use that information to chart a path to a win.
Suppose you have a sequential game with ten or twenty or fifty decisions
in it. Achieving the rollback solution, winning the game, is unfathomable. I
do this, then you do that, then I do this, then you do that… many, many
times and with each player trying to out-think or sometimes fool the other
player.
Granny wasn’t especially good at chess, but she was aware of the daunting
task that decision-makers encounter in real-world situations. Complexity can
be demoralizing (or inspiring, depending on who you are).
When you told Granny about a complicated, multi-step, sequential
problem in which you were focusing too much on a decision that’s down
the road, she would say, “just cross that bridge when you come to it.” She’s
telling you to focus more on the immediate choices and their outcomes, the
short-run decisions. The hope is that payoffs of later games—and therefore
their correct choices—become clearer after each step.
In a complicated, chess-like game, you are seeing a path to your desired
outcome which has twenty steps or choices over time. One of those, step
seventeen, requires you to cross a rickety old bridge that might collapse when
you’re halfway across it. That fear of Choice 17 might stop you from making
Choice 1.
Here’s where Granny will really bake your noodle: focusing on a problem-
atic choice seventeen steps down the road is a mistake because you might
never have to make it. Somewhere in choices 1–16, the possibility of crossing
the rickety bridge could go away. You should start playing the game, make
your best choices, and deal with the bridge problem when you’re more certain
that you have to cross it, IF you have to cross it at all.
Suppose that, as you start seeing things more clearly, your desired outcome
takes you down a path that requires crossing that bridge, and you are certain
the bridge will collapse. Given that certainty, you know you won’t make it
to the end if you start down that path. This is when you follow Plan B: look
for the next best outcome and find your new path. Reaching the end of the
game alive matters more than chasing a path that ends in certain failure.
Granny had a term for this too. She said, “that’s a fool’s errand,” or maybe
something about snowflakes in hell. The decision-making problem involving
the rickety bridge comes down to your level of certainty about your ability to
ever make it down that path and come out on the other side.

Conclusion
Life is a sequential game; there…we’ve said it.
While there are many sequential games involving more than one player,
games like poker, chess, and Tic-Tac-Toe, you will play sequential games with
your future self more than you play with anyone else. The ability to think
ahead is a lot of what Granny was trying to get you to do for your own sake.
Somewhat ominous proverbs like “as the twig is bent, so grows the tree”
are warning us that actions today will change our future in ways that we
might not like. Sequential games are not simply the outcome of experience,
learning, or other actions. They are forks in the road; situations where one
choice changes future choices permanently. In a sequential game, each choice
eliminates some possible futures and adds others.
Staying in school and learning how to do math allows you to become an
accountant or an engineer. It doesn’t eliminate the possibility of becoming a
poet. Never learning how to do math (somewhere) does pretty much mean
you can’t be an accountant, an engineer, or a lot of other things. Game
theorists have attempted to formalize, mathematically, the Sequential Game
called Life.

Questions

6.1. Sequential games depend on time and order. In that sense, they are
different from repeated plays of simultaneous games. We’ve said that
rules (like who goes first) are imposed by players and can usually be
changed if it leads to better outcomes. Discuss how making a simulta-
neous game sequential is like making a commitment in a simultaneous
game.
6.2. Societal and cultural “norms” are often just rules or approaches to games
that were determined once upon a time and have become ingrained in
daily life. They are expectations, modus operandi, “accepted” ways of
doing things. Life is a sequential game. Discuss the “rule-writing” part
of a society as a sequential game. Can you show how some injustices
(unfavorable outcomes) or progress (favorable outcomes) are the result
of who got to have a say first?
6.3. Going first in Tic-Tac-Toe is considered an advantage because there are
nine possible places to put your X, and the first player will have a chance
to occupy five of the nine spaces. This would lead one to believe that
anytime the first player has more total choices than the second, there is
a first-mover advantage. If you always go first, are you always going to
win? Not necessarily… When is having the opportunity to make more
choices not an advantage?
6.4. When a sequential game is thought to have a first-mover advantage, we
sometimes randomize the choice of who gets to go first. For example,
in the game of (American) football, one team will start the game by
kicking the ball to the other team and the “receiving” team will therefore
have the first opportunity to score points. This switches in the second
half. The decision about which team will kick and which will receive is
determined by a coin flip where the winner of that random event gets
to decide if they will kick or receive.
Frequently, the winner of the coin flip decides to kick the ball and
defer the opportunity to receive the ball until the second half. Why
would that team give up the first-mover advantage of being able to score
first? Might there be a different first-mover advantage in the second half?
6.5. The rollback approach to decision-making in sequential games is an
important differentiator from simultaneous games. You still can’t guar-
antee your best possible outcome; the other player’s choice might
preclude it. In most cases, you can make choices that avoid the worst
possible outcomes when the sequential game is with your future self.
Sequential games allow someone to make a Maximin-type decision
without having to convert the game into jollies. Given that, could a
person make a Maximin-type decision in a simultaneous game that’s
played with her future self?

Appendix 6.1: The “Marxism Versus Capitalism” Game

Should an outcome be fair? Or large?
In game theory, we classify outcomes as balanced, where every player
has a relatively similar payoff, or imbalanced, where one player has a much
larger benefit than the other. Consider the game in Fig. 6.8 between two
populations in the same country, specified in jollies.

                          A
                   X                Y

         X   A: 2,    B: 1000   A: 1,    B: 1
    B
         Y   A: 5,    B: 5      A: 2000, B: 3

Fig. 6.8 Two populations, A&B


This game has players, choices, and payoffs, but no rules yet.
At a quick glance, the game has two Nash equilibria. If the game is played
simultaneously without cooperation and with each player only interested in
maximizing their own outcome: B gets 1000 and A gets 2, OR B gets 3 and
A gets 2000.
What can we infer from the payoff matrix? For starters, the country is
Capitalist. Outcomes in any quadrant are lacking a lot in terms of “fairness,”
or what some people call a balanced income distribution. There are very rich
people and very poor people. In addition, there is no way to solve this game
beyond identifying the two possible Nash equilibria.
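Those two equilibria can be checked mechanically by testing every cell for a profitable unilateral deviation. A minimal sketch, using the payoffs as we read them from Fig. 6.8 (A's payoff listed first in each cell; the dictionary layout is our own):

```python
# Payoffs for Fig. 6.8, keyed (A's choice, B's choice) -> (A's jollies, B's jollies)
payoffs = {
    ("X", "X"): (2, 1000), ("Y", "X"): (1, 1),
    ("X", "Y"): (5, 5),    ("Y", "Y"): (2000, 3),
}
moves = ["X", "Y"]

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if neither player gains by
    unilaterally switching their own choice."""
    a_pay, b_pay = payoffs[(a, b)]
    a_best = all(payoffs[(alt, b)][0] <= a_pay for alt in moves)
    b_best = all(payoffs[(a, alt)][1] <= b_pay for alt in moves)
    return a_best and b_best

equilibria = [(a, b) for a in moves for b in moves if is_nash(a, b)]
print(equilibria)                       # -> [('X', 'X'), ('Y', 'Y')]
for a, b in equilibria:
    print(payoffs[(a, b)])              # (2, 1000) then (2000, 3)
```

The check confirms the two outcomes named in the text: B gets 1000 while A gets 2, or A gets 2000 while B gets 3.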
Can we impose a rule to guide us to a single outcome? Well, we could let
somebody choose first, make the game sequential. Turns out, the game has a
first-mover advantage. The group in society that chooses first always ends up
better off.
Suppose the prevailing culture says whenever a game with multiple Nash
equilibria can’t be solved straightforwardly, both players revert to a Maximin
strategy. A and B should avoid the choice that could give them their worst
outcome. B would avoid X and A would avoid Y. The Maximin solution
would lead to the outcome B gets 5 and A gets 3. Let’s call this the Marxist
outcome. This is the second-best outcome for both players: it is fair and
balanced—and also worse than each player’s best outcome.
How could we structure the game, make rules, that would allow for
something that feels fair, but doesn’t give up so much of our potential payoff?
What if the game is sequential, but repeated? The first choice alternates
between the two players. This would allow each player to get their best
outcome every other play. Over the course of time, several iterations, that
would increase the average payoff considerably over the maximin result.
This should lead every society to take this approach, right? Maybe. The
win–win result depends on two critical things. First, the alternating first move
needs to be immutable and enforceable. Second, the amount of time between
plays can’t be too long. If the time between each round is, say, a hundred
years, the first-mover advantage will return. Human beings aren’t incentivized
to make decisions that make them worse off in the short run for a long run
they might never see.
We leave it to those interested in politics and economics to use this result
to explain why long-term income imbalance tends to be the norm in many
capitalist countries. The shorthand is, being “first” usually means controlling
the means of production (once upon a time, land), and no society has figured
out a way to “alternate” that advantage given the human lifespan.
7
I Solemnly Swear…: Promises,
Commitments, and Threats

Commitments, promises, and threats feature prominently throughout our
lives. If you think about it, these three are among the first game theory-related
experiences that people can remember. When we’re young children, we aren’t
aware that we are in a game. At some point, the usefulness of these and other
tactics becomes evident.
We make commitments, promises, or threats all the time not knowing
that these are actually manipulations of games called strategic moves: things
that one player does to change the other player’s choice, thus changing the
outcome of a game.
Game theorists have tried to differentiate these tactics, but the line is a
little blurry. When do we call something a promise versus a commitment?
Most of the time, a promise has an unstated threat built into it. At the same
time, built into any threat is an implicit promise.
It’s easy to get lost in the weeds in this area of game theory, and we don’t
want to miss the forest for the trees. So, let’s focus on the big picture, keep
our eyes on the prize, and filter out the noise. (Did we miss a saying?).
To understand promises, commitments, and threats, we’ll draw on what
we’ve learned about game structure in simultaneous and sequential games
and how we operate in each scenario. This chapter covers perhaps the most
human of everything in game theory: the ability of a player to change the
outcome of a game via communication. Why? Well, the efficacy of promises,
commitments, or threats comes down to very human things like reputation,
credibility, and who someone perceives a player really is.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_7
Commitments
We usually associate commitments with simultaneous games and promises
with sequential games. This often means a lot more to academics than it
does to normal people, so let’s break it down logically. In the context of
decision-making, a commitment is information we share before players
make choices. A commitment happens in what we call the “pre-game” of
a simultaneous game. It’s information, known to both parties, that changes
how players choose at the same time.
A commitment is made to push the other player to change their perceived
choice. If all goes well, the person making the commitment ends up better off
than they would have been without the commitment. Why does this matter?
A commitment needs to be credible to affect change. Think of it this way: if a
person makes a commitment to something that clearly makes them worse off,
you’d second guess them. Why would they do it? A commitment is credible
if and only if it changes the outcome of the game so the person making the
commitment gets more jollies. It should convince the other player to make a
choice that they would not otherwise have made.
Let’s look at a game involving two stores: Jane’s store, the larger of the two,
and John’s store. Both stores are trying to determine if they should cut their
prices to capture a larger share of the market. While getting more customers
is nice, they both would be better off if they both kept their prices (and thus
their profit margins) high. Here’s the payoff matrix for this game:
If you play the game in Fig. 7.1 simultaneously, without cooperation, and
with full information (everyone knows everything), Jane’s and John’s both cut
their prices.
The folks at John’s Store will see that if the owner of Jane’s Store cuts prices,
John’s Store will get either $3 M or $1 M. $3 M is better, so John’s owner
will cut prices. If Jane’s owner doesn’t cut prices, John’s owner will get either
$4 M or $5 M. $5 M is better, so the owner of John’s Store won’t cut prices.
The owner of John’s Store does not have a dominant strategy.
Jane’s owner sees that if John’s owner cuts prices, Jane’s Store will get either
$30 M or $25 M. $30 M is better so Jane’s owner cuts prices. If John’s owner
doesn’t cut prices, Jane’s owner will get either $54 M or $50 M. $54 M is
better so again, the folks at Jane’s store will cut prices. Jane’s owner has a
dominant strategy to cut prices.
Jane’s has a dominant strategy to cut prices. John’s does not. However,
with full information, the folks at John’s know that the owner of Jane’s will
cut their store’s prices, so John’s owner simply has to make the choice that’s
best in that case. John’s too will cut prices.
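The iterated reasoning above, Jane's dominant strategy followed by John's best response, can be sketched in a few lines. The payoff numbers come from Fig. 7.1; the dictionary layout and function names are our own:

```python
# Payoffs in $ millions for Fig. 7.1, keyed (Jane's choice, John's choice).
payoffs = {
    ("cut", "cut"): (30, 3), ("cut", "dont"): (54, 1),
    ("dont", "cut"): (25, 4), ("dont", "dont"): (50, 5),
}
moves = ["cut", "dont"]

def jane_dominant():
    """Return Jane's dominant strategy: the choice that beats her
    alternative against every possible choice John could make."""
    for mine in moves:
        other = [m for m in moves if m != mine][0]
        if all(payoffs[(mine, j)][0] > payoffs[(other, j)][0] for j in moves):
            return mine
    return None

jane = jane_dominant()
# With full information, John simply best-responds to Jane's dominant choice.
john = max(moves, key=lambda j: payoffs[(jane, j)][1])
print(jane, john, payoffs[(jane, john)])  # -> cut cut (30, 3)
```

The code lands where the prose does: both stores cut prices, even though both would prefer the $50 M/$5 M cell.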
                                JOHN’S STORE
                          CUT PRICES        DON’T CUT PRICES

             CUT          John’s: $3 M      John’s: $1 M
             PRICES       Jane’s: $30 M     Jane’s: $54 M
JANE’S STORE
             DON’T CUT    John’s: $4 M      John’s: $5 M
             PRICES       Jane’s: $25 M     Jane’s: $50 M

Fig. 7.1 A tale of two stores

As with other games, we see that this one ends up somewhere we could
call suboptimal: both players could be better off, but the equilibrium didn’t
land there. In previous games, we’ve fixed this by allowing somebody to go
first (making a game sequential). We could also employ a commitment.
Suppose, before the game is played, the owner of Jane’s makes a commit-
ment to not cut their prices. Maybe they call the owner of John’s and state it
outright, or maybe they imply it at a community meeting somehow. Given
the setup of this game, the owner of John’s store is effectively playing “follow
the leader,” so Jane’s owner’s commitment would make John’s owner switch
choices; they would not cut prices at John’s store. The game ends with neither
store cutting prices, and both players are better off.
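The value of the commitment can be read straight off the matrix. A sketch, again with Fig. 7.1's numbers, comparing the no-commitment equilibrium with John's best response once Jane credibly commits to not cutting:

```python
# Payoffs in $ millions, keyed (Jane's choice, John's choice), as in Fig. 7.1.
payoffs = {
    ("cut", "cut"): (30, 3), ("cut", "dont"): (54, 1),
    ("dont", "cut"): (25, 4), ("dont", "dont"): (50, 5),
}
moves = ["cut", "dont"]

# Without a commitment, the game lands at (cut, cut).
baseline = payoffs[("cut", "cut")]

# If Jane credibly commits to "dont", John best-responds within that row.
john = max(moves, key=lambda j: payoffs[("dont", j)][1])
committed = payoffs[("dont", john)]

print(baseline, committed)  # -> (30, 3) (50, 5): both stores gain
```

The comparison shows why the commitment is credible in the book's sense: it moves Jane from $30 M to $50 M, and it moves John from $3 M to $5 M along the way.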
Our first criterion for a commitment is met: the commitment in this case
is credible. So is the second: the player making the commitment is better
off because that strategic move convinced the other player to change their
choice. As we mentioned earlier, there are some important details that need
to be worked out. For example, there needs to be a way for the owner of
Jane’s Store to communicate this commitment in an effective way.
Students of business law are ready to jump up at this point. Legally, Jane’s
store can’t announce publicly that they won’t cut their prices. In the US, a
formal announcement of price setting is considered collusive or anti-trust
behavior. So, stores get around it by being more subtle: “we are holding the
line on prices.” This is an important part of the story, and it leads us to a third
criterion: a commitment must also be observable. The point of a commitment
is to change the behavior of the other player in a simultaneous game. There
has to be a way to communicate the commitment and observe the intent of
the committer.
So, Jane’s store commits to hold prices and John changes his choice.
What if Jane’s store… bluffs?
Imagine Jane’s commitment is credible, and the owner of John’s store
chooses not to cut prices. The folks at Jane’s could renege and cut their store’s
prices anyway. This could happen in a simultaneous game. Eek.
The outcome for Jane’s store is actually better in this case; indeed, they get
the best possible result from bluffing the commitment. In fact, using a bluff,
Jane’s store ends up with profits of $54 M—the best outcome in the payoff
matrix.
To quote the Bard: here’s the rub. We said both players have full informa-
tion, so this possibility—the possibility that the commitment is a bluff—is
known to the owners of John’s store. Furthermore, if this is a bluff and the
folks at John’s fall for it, they will get their worst possible outcome. So, the
owner of John’s store won’t believe a commitment unless they have some kind
of guarantee. The people at Jane’s store will need to do something more than
say that they won’t cut prices. As Granny said, “actions speak (a lot!) louder
than words.”
A commitment must be credible, observable, and irreversible. No take-backsies.
A commitment must be permanent once made. Problem: in a simultaneous game,
that permanence is impossible. Any way you slice it, there is no way for the
owner of Jane’s store to convince the owner of John’s store that this
commitment isn’t a bluff, because Jane’s can still make any choice when the
time comes.
One more detail should be mentioned here. Since commitments are made
before the game is played, either player can make a commitment. In this
game, the owner of John’s store isn’t likely to make a commitment because
they know that Jane’s has a dominant strategy (a choice that Jane’s should
make regardless of what the owner of John’s does). A commitment on John’s
owner’s part wouldn’t change Jane’s choice or the outcome of the game
because of this dominant strategy. There is no commitment John’s could
make.
The John’s and Jane’s game shows how a commitment might change the
outcome of a game if only one player has a dominant strategy. It also shows
how that player has a clear incentive to bluff. There are many other games in
which commitments lead to unambiguously better outcomes. Suppose you
had a game with no dominant strategies for either player and a mutually
beneficial Nash Equilibrium (where no player can make things better unilat-
erally). In this case, a commitment by either player would easily steer the
game to one of the equilibria, with no downside risk of bluffing.
7 I Solemnly Swear…: Promises, Commitments … 127

[box]

What Happens in Vegas, Stays in Vegas?


This catchy advertisement would seem to promote fun and merriment in a city
with fewer rules and regulations than most. One could also argue that it wants
people to feel less inhibited—and even forget their commitments. If you work
for a company that expressly forbids gambling as a condition of employment,
you certainly wouldn’t organize a Saturday poker game at your home with
everyone in your department. On the other hand, if you’re at a conference in
Vegas, what might the slogan suggest you do?

[end box]

Commitments with Threats


When is a commitment not a commitment?
College athletics’ transfer portal is, roughly speaking, a commitment combined with a
threat. I’ll “commit” to playing for your team (commitment). If I don’t like
how things went after any year, I’ll go play somewhere else (threat). Signing
with a college team used to be a commitment that lasted for your entire
college career. Now, with the transfer portal, the commitment is taken 1 year
at a time. In game theory, when a commitment is combined with a threat,
the threat wins: if they give you an out, a threat supersedes the commitment
every time.
A commitment works because it allows the person receiving the commit-
ment to believe that the other person’s choice is already made. The receiver
can then act accordingly with this new information. How does a threat
in a commitment differ from a bluff in a commitment? A threat shortens
the lifespan of the commitment. In the example above, what once was a
long-term, 4-year commitment is now 1 year at best. The choice has to be
re-made each year. Both the player and the college have lost the ability to
plan (reliably) for the long term, and the game structure has changed.
Does the Transfer Portal hurt school sports? Let’s look at the motivations of the
players (the players of the game, not just the athletes…): a coach’s goal is to
develop players. Development takes more than a year. A player’s goal might
be to get playing time quickly, regardless of adequate training, since their
commitment horizon is much shorter. The Transfer Portal misaligns each
player’s jollies and makes a cooperative solution at the end of each season
harder to reach.

Threats
On its own, a threat is also a tactic employed before the first round of a game.
Where commitments are made before a simultaneous game, threats work best
in a sequential game. Before a game begins, threats are made by the second
mover to change the first player’s choice: “If you choose x, I’ll choose y.”
Let’s look at a bigger example:
While this isn’t the most fun game in the book, let’s see how a threat can
help solve what we call the Mugger Game. Suppose you’re walking through
the park and a stranger walks up to you and demands your iPod Shuffle. Also,
it’s 2009. Let’s call you, owner of said Shuffle, the “muggee.” Perhaps to
improve his chance of escaping, the mugger says, “if you scream, I’ll hurt you.”
This is a threat intended to change your behavior. The mugger’s idea is simple:
you don’t want to get hurt, so he threatens to hurt you and takes the Shuffle.
The formal version of this game is shown in Fig. 7.2.
This game, by its nature, is sequential. The “muggee” will choose whether
to scream first, and then the mugger responds. And, yes, you can analyze
the game with a traditional payoff matrix instead of in tree form, the way
we usually show sequential games. The matrix actually has advantages in this
case.
In this game, neither player has a dominant strategy, and the game has two
Nash equilibria: scream and get hurt (upper left quadrant), or don’t scream
and don’t get hurt (bottom right quadrant). When we discussed games like

                            MUGGER
                     HURT            DON’T HURT

MUGGEE   SCREAM      Mugger: 30      Mugger: 10
                     Muggee: -6      Muggee: -4

         DON’T       Mugger: 20      Mugger: 50
         SCREAM      Muggee: -8      Muggee: -3

Fig. 7.2 I scream, you scream



this earlier, we said there usually needs to be a way to get to one of the
equilibria—and the mugger’s threat might be just the ticket. If the threat
works, the mugger—the threat issuer—and the muggee are better off! Like
the commitments that we talked about earlier, a threat is said to be credible
if it leads to an outcome that makes the player issuing the threat better off.
In this case, it’s a win–win.
Well, not really. Because… you know… felony theft.
We still have things to establish, like how you can know the payoffs of
the other player (described in the setup as a stranger) in a game like this.
How can you know that the threat is, in fact, credible? If all goes well, the
threat gets the game to the best of the Nash equilibria for both players. Not
coincidentally, this threat can be issued as a promise, “if you don’t scream, I
won’t hurt you,” with the same result. In most cases, a threat has an implicit
promise associated with it and a promise, as we’ll see later in this chapter, has
an implicit threat tied to it.
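If you’d rather not check every cell by hand, the two-equilibria claim can be verified mechanically. Here’s a minimal sketch in Python (ours, not the book’s; the move names are made up) that tests every cell of Fig. 7.2 for profitable unilateral deviations:

```python
# Payoffs from Fig. 7.2 as (muggee, mugger) for each (muggee_move, mugger_move).
payoffs = {
    ("scream", "hurt"): (-6, 30),
    ("scream", "dont_hurt"): (-4, 10),
    ("dont_scream", "hurt"): (-8, 20),
    ("dont_scream", "dont_hurt"): (-3, 50),
}
muggee_moves = ("scream", "dont_scream")
mugger_moves = ("hurt", "dont_hurt")

def is_nash(muggee_move, mugger_move):
    """True if neither player can do strictly better by deviating alone."""
    me, mu = payoffs[(muggee_move, mugger_move)]
    if any(payoffs[(alt, mugger_move)][0] > me for alt in muggee_moves):
        return False  # the muggee would deviate
    if any(payoffs[(muggee_move, alt)][1] > mu for alt in mugger_moves):
        return False  # the mugger would deviate
    return True

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('scream', 'hurt'), ('dont_scream', 'dont_hurt')]
```

The check recovers exactly the two cells named above: scream-and-get-hurt, and don’t-scream-and-don’t-get-hurt.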

Credibility and Leniency


There’s a lot more to issuing a threat than you might think. Human nature,
empathy, and what’s sometimes called a “lack of resolve” can all affect the
outcome. The first thing that must be true is that the threat changes the
actions of the other player. Second, the threatener needs to be better off if
the other player chooses differently. These tests of the effectiveness of a threat
are another form of credibility.
Many laws in our society are effectively threats. If you’re caught after
robbing a bank, you’re going to jail for 10 years. Losing 10 years of freedom
will certainly deter the bank-robbing tendencies of most of the population,
so the threatener—generally, society at large—is better off because the money
put in the bank will stay safe.
A bank robber might have a compelling story, though… a good reason
for undertaking the heist. If that story convinces the judge to be lenient, to
lessen the sentence to, say, 5 years, prospective bank robbers everywhere will
question the credibility of the threat. Hang on… credibility was established
when we saw that the threatener was better off…right?
Here’s where payoffs in jollies get very tricky. Before the game is played, the
threat of 10 years would, by our definition, be credible. However, if the robber
is caught and his sob story “tugs on the heart strings” of the judge who then
changes the punishment to 5 years, we see a problem with agency. The jollies
of society (an anonymous, amorphous concept) and the jollies of the threat’s
enforcer (a human judge with a family and a career putting criminals in jail)
were not aligned.
Every parent knows about this problem. An effective threat requires an
understanding of how carrying out the punishment will feel to the punisher. In
coparent households, for example, you never know who will mete out the
punishment in each case. As soon as a kid learns she can get ice cream from
Daddy even when Mommy says no, all bets are off.
[box]

If You Give an Inch, They’ll Take a Mile


Threats work when they are credible, unambiguous, and, to a large extent,
immutable. If a threat is really going to change the other player’s choice,
possibly leading to a better long-run outcome for both of you, Granny says:
“don’t give an inch.”

You tell junior to turn off the computer and do his homework or else you’ll
take away his computer time completely. He might well test the veracity of
that threat with a small pushback: “Can I just finish this one game?” or “Just
five more minutes, I’m almost done.” What Granny warned you was, “If you
give an inch, they’ll take a mile.” Not enforcing a threat even a little bit shows
the other player that you aren’t (entirely) credible.

Threats, like rules, have exceptions. The cautionary expression about inches
and miles is really about threats and credibility. Credibility isn’t always lost
all at once. Sometimes, it’s chipped away over time. Little Junior is surprised
when he gets five extra minutes the first time; but after a while, he won’t even
take off his headphones to hear your threat.

[end box]

Threats, Information, and Prejudice


Let’s do the Mugger game again (because it was so fun and non-polarizing the
first time), but this time, you’re prejudiced. You have a negative association
with people who wear jeans. Yes, jeans.
Suppose there are two types of muggers in the world: the earlier one, we’ll
call him Type A, and an eviler one, Type B. Their payoffs are in Fig. 7.3 and
7.4, respectively. You can see that the Type B mugger has a dominant strategy
to hurt you, so the threat attempting to get you not to scream isn’t really
credible; this mugger is going to hurt you either way. So, if you encounter a
Type B mugger, you should scream.
There is no way to know which type of mugger you’ll encounter. There
is no accurate screen. Statistically speaking, let’s say 99% of muggers are the
friendly, nice Type A.

                            MUGGER
                     HURT            DON’T HURT

MUGGEE   SCREAM      Mugger: 30      Mugger: 10
                     Muggee: -6      Muggee: -4

         DON’T       Mugger: 20      Mugger: 50
         SCREAM      Muggee: -8      Muggee: -3

Fig. 7.3 A “Type A” mugger

                            MUGGER
                     HURT            DON’T HURT

MUGGEE   SCREAM      Mugger: 30      Mugger: 10
                     Muggee: -6      Muggee: -4

         DON’T       Mugger: 60      Mugger: 50
         SCREAM      Muggee: -8      Muggee: -3

Fig. 7.4 A “Type B” mugger


Unfortunately for you, you believe that any mugger who wears blue jeans
is a Type B. To you, jeans signal somebody mean, cruel, and an overall worse
human being. Half of all muggers wear blue jeans. So, your prejudice will
make you scream half the time and lower your jollies, for no good reason.
In reality, there is no good screen for a Type B mugger. Since Type Bs
comprise only 1% of the mugger population, the numbers say you should
never scream, and you will almost never get hurt. Your prejudice leads you to
more negative jollies.
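The arithmetic behind that conclusion can be sketched in a few lines of Python (ours, not the book’s; we assume jeans are worn independently of mugger type, as the setup implies, and that each type best-responds to your move using Figs. 7.3 and 7.4):

```python
P_TYPE_A = 0.99  # 99% of muggers are the friendly, nice Type A

# Muggee payoff once the mugger best-responds to your move:
#   Type A hurts you if you scream (-6) but not if you stay quiet (-3).
#   Type B hurts you either way (-6 if you scream, -8 if you don't).
payoff = {
    ("A", "scream"): -6, ("A", "quiet"): -3,
    ("B", "scream"): -6, ("B", "quiet"): -8,
}

def expected_jollies(p_scream):
    """Expected muggee payoff if you scream with probability p_scream."""
    total = 0.0
    for mugger_type, p_type in (("A", P_TYPE_A), ("B", 1 - P_TYPE_A)):
        total += p_type * (p_scream * payoff[(mugger_type, "scream")]
                           + (1 - p_scream) * payoff[(mugger_type, "quiet")])
    return total

print(expected_jollies(0.0))  # never scream: about -3.05
print(expected_jollies(0.5))  # scream at every pair of jeans: about -4.53
```

Never screaming beats the jeans-based rule by a wide margin, which is the point: the prejudice screens on the wrong thing and costs you jollies.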
All this said, if you find yourself repeating this game, we should talk.

Be Careful What You Wish For


The credibility of a threat requires both sides to have a lot of information.
Consider the problem of a “high-skilled jerk.”
You’re the boss at a small technology company that has ten employees. All
of them are good at what they do, but their outcomes depend critically on
their ability to collaborate. The most skilled person in the group is smart, but
he’s just not a team player. Actually, it’s even worse than that: he’s a total jerk.
One day, high-functioning jerk walks into your office and says he wants
a big raise because he’s the smartest person at the firm. He says he’s gotten
an offer from another firm across the country that pays 20% more than he’s
currently making. “If I don’t get a 20% raise, I’m quitting and taking that
other job,” he threatens.
Threats can be bluffs too. The first part of credibility is easy to see: he
would indeed be better off if he got his 20% raise. So, you ask: does he
really have the offer? It turns out he does. He shows you an offer letter that’s
legitimate.
The part of the threat that he couldn’t know for sure is how you determine
the company’s jollies. He thought you prioritized smarts, intellectual labor,
and individual achievement. Those were his prized factors. However, your
company rewards an ability to collaborate and mentor a team. Your values
are not aligned, and you probably take some pleasure in using a really great
Granny expression: “don’t let the door hit you on your way out.”
Jerk mis-specified the game, and you called his bluff. In the parlance of
poker, he guessed wrong at your hand, at what determines your jollies. He
assumed you needed him more than you actually do, and that your interests
and value propositions were aligned. Now he’s picking up bubble wrap on
the way home.
Funnily enough, the night before he came to see you, your (now former)
employee’s Granny listened to his plan and said, “be careful what you ask for,
you might get it.” She knew that he should delve a little deeper before issuing
a threat he couldn’t take back.

Threats and Future Games


Remember how simply asking a question can change a game? Threats can
change games too. These implications are subtle and cautionary.
You’re in a bad mood one day at work and you blurt out “if I don’t get
my raise, I’m going to quit!” The boss calms you down and the day proceeds.
Does your boss think the same of you as before the outburst? Maybe. Or, maybe
there’s new uncertainty around your commitment to the company.
What if you say “if I don’t get that promotion, I’m going to sue?” Like
many such threats, your boss knows that employees sometimes sue when they
feel they were wronged. He wants to keep you, but he’s an officer of the firm.
Saying “I’m going to sue” changes your relationship with your boss—and
your company—forever. In some companies, once you threaten legal action,
your boss has to escalate the discussion to HR, legal, and other parties. Your
threat has irrevocably changed your game.
This isn’t always a bad thing. When you’re trying to rein in a spoiled
child, a threat can “get her attention” and make some future games work out
better. Here’s where credibility and the payoff matrix become intertwined.
You don’t like making your 3-year-old unhappy; that decreases your jollies.
So, threatening to take away her favorite toy for a day to get her to go to
bed on time seems like a non-credible threat; you just don’t want to do it.
But, bedtime (and all parenting) is a repeated game. The “payoff” comes on
future nights when her behavior is modified, and bedtime becomes less of a
problem. As parents, you may choose to live through your discomfort with
the punishment in the short run to raise a better child in the long run.

Promises, Promises
Most people think that the difference between a promise and a threat is just
the tone of your voice. It’s more like they’re two sides of the same coin.
Game theorists try to differentiate promises and threats more explicitly by
focusing on the person receiving the strategic move. Threats are trying to deter
a person from making a choice. Promises are trying to compel the person to
do something. The difference can be subtle.
You threaten your teenager by saying “if you don’t clean up your room, you
can’t go out with your friends today.” The promise would be “if you clean up
your room, you can go out with your friends today.” Tone of voice notwith-
standing, we see the mirror image aspect here. When you say “if you don’t
clean up your room you can’t go out with your friends,” you are implicitly
promising that your teenager can go out with their friends if they do clean up
their room.
Similarly, if you promise your 6-year-old, “if you eat your vegetables, I’ll
take you to get ice cream,” you are implicitly threatening “if you don’t eat
your vegetables, I won’t take you to get ice cream.” Aha, here’s a fun
difference: a promise allows the player to suggest future outcomes that the
player receiving the promise didn’t know about or didn’t think about. A
promise can give more information. A promise expands the potential jollies
of a game.
[box]

Strike While the Iron’s Hot


The advice in the expression “strike while the iron’s hot” is related to metal-
lurgy. Hot iron is much more malleable than cold iron (which… isn’t really
malleable at all). So, if you’re trying to change the shape of a piece of iron, as
any blacksmith would tell you (if you could find a blacksmith), you heat the
iron until it’s red hot, and then you strike it into shape with your hammer.

In game theory, threats and promises exert influence on decision makers. The
level of influence depends on the timing. Whether you’re using commitments,
promises, or threats, you have to use them before the other player has made
a choice. That doesn’t mean before the game is over; it means before the
other player has decided what they’re going to do. Even if we loosen that a
bit (the other player is “pretty sure” what they’re going to do), we can glean
some wisdom from the local blacksmith. It’ll be easier to influence the other
player’s choice when they haven’t thought much about it. Threats, promises,
and commitments are most effective while the player is malleable. Strike while the
iron is hot.

[end box]

Reputation Redux
Reputation becomes a big part of promises and threats in repeated games.
Like the parenting example earlier, if you show an ability to do something
that makes you worse off in the short run, you gain credibility and can get
better results in the long run.
Putting a sign on the door of your restaurant that says “Restroom is
for Customers Only—No Exceptions” might seem harsh. It deters non-
customers from asking. If the restaurant owner is, by nature, kind and
considerate, she doesn’t like saying “no” to anyone. The problem is, her
customers are paying for that restroom, and there’s no way of limiting non-
customer use to just a few select people. Signals and strategic moves combine
here in an interesting way. The commitment—the harsh sign—isn’t because
the restaurant owner is mean. It’s because she is too nice and needs the sign
to reinforce the policy.

Game Structure and Strategic Moves


Promises, threats, and commitments are made all the time. Like so many
aspects of game theory, they are often misused, overused, and misunder-
stood. Not all games are structured to allow these strategic moves to work.
Sometimes, a strategic move isn’t able to change the outcome of the game.
Sometimes, you can’t declare the commitment or make the promise or threat
openly in the first place.
When playing a sequential game with your future self, we sometimes say
we’re making a commitment to self-improvement (a gym membership, spin
class, giving up chocolate, etc.…). That sounds great, but by the nature
of sequential games, you’re really just making a repeated, sequential choice.
Whether you actually improve yourself will happen over time and depend on
many more choices. Making the commitment is step 1 in the right direction,
but it is not the choice of the game.
Similarly, if the situation that you’re dealing with is one period only and
a simultaneous game, promises and threats can’t work at all. Commitments
may help guide choices prior to the game being played, but the payoffs of a
promise or a threat have to be built into the matrix before it starts. Again,
the strategic move in this case is simply another choice.
[box]

Dr. Strangelove, Teddy Roosevelt, and the Biggest “To-Do” for a Viable Threat

Question: When is a threat never going to work?


Answer: When no one knows about it.
The point of a threat is to change the actions of the other player by
communicating your intention to do something they won’t like. Key word:
communicating. A threat never works if you keep it to yourself.

In the earlier-referenced classic movie Dr. Strangelove, the title character
(emphatically) tells the Russian ambassador that the whole point of their
Doomsday weapon (an earth-destroying nuclear bomb that will wipe out
everyone if Russia is ever attacked by a nuclear strike) is lost if they don’t
tell anyone about it. The goal was deterrence. Nobody was deterred because,
well, they didn’t know about it.

Similarly, President Teddy Roosevelt’s famous line “speak softly and carry a big
stick” might work in a jungle, or a zoo. He was trying to deter international
aggression by saying diplomacy should be backed up by military capabilities.
That said, you don’t want to speak too quietly lest your threat (aka, the stick)
not be recognized.

Successfully communicating your desire for the other player to change actions
is key. What’s less important is the use of specific, verbal communications.
Parents and economics professors alike will tell you that a stern glare will work
just as well as an actual threat.

All strategic moves—promises, threats, and commitments—need to be
communicated if they are to change the outcome of a game. Communicating a
commitment means you are telling the other player what your choice is before
a simultaneous game starts. Communicating a promise or threat means you
are telling the other player about some action you will take depending on their
choices in the second round of a sequential game. In all cases, communication
is the key.

[end box]

Bribes
Consider the game in Fig. 7.5.
If both players make their decision simultaneously, without cooperation,
acting in their own best interest, and knowing the entire payout matrix, they
will end up with Player 1 getting 11 and Player 2 getting 15. That’s because

                            PLAYER 1
                     A               B

PLAYER 2   A         Player 1: 11    Player 1: 9
                     Player 2: 15    Player 2: 21

           B         Player 1: 9     Player 1: 10
                     Player 2: 10    Player 2: 12

Fig. 7.5 Bribe time

Player 2 has a dominant strategy to choose A and, knowing that, Player 1
(who has no dominant strategy) would choose A.
Now, suppose Player 2 offers to pay Player 1 a bribe of 3 (a promise of
sorts) to choose B instead. Well, Player 2 is trying to get her highest possible
outcome, 21. That would require Player 1 to choose B instead of A. Would
the bribe change the outcome?
You can look at the numbers and tell yourself it would; if Player 1 did
choose B and got the bribe, they would get 12 instead of 11 and, assuming
Player 2 paid the bribe out of the 21, Player 2 would end up with 18 instead of
15. It worked, right?
Sorry, trick question. The bribe won’t change the game as it’s currently
structured. No player has an incentive (if they are entirely self-interested) to
follow through on the bribe. To see why, ask yourself when the bribe would
be paid. If it’s paid before the game is played, Player 1’s best strategy is to
stick with A and get a total of 14 (11 + 3). If the bribe is to be paid after
everyone chooses, why would a self-interested Player 2 ever deliver? Knowing
this (remember, full information!), Player 1 won’t choose B if the bribe will
be paid after the game is played. The bribe doesn’t help us solve the game.
As Granny would say, “if an offer seems too good to be true, it probably
is.” Granny often worried about what we call the “strings attached” to a gift.
In the case of the bribe, the challenge is in the implementation. It looks like it
could work, and feels like a bribe should work, but the structure of the game
makes it hard to get to a great outcome.

Conclusion
We make promises, threats, and commitments all the time. Little did we
know that these things are always game-theoretic in nature. Strategic moves
are attempts to influence another player in a situation where two or more
decision-makers will determine the outcome.
Commitments depend heavily on a lot of information being easily avail-
able all of the time. Commitments always operate by moving one player away
from a dominant strategy toward a choice that is better for both players.
Dominant strategies don’t change. The player making the commitment is
always better off making that choice. That’s because commitments work when
they convince the other player to change his choice simultaneously.
Among the interesting things that we discovered is that strategic moves,
especially threats, sometimes change all future games between those players.
A threat reveals information. It tells the other player “where your head is.”
So, in addition to the direct outcome of the threat, we usually have a change
in the information set for all games between these people—including future
interactions. Unlike in classroom game theory, commitments, promises, and
threats have real, long-term implications for those of us who go through life
interacting with others. Strategic moves can alter your reputation dramatically
in all walks of life.

Questions

7.1. Commitments and information go hand in hand in any real-world situ-
ation. A successful commitment includes observability, or monitoring.
Information can come from many sources. One of the most important
commitments people make is marriage. Explain how a big wedding—
one that includes hundreds of friends and family members—is actually
an example of a commitment with lots of information.
7.2. Breaking a promise doesn’t make much sense if the promise really
was credible. Credibility means that the player making the promise
should be better off. We hear about breaking promises all the time, so
there must be something missing from the analysis. One possibility is
uncertainty about the future outcomes.
Suppose you promise your child that you will take her to Disneyland
for her fourth birthday. She’s elated. You buy tickets, make arrangements
for the pets, and start a preemptive ibuprofen regimen, but when the
time comes, you have to break your promise because of a family emer-
gency. She’s devastated. When you try this again for her fifth birthday,
what should you do differently?
7.3. Death threats are interesting. Ignoring everything related to the after-
life, and assuming the threatener is making you choose between your
money or your life, how would a credible death threat dramatically
change future payoffs? What is the implicit promise behind this threat
that might change your choices?
7.4. Suppose a bully says, “give me all of your money or I’ll beat you up” and
you give him your money. On seeing this, someone says, “you have to
stand up for yourself.” Aside from the fact that they weren’t the one who
was going to be beaten up, how is that advice-giver seeing your situation
differently? (hint: think about sequential games.)
7.5. Based on what we’ve discussed about game structure, why can’t you
make promises and threats in simultaneous games under normal circum-
stances? What about the fulfillment of a promise makes it impossible?
8
Life Imitating Art: Repetition and Reputation

Some common expressions are literally advice on playing a game: “you’re
going to get a reputation,” “once a cheater, always a cheater,” and “never
say never” are commonplace. We say them almost without thinking of the
implication. Namely: life is a series of repeated games, and behaviors today
greatly affect your choices and payoffs tomorrow.
The frequency of a game is how often you play it, and this is an important
determinant of a game’s outcome. In daily life, you certainly prefer to know
if an interaction happens only once or is repeated. It helps to know how many
games your friends plan on bowling; you might pace yourself better. You might ask
your blind date if they’re staying local or moving away, since this will deter-
mine how often you interact. You might treat a 1-on-1 with your professor
differently if you know you have to take three of his courses in the coming
semesters.
Formally, we need to define a few things to see how and when frequency
matters in the setup of a game.
As you increase the number of times you repeat a game, your outcomes
can change. In fact, change is the basis of the Granny favorite “if at first you
don’t succeed…” She was telling you that outcomes might be different the
next time, so you shouldn’t give up. What exactly changes game-to-game over
time? In this book, we have specified most games as having full information,
which means both players know exactly what the possible choices and payoffs are.
We don’t allow for many unknowns, so there isn’t much more information
time could give you, right?
Wrong.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_8

First, repetition allows us—as humans—to develop things like trust. In
repeated games with the same individual, we incorporate these considerations
into choices and payoffs and develop our feelings about the other player:
their reputation. Based on reputation, gameplay can evolve. Players develop a
cadence about the game: a historical record of how the other person acts and
a new set of strategic actions to benefit both players.
Second, repetition allows players to find a path to cooperative outcomes—
a possibility that might never exist in single-play games. If a game is repeated
infinitely (what Game Theorists call “at the limit”), repetition can shift a
game from fully self-interested to fully cooperative decisions. In turn, this
cooperation can lead to higher payoffs for all parties.
Sure, it’s impossible to play a game to infinity (at least, for us mere
humans), but the time horizon doesn’t have to be too long to develop a repu-
tation and learn how to interact better with the other player. Let’s use a few
examples to show how core social structures, family values, and even religious
beliefs evolve as repeated games.

Prisoner’s Dilemma Redux


Can repetition help Butch and Sundance out of their Dilemma? We hope the
guys won’t be arrested an infinite number of times, so let’s play the Prisoner’s
Dilemma, as we described it in Chapter 2, just twice. Each robber finishes his
prison sentence, then they both rob another bank the day they are released
from jail. They’re caught and interrogated again. Butch and Sundance are…
well… tenacious, and really bad robbers.
Recall the basic setup: full information, two self-interested players, two
choices, and a simultaneous game. The payoff matrix is shown in Fig. 8.1.
We showed that both players have a dominant strategy to confess, and the
game ends with both players going to jail for 5 years.
To play the game twice, we need to add all possible combinations of
choices and punishments together. The easiest way to do this is in a deci-
sion tree. Note: a tree can make it look like a sequential game, where one
player goes first. It is not. It’s just the most straightforward way to show a
multi-play, simultaneous game.
If both players get the same prison sentence, they can meet again after they
are released, rob a bank again, and go to jail together again. Note 2: If they
are in jail for different amounts of time, we need to rethink the structure of the repetition. The easiest fix is to assume that the player who gets out first
8 Life Imitating Art: Repetition and Reputation 143

                                  BUTCH
                       CONFESS              DO NOT CONFESS

           CONFESS     Butch: 5 years       Butch: 10 years
                       Sundance: 5 years    Sundance: 1 year
SUNDANCE
           DO NOT      Butch: 1 year        Butch: 3 years
           CONFESS     Sundance: 10 years   Sundance: 3 years

Fig. 8.1 Butch and Sundance are recaptured

will wait for the other one before they play (rob a bank) again. (Butch and Sundance are besties.)
If both players are simply trying to get the shortest combined prison
sentence, each would confess every time the game is played. Confirm that
choice by working backwards through Fig. 8.2: look at what each player would do at the second choice, then trace that answer back to the first.
Once you have payoffs at the second and first choice, you can combine
them and put everything back in a new payoff matrix. Here’s what we mean:
Look at the endpoints of the tree. There are four cases in which Butch
confesses in the second game. When he does, Sundance has the following
potential payoffs:
Sundance:  C → 10    C → 15    C → 6     C → 8
           N → 15    N → 20    N → 11    N → 13

In each case, Sundance will get a lower combined sentence if he chooses to


confess. If Butch does not confess, Sundance’s possible outcomes are:
Sundance:  C → 6     C → 11    C → 2     C → 4
           N → 8     N → 13    N → 4     N → 6

Once again, Sundance has the incentive to confess in all cases. So,
Sundance has a dominant strategy to confess in the second round of the game.

Round 1: both confess (5/5)
    Round 2: both confess            → Butch: 10   Sundance: 10
    Round 2: only Butch confesses    → Butch: 6    Sundance: 15
    Round 2: only Sundance confesses → Butch: 15   Sundance: 6
    Round 2: neither confesses       → Butch: 8    Sundance: 8

Round 1: only Butch confesses (1/10)
    Round 2: both confess            → Butch: 6    Sundance: 15
    Round 2: only Butch confesses    → Butch: 2    Sundance: 20
    Round 2: only Sundance confesses → Butch: 11   Sundance: 11
    Round 2: neither confesses       → Butch: 4    Sundance: 13

Round 1: only Sundance confesses (10/1)
    Round 2: both confess            → Butch: 15   Sundance: 6
    Round 2: only Butch confesses    → Butch: 11   Sundance: 11
    Round 2: only Sundance confesses → Butch: 20   Sundance: 2
    Round 2: neither confesses       → Butch: 13   Sundance: 4

Round 1: neither confesses (3/3)
    Round 2: both confess            → Butch: 8    Sundance: 8
    Round 2: only Butch confesses    → Butch: 4    Sundance: 13
    Round 2: only Sundance confesses → Butch: 13   Sundance: 4
    Round 2: neither confesses       → Butch: 6    Sundance: 6

Fig. 8.2 … Here we go again: Butch and Sundance, sequential (combined sentences in years)

Similarly, there are four cases in which Sundance confesses in the second
game. When he does, Butch's possible payoffs are:

Butch:  C → 10    C → 6     C → 15    C → 8
        N → 15    N → 11    N → 20    N → 13

And, Butch's potential payoffs if Sundance does not confess:

Butch:  C → 6     C → 2     C → 11    C → 4
        N → 8     N → 4     N → 13    N → 6

In each case, Butch chooses to confess in the second round. He too has a
dominant strategy to confess.
Now, work backward. Since Butch and Sundance both have a dominant
strategy to confess in the second round, that limits the possible outcomes for
the first round. Of the sixteen possible outcomes in round two, only four of
them could happen.
After the second game, the only possible outcomes are:

Butch: 10 / Sundance: 10    Butch: 6 / Sundance: 15    Butch: 15 / Sundance: 6    Butch: 8 / Sundance: 8

Since they are added together, these four outcomes can become a new
payoff matrix for the two games (Fig. 8.3):
When we combine the outcomes from both periods, we see once again that both Sundance and Butch have a dominant strategy to confess. Knowing

                                  BUTCH
                       CONFESS              DO NOT CONFESS

           CONFESS     Butch: 10 years      Butch: 15 years
                       Sundance: 10 years   Sundance: 6 years
SUNDANCE
           DO NOT      Butch: 6 years       Butch: 8 years
           CONFESS     Sundance: 15 years   Sundance: 8 years

Fig. 8.3 Butch and Sundance, combined sentences



they will confess in the second round means they will also both confess in
the first round. Playing the game twice with self-interested players did not
change anything for Butch and Sundance.
We’re sorry to have put you through that, but here’s the thing: if the setup
doesn’t change in any way, playing the game five times or ten or fifteen times
wouldn’t change the choices or the outcomes. Why? The Last Game Matters:
each player always has an incentive to confess in the last period. Working
backward, that means they would confess in every period that comes before,
no matter how many games they play.
In game theory, we call this a last-period problem: self-interested players
are always self-interested in the last period, and that strategy feeds back to
earlier games.
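The last-period logic above can be checked by brute force. Here is a minimal sketch (our own construction, not from the book) that scores every pair of two-round plans using the one-round sentences from Fig. 8.1 and confirms that confessing in both rounds is each robber's best response to anything the other might do:

```python
from itertools import product

# One-round sentences in years, keyed by (my choice, their choice);
# the game is symmetric, so one table serves both players.
# "C" = confess, "N" = do not confess. Lower is better.
SENTENCE = {("C", "C"): 5, ("C", "N"): 1, ("N", "C"): 10, ("N", "N"): 3}

def total_years(my_plan, their_plan):
    """Combined sentence for a fixed two-round plan like ('C', 'N')."""
    return sum(SENTENCE[(mine, theirs)]
               for mine, theirs in zip(my_plan, their_plan))

def best_response(their_plan):
    """The two-round plan that minimizes my combined sentence."""
    return min(product("CN", repeat=2),
               key=lambda plan: total_years(plan, their_plan))

# Against every possible plan by the other robber, confess-confess wins:
for their_plan in product("CN", repeat=2):
    assert best_response(their_plan) == ("C", "C")

# So both confess twice, and each serves 5 + 5 = 10 years:
print(total_years(("C", "C"), ("C", "C")))  # 10
```

(The sketch treats strategies as fixed round-by-round plans rather than history-contingent rules; with a known last period, backward induction gives the same answer either way.)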

No End in Sight
We’re all comfortable with the Prisoner’s Dilemma at this point. However, if
the payoffs are based on years in jail, the game doesn’t lend itself to a sensible
discussion about repetition.
Let’s go back to our two stores, Jane’s Store and John’s Store. For this
new discussion, assume the stores are the same size. Recall, each store is
trying to decide if it should cut prices and capture more market share. Both
store owners know the payoff matrix (aka, complete information). Figure 8.4
illustrates each store’s profits in all possible outcomes.
As with the Prisoner’s Dilemma, if you play the game once, the game will
end up with both stores getting profits of $3 M when they both could have
gotten $5 M. Not ideal. Let’s confirm that and also add some details about
the game.
Setup: Jane’s and John’s choose simultaneously, without cooperation. Each
store wants to maximize profits.
The folks at John’s store see the game like this:

• If the people at Jane’s store cut their prices, profits at our store will be either
$3 M or $1 M. $3 M is better and that happens if we also cut prices.
• If Jane’s owner doesn’t cut prices, our profits will be either $10 M or $5 M.
$10 M is better, so again, we’ll cut prices.

Regardless of what Jane’s owner does, John’s owner has a dominant strategy
to cut prices.
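That two-case reasoning is easy to mechanize. A quick sketch (our code, not the book's), using the profits just described:

```python
# John's profit in $ millions, keyed by (John's choice, Jane's choice).
PROFIT = {("cut", "cut"): 3, ("cut", "dont"): 10,
          ("dont", "cut"): 1, ("dont", "dont"): 5}

# "Cut" is a dominant strategy if it beats "don't cut"
# no matter which choice Jane makes:
cut_is_dominant = all(PROFIT[("cut", jane)] > PROFIT[("dont", jane)]
                      for jane in ("cut", "dont"))
print(cut_is_dominant)  # True
```

Because the payoffs are symmetric, running the same check with Jane's numbers gives the same answer.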

                                     JOHN’S STORE
                          CUT PRICES            DON’T CUT PRICES

              CUT         John’s: $3 million    John’s: $1 million
              PRICES      Jane’s: $3 million    Jane’s: $10 million
JANE’S STORE
              DON’T CUT   John’s: $10 million   John’s: $5 million
              PRICES      Jane’s: $1 million    Jane’s: $5 million

Fig. 8.4 Jane’s & John’s play the odds

Spoiler alert: the owner of Jane’s store has the exact same numbers in the
payoff matrix (just remember to look up and down in the columns), so the
folks at Jane’s also have a dominant strategy to cut their prices.
If this game is played once, both stores cut prices and they both get profits
of $3 M. If neither of them cut prices, they could have $5 M each. How can
we get there?
Maybe they should arrange a meeting and agree not to cut prices? Unfor-
tunately, in many places, that’s illegal. It’s called price collusion, and you
can see why it’s banned: collusion usually keeps prices higher, and that hurts
consumers.
Given that reality (same as the Prisoner’s Dilemma), the result doesn’t
change if you play the game two, five, or ten times with this setup. Each
store should/would always cut prices in the last period; they have nothing to
lose. That choice “feeds back” to every previous round.
What if you never had a last period? That makes a huge difference.
Knowing there’s no last period means you don’t have a last-period problem.
The same mechanism that perpetuates the self-interested choice (cut prices),
also works for a cooperative outcome. If the incentive to cut prices is never
there (in a last period), the cooperative choice prevails—forever.
If John’s and Jane’s play forever, there are only two possible scenarios that
could occur every time they review prices: both stores cut forever or both
stores don’t cut forever. In repeated play, one player may deviate occasionally,
but look at the payoffs. With that kind of decline in profits after a single
game, neither player would make that noncooperative choice twice. There

may be momentary lapses, but in aggregate, each player would make the
cooperative choice every time.
True, we can’t play the game to infinity, but we can approximate it. We
can play the game a thousand times, and our outcomes would be in billions
of dollars. The payoff matrix in Fig. 8.5 is what we call an infinity payoff
matrix. It combines “forever” numbers and “one-time” numbers to illustrate
a repeated game. Because the only options that could occur every period are
both store owners cut prices (forever) or both store owners don’t cut (forever),
each store can approach the game with those numbers.
The stores can cut prices forever and get $3B or not cut prices forever and
get $5B. The game now has two Nash Equilibria; in the one where neither store ever cuts, each is better off by $2B. Each store’s owner sees that if they’re ever tempted to
be self-interested and cut their prices just to get one period of higher profits,
they risk losing $2B forever!
This aggregated, “infinity” payoff matrix shows how the short-run gains
from being self-interested are (far) less than the long-run loss. The key to
this result is imagining the game will be played forever and seeing any one-
time noncooperative behavior as a switch, of sorts. Multiplying the outcomes
by a thousand is a way of magnifying the results for either of the only two
outcomes that could happen forever. This is a useful abstraction.
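The arithmetic behind the infinity payoff matrix is easy to verify. A sketch (our construction): play the price game a thousand times and compare cooperating every period against grabbing the one-time $10M, after which the other store cuts too and both cut forever.

```python
# One-period profit in $ millions, keyed by (my choice, their choice).
PROFIT = {("dont", "dont"): 5, ("dont", "cut"): 1,
          ("cut", "dont"): 10, ("cut", "cut"): 3}
ROUNDS = 1_000

# Scenario 1: both stores hold prices every period.
cooperate_forever = ROUNDS * PROFIT[("dont", "dont")]

# Scenario 2: I cut once for the $10M grab; from then on we both cut.
defect_once = PROFIT[("cut", "dont")] + (ROUNDS - 1) * PROFIT[("cut", "cut")]

print(cooperate_forever)  # 5000 -> $5 billion
print(defect_once)        # 3007 -> about $3 billion
```

The $10M windfall never comes close to covering the roughly $2B given up by triggering cut-forever.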
As we said, “forever” doesn’t have to be forever. Dominant competitors
in the industry play this game quarterly, sometimes weekly—and any large
number of games can achieve this aggregated outcome. Here’s an interesting

                                     JOHN’S STORE
                          CUT PRICES            DON’T CUT PRICES

              CUT         John’s: $3 billion    John’s: $1 million
              PRICES      Jane’s: $3 billion    Jane’s: $10 million
JANE’S STORE
              DON’T CUT   John’s: $10 million   John’s: $5 billion
              PRICES      Jane’s: $1 million    Jane’s: $5 billion

Fig. 8.5 Jane’s and John’s forever



tidbit of human nature: if we have a game that is perceived as forever (like


this one), and one of the players falls on bad times and believes that this will
likely be their last period (they’re facing bankruptcy, for example), they will
cut prices or otherwise act in self-interest. This is the nature of decision-making when one
player feels that they have “nothing to lose.” This reality makes people revert
to self-interest and reinstates the “last-period” problem.
Similarly, if the payoff for being self-interested even once is a lot bigger,
say $2B, then both players will try to cut prices. The long-run cost of cutting prices and the one-time benefit of cutting prices would be the same.
So, if the short-run benefits of self-interest are really big, or if there’s a
change that creates a last period, noncooperation is likely for both players.
Fundamentally, cooperative behavior comes from the belief that the short-run
benefits of noncooperation are smaller than the long-run costs.
Repeated games have all kinds of tricks for getting people to see this long-
run versus short-run problem. One we referenced earlier is “tit-for-tat,” a sort of repeated-play version of the Golden Rule. Each player knows that the other
player will mimic his choice in the next period. If you don’t cut your prices,
neither will they; if you cut, they’ll cut too.
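Tit-for-tat is simple enough to simulate in a few lines. A sketch (ours, not the book's): each store starts cooperatively, then copies the other store's previous move.

```python
def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return "dont" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds):
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

# Two tit-for-tat stores never cut:
a, b = play(tit_for_tat, tit_for_tat, 10)
print(a == b == ["dont"] * 10)  # True

# Against a one-time defector, tit-for-tat answers one period later:
def defect_round_3(opponent_history):
    return "cut" if len(opponent_history) == 2 else tit_for_tat(opponent_history)

a, b = play(defect_round_3, tit_for_tat, 5)
print(b)  # ['dont', 'dont', 'dont', 'cut', 'dont']
```

Notice the echo: a single defection bounces back and forth between two tit-for-tat players, which is exactly the kind of problem that forgiveness, discussed later in this chapter, is meant to solve.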
[box]

Price Cooperation
Legally, stores can’t cooperate to fix prices. The setup of Jane’s and John’s is
realistic in that sense: companies need to choose independently and simultane-
ously. Still, companies the world over collude all the time, since there are other
ways to signal understanding of a game and its outcomes. Jane’s and John’s
can’t directly say “here’s where we’re setting prices,” but they can put signs in
store windows that read “Price Match Guarantee.”

This is a form of tit-for-tat known as “price cooperation.” It is not literally (or


legally) collusion. It is just a signal that tells your competitor something like
“don’t bother cutting your price; we’ll match it, and you won’t get any of our
customers anyway.” Fundamentally, getting to the forever cooperative solution
becomes a matter of signaling that you’re on board with that plan without
actually colluding.

This works really well unless, and until, there is a last period. If one of the stores
is in distress, perhaps facing bankruptcy or closure, they’ll give deep discounts,
knowing they won’t be around to play another day.

[end box]

What Goes Around Comes Around


If stores play a Prisoner’s Dilemma style game forever, they can reach the
cooperative outcome. Can price cooperation apply to individuals?
We invoke reputation frequently in this book because it is, fundamentally,
a game-theoretic concept. Once you have a reputation—good or bad—it
determines how people interact with you; how they play games with you.
Granny warned you at some point, when you were being too self-interested
or otherwise doing something that was potentially damaging to your repu-
tation, “what goes around comes around.” She’s reminding you that games
are played again and again. People talk; people say things about you. They
say you’re a “good egg” or a “bad seed.” Some of these expressions made
more sense, as far as cautions are concerned, when we lived our entire lives in a small town where everyone knew everything about us. Social media
and other modern forms of interaction have connected us in ways we never
thought possible—and re-upped the importance of things like reputation.

Cheaters Never Prosper


When you’re trying to convince a child that noncooperative behavior doesn’t
work out well in the long run, you can’t just say “cheating never pays off.”
In the short run, it can and often does. By the time she reaches an age when
lifetime-improving advice begins to resonate, she will have had personal expe-
riences in which a noncooperator (a bully, a teacher on a power trip, her
naggy younger brother) got the best outcome by cheating with no conse-
quences. Someone cut in front of the line; someone hogged the best toy in
the playroom; someone ate all of the cookies.
You didn’t tell her “cheaters never succeed.” You said: “cheaters never
prosper.”
In the John/Jane game, either of them could do better in one period by
cheating (cutting their price) but by doing so, they definitely don’t prosper.
The cheater loses $2B in the long run!
Prospering, by definition, happens over time. It’s not a one-time return,
like success. It represents repeated successes, or an individual who succeeds
more than he fails. Cheaters aren’t better off in the long run.
Remember to unpack the expressions you hear carefully. The saying isn’t that if any one person cheats, no one prospers. It is focused on the cheater and the cheater’s long-run outcome: Cheaters never prosper.

While this view might be more aspirational, the expression implies that
cheaters will be noticed and singled out in a world where most people do
not cheat. Being rude, being a bully, being a jerk: anything that marks you as an obvious noncooperator will make other players change their
behavior when they play games with you over time. Interactions with (games
played with) a cheater will have different choices and payoffs as time passes.

“Cheaters never prosper” is short and sweet and as these things go, a very good
expression. If an academic game theorist had come up with it, the expression
would have been, “noncooperative choices will evoke noncooperative choices
by other players in all future games and you will get a suboptimal outcome
over your lifetime.”

Thank God Granny got there first.

Forever Prisoner’s Dilemma Games and Social Structure
Mortality is a “last-period” problem. It’s the same problem that Jane’s or
John’s had when they were suddenly in financial distress: they reinstated a
last period, stopped making the cooperative choice, and started being self-
interested. The short-run gains of self-interest are higher than the long-run
costs if you take away forever.
That’s the case for people when they make life choices with a too-short
horizon in mind. If you live somewhere that’s crime-ridden or facing other
life-threatening realities, you often make choices that have short-run benefits
and long-run costs, simply because the long run may never get here. You live
in the constant reality of a last period. Put in game-theory terms: you assign a high probability to the short run occurring and a low probability to the long run occurring. Repeated theft of food from a restaurant might put the place out of business, and it’s risky in the long run. But a hungry person is hungry in the short run. For humans, the short run almost always takes priority.
Ultimately, the problem is time. A human being’s time horizon, how much
time they have to repeat a game, affects how they play. In repeated games,
cooperative behavior begets cooperative behavior, and returns are better for
everyone. So, how do we get people to think in terms of good and forever?
Family honor.
“Your good name.”
“Think of the Children…”

Many long-term perspectives help fix the problem that mortality creates in
decision-making.
One of the most direct tactics to solve the last-period problem: promise
people an afterlife.
Most religions have tenets that help society achieve the cooperative
outcome every day. With some exceptions, religions typically have three
things in common: an omniscient, supreme being; a set of do’s and don’ts;
and an afterlife. All three can be game-theoretic.
First: do’s and don’ts specify cooperative and noncooperative behavior (aka,
sins). This helps societies and communities set boundaries for certain types
of choices and better specify daily games.
Second: an omniscient observer means you don’t need a sign in the window
or worry about direct communication; God is always watching.
Third, the afterlife: well… it completely eliminates the last-period
problem. The afterlife removes the incentive to make a “bad” choice simply
because your time on earth is ending. Bonus points for creating an afterlife
that depends on you making good choices before you get there.
It’s worth noting that the afterlife isn’t always heaven and hell. Reincarna-
tion is another form of perpetuity that allows the spirit or soul to live forever.
In religions that believe in reincarnation, how you live one life (sometimes
called your Karma) will determine your reincarnated form in the next. It’s the
presence of an afterlife, and its parameters, that rein in the human instinct to
act in self-interest, preserving the cooperative solution indefinitely.
Have you noticed that most “do’s and don’ts” in organized religions are
rather rigid? These laws aren’t presented as ambiguous, nor optional, and they
don’t allow for a lot of exceptions. Why? Allowing people to find exceptions
would defeat the purpose. Take what Granny called a “little white lie.” A
white lie is a lie perceived as having few—if any—consequences. They’re
used when lying is perceived as better than telling the truth. Understanding
that there is a case to be made for white lies, they are still lies and they can
change the outcome of future games. So, rules were written to forbid lying
outright, regardless of the circumstances.
The social structure of religion—sins, a supreme being, an afterlife—is
meant to move people away from being entirely self-interested and toward more cooperation. The hope is that society’s outcomes are better in the long
run, and society prospers as it grows generation after generation. Religions
don’t exist only to solve games, but they do impact the quality and longevity
of the human experience.

Forgiveness and the Prodigal Son


Game theory and social and religious constructs share the study of a key
idea: forgiveness. In repeated games, players find that reverting to the nonco-
operative outcome forever makes them all worse off, and they try to avoid
that fate. Sometimes, that means allowing players to make bad choices—be
noncooperative—once in a while. Jane’s and John’s may occasionally stray
and cut prices, but they always see the error in outcomes and revert to the
cooperative solution next time. To err is human, after all.
Granny said, “give a little grace.” That’s the idea behind forgiveness. In
repeated games, though, frequency of forgiveness matters. The limits of
forgiveness depend on the specifics of the game and the particulars of the
indiscretion each player is forgiving.
In religion, why is forgiveness a major theme? After all, the rules are
rigid and set—why bend them? First: it’s not bending the rules; forgiveness comes after the game is played. It’s not a pre-approval to lie, steal, cheat, or
otherwise sin. Second: once a person believes he has broken so many rules
he can’t possibly reach heaven, society has a last-period problem. If someone
doesn’t think he can ever achieve a good outcome, he’ll revert back to self-
interest and do a lot of really bad things. Ultimately, religions are trying to
get people to act cooperatively most of the time, with built-in safeguards
to avoid the last-period problem. Forgiveness helps manage human error by
placing players back on the cooperative path, even after they fall off.
As game theory concepts go, forgiveness is rather fluid. Every player and
every repeated game (simultaneous or sequential) uses forgiveness differently.
Granny may have said, “forgive and forget.” President John F. Kennedy said,
“forgive, but remember their names.” The implication of the latter can be
seen in the “price” of playing games with noncooperative players. If their
behavior is truly a one-time indiscretion, your loss is small (in the Price Cut
game, smaller one-period profits). If the noncooperative behavior is repeated,
it will cost you and the other player much more, since you’ll both make the
noncooperative choice forever. Kennedy was on to something.
Parents are often tormented by concerns about the correct application of
forgiveness. The Biblical approach, as familial as it sounds, is more about the
religious family (a larger community) than a nuclear family. Future payoffs
are aggregated for the entire group, so rules and punishments are cut and dried.
In childrearing, however, the future payoffs are more ambiguous and related
to the “wiring” of the child. As parents know, there isn’t a concrete rule book
for setting a child up for a future of good choices.

In practice, we often combine forgiveness and punishment to maximum


effect. One acts to correct the behavior in the short run (punishment), and
one acts to change the game in the long run (forgiveness). The combination
helps the noncooperative player focus on both short- and long-run strategies,
potentially changing their behavior forever.

The Good Lord Works in Mysterious Ways


If there’s an occasional bad outcome, how do you keep cooperative players in
the fold? We get used to accepting the occasional bad outcome, figuring it won’t happen very often when the probabilities are low. Granny
might have had a well-known bumper sticker with a variation of “Stuff
Happens.” Every once in a while, you make the cooperative choice and
another player doesn’t, and you get a really bad outcome: your car crashes,
you lose an investment, you miss a flight. The rationale we often apply in
a religious structure is to say the outcome was part of a grand plan. The
supreme being operates in a way that we just don’t understand. Some say,
“stuff happens;” others say, “the Good Lord works in mysterious ways.”
Either expression is better than saying “you can’t trust anyone out there, so
always look after yourself, the end.”
Having a mentality of forgiveness or, at the very least, “forget-ness” when
you get the occasional bad outcome will keep you and the societal game on track toward the best long-run outcome. Like it or not, attributing a bad
outcome to a supreme being’s mysterious plan is a great way to explain a
game-theoretic world where it just doesn’t turn out right sometimes.

The Devil Made Me Do It


Among the more compelling characters in most religions is the devil. We
don’t depend only on the threat of eternal damnation to dissuade people from noncooperative choices. We also associate these choices with powerful, ugly, evil
images. We try, quite literally, to scare the hell out of you.
When you teach young children about the devil, you are trying to rein-
force good behavior in a person who is probably too young to comprehend
the afterlife. Why? To a child, the afterlife is too far away to encourage coop-
erative behavior. The devil? That’s a little more immediate. When it comes to
short-run motivation, the devil is to bad behavior what Santa Claus is to good
behavior. Not a short-term punishment, nor a long-term promise, the devil

is a personification that provides a medium-term guide toward cooperative


solutions.

[box]
Cleanliness is next to Godliness
A lot of what Granny was doing was modeling good behavior.

Having taught children about a divine being—the omnipotent, omniscient entity who controls our fate in the afterlife—we can apply the concept to almost anything.
Washing your hands and bathing don’t seem especially important to chil-
dren. They tend to focus more on getting dirty and picking up pretty much
everything that they see so that they can ask you what it is ten times.

When the parents are around, hand-washing and other forms of hygiene are
usually imposed. How do you get your kid to wash their hands consistently
when they are in kindergarten? Voila, “cleanliness is next to Godliness.”

[end box]

On Mortality
As we’ve seen, game theory gives us a new way to understand structures and
relationships that exist in the world around us. At the same time, it helps
to explain how the complicated interaction between biological and economic
realities has developed over time. One such case is the “last-period” human
problem: mortality.
Even with an afterlife, there are tangible things that happen as a person
approaches death and ultimately dies. For close friends and relatives, few
things are as challenging as watching a loved one going through this process.
Those two games—the stage of the game being played by the person who
is dying, and the ongoing game being played by the survivors—are very
different. The person who is dying is approaching the end of their partic-
ipation in all games (afterlife or not), but the others around him are
not.
New, complex games arise for the survivors that again, invoke things like
an intergenerational Golden Rule: do unto the dying as I want done to
me when I’m on my way out. It’s another heuristic that tells us to provide
comfort, support, and other personal things to the dying, so that we can be
treated the same when it is our time. Empathy is the emotional, biological
version of tit-for-tat.

How much of their own time and money do relatives spend on a relative’s end-of-life issues? How much of the dying person’s money does the
relative spend, if they know the remaining money will pass on to them,
the living? These are decisions that are not straightforward nor easy. If you
spend $1,000,000 of Dad’s retirement money on medical expenses that might
extend his life for a few weeks or months, what will Mom have left to live
on? This game even includes the problem of who should be making those
choices.
We have an expression and a lot of literary references for achieving immor-
tality the wrong way. Channeling classic works, we often hear references to
making “a deal with the devil.” The devil, in most versions of this story, offers
us a short-run success (ascending a throne, getting rich, relieving pain) in
exchange for our soul—forever in hell. These parables are old because the
problem is old. Immediate returns in exchange for something so far in the
future we can’t even imagine it… seems like an attractive choice. Convincing
people to forgo short-run glory because the cost will be eternal damnation is
difficult if they don’t have a concept of eternity.

Mortality, But Not People

Let’s switch the example to something much more palatable: the death of a
Fortune 500, multi-billion-dollar, international conglomerate. The company
is floundering, people are jumping ship, investors are panicking. What should
you, as an employee, do?
For a person, there’s a direct way to close up shop when things get too bad:
a DNR (do not resuscitate) order. It is a commitment a person makes well
before a life-or-death decision needs to be made. If the person is too far gone to save, a DNR says let them go; the choice is removed from all parties involved.
Unfortunately, there is no DNR for major corporations. There is no
commitment to let the company die with dignity. And because of that, the
game is often harder to solve.
You know that if you and everyone who believes the company is dying
decides to leave, the company will die faster. Before you play “stay or leave,”
ask yourself this: does staying matter? Put differently, does you staying or
leaving have an effect on whether the company survives?
This brings up an important part of the setup that we haven’t yet discussed.
Game theory is used when you and others make choices that together deter-
mine the outcome. Just the existence of other decision-makers does not
mean you are in a game-theoretic situation. If there are a lot of others
making choices, but no one of you makes an appreciable difference in the

outcome, then you aren’t in a game-theoretic situation. If your decisions—all of you—can’t effect change, you’re not playing a game.
Now, your choice is: how much time, money, energy, and emotion (as an employee) should you spend on the dying company? Loyalty may exist in
good times, or even challenging times, but finality changes how most people
play a game.

Repetition and Reputation


Any game in which reputation can be used is, by definition, a repetitive game.
Reputation is an established behavior that reinforces your resolve to make a
certain choice. Put another way, it is a screen others use to decide how you
might choose, and it is reinforced in each game. An easy way to incorpo-
rate reputation is to have a game that does not have dominant strategies and
requires other approaches to reach a solution.

Deadlines, Deadlines, Deadlines


We’ve unpacked a lot of expressions in this book so let’s unpack a single
word—deadline. We use this word so frequently that we don’t usually notice
that it combines dead—our last period—with line, something we draw;
something we impose.
The “last-period” problem isn’t always a problem. It can also be a chance.
Whatever you want to call it—a “line in the sand,” a “do-or-die moment”—a
deadline incentivizes players to do something they don’t want to do.
Imagine a repeated game in which one choice is best or correct, but you
don’t want to make it. It’s not going to feel great, even if it’s the best option. If
you specify the payoff matrix in jollies, the choice involves a negative number.
You could just do it, get it over with, and take the “loss” of jollies. The alternative is procrastinating, avoiding that choice. You may get a smaller negative number of jollies in the first game, but you’ll compound that negative every time you play. Procrastination means your negative jollies grow over time. Better to rip the band-aid off now.
A deadline can push you to do the right thing. You are better off making
the choice, submitting the document, closing the door, finishing the thing.
Human nature doesn’t add well. If you put things off indefinitely, the high
negative jollies tend to add up. A deadline stops the pattern, to your benefit.
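The arithmetic behind the band-aid can be sketched in a few lines of code. The numbers here are ours, purely illustrative, not from any payoff matrix in this book: suppose biting the bullet costs 10 jollies once, while putting it off costs 3 jollies in every period the task stays undone.

```python
# Illustrative jollies (our numbers, invented for this sketch).
DO_IT_NOW = -10   # one big, one-time loss
PUT_IT_OFF = -3   # a smaller loss, paid again in every repeated game

def total_jollies(periods: int, procrastinate: bool) -> int:
    """Total jollies after `periods` plays of the same repeated game."""
    if procrastinate:
        return PUT_IT_OFF * periods  # the small negative compounds
    return DO_IT_NOW                 # rip the band-aid off once

for n in (1, 3, 5, 10):
    print(n, total_jollies(n, procrastinate=False),
          total_jollies(n, procrastinate=True))
```

With these numbers, procrastinating looks better for the first three plays (-3, -6, -9 versus -10), but by the fourth play the avoided choice is the cheaper one, and the gap only widens. That crossover is the deadline's job.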
158 J. Prag and A. I. Prag

Sometimes, we impose a deadline on ourselves. It's a "soft" deadline, a suggestion, a goal. For many of us, a soft deadline can become more of a theoretical aspiration pretty quickly. Last periods must be real if they are going to change our behavior. To affect the outcome of a game, a last period has to be real and imminent.

Conclusion
Repetition does a lot more than teach you how to play games throughout
life. It can fundamentally change the outcome of games. Pushed to the limit,
playing forever, payoffs are now the collection of all plays—the “sum total” of
how you’ve lived your life. We’ve shown that some combinations simply can’t
happen every time in repeated play. Because of that, repeating some games
can change the outcome for the better over time.
“Forever” isn’t real for any mortal decision-maker. So, we find a way to
make it real. We are taught about an afterlife, family honor, and other ways
that the benefits of short-run, self-interested behavior are dwarfed by the long-run costs. The smaller the gap between short-run gains and long-run losses, the more likely people are to behave self-interestedly.
For all the expressions, social structures, and other ways that we try to get
people to ignore death, there are plenty of reminders to enjoy life. "Eat, drink
and be merry, for tomorrow we die,” “Gather ye rosebuds while ye may,” and
“There’s no time like the present” are all reminders that life is short and finite.
Balancing those two perspectives is left to the individual and the situation.
The occasional indulgence is sometimes considered life-affirming, an
acceptable choice of short-run jollies. Constant indulgence, on the other
hand, is one of the seven deadly sins—gluttony—which, together with its six colleagues (lust, envy, wrath, pride, greed, and sloth), rounds out a set of self-interested game-theoretic choices that we are taught to avoid.

Questions

8.1. Repeated games usually have a "last-period" problem. Arguably, it is basically the same as the problem you face when you play a one-period game with a total stranger. Explain.
8.2. Drinking (alcoholic beverages) and driving is problematic at best, deadly at worst. Unlike many drugs, though, alcohol is legal (in the US). Say somebody gets drunk, drives, and causes an accident. What went wrong? Assume they knew about the worst-case scenario before they started drinking and ultimately driving before they caused the accident.
8.3. Why do we often observe lawless behavior in war-torn countries? Or,
why do they always show a lot of looting in movies when an asteroid is
about to hit earth?
8.4. An interesting tweak on the theme of this chapter is repeated games
but with different players. Arguably, a company is chartered with the
intention of being in business forever. Theoretically, its officers should
make cooperative choices that allow it to thrive most of the time into
perpetuity. What’s wrong with this assessment?
8.5. Theologians have argued that hell, the place of eternal damnation, was
not originally part of the biblical story (the foundation of many reli-
gions). So, there was heaven—eternal life after death in a good place,
for the good actors. Otherwise, you just got “death,” game over, end
scene. In what way does the absence of hell make the game of life more
problematic?
9
If Everyone Jumped Off a Cliff:
Brinkmanship

When we talked about threats in Chapter 7, we discussed the importance of credibility. The most important determinant of credibility was whether the receiver could verify that carrying out the threat would make the threatener better off, and would then change their choice. The threatener also needs to follow through, which dramatically impacts reputation in repeated games. The perceived credibility of the threatener is a major factor in whether a threat is believable and effective.
Credibility problems can also be with the threats themselves: they can be
“too big.” If a parent says to a child, “if you don’t clean your room, you
can’t go to Disneyland,” the threat might be effective because it’s propor-
tional, manageable, and likely. If you say “if you don’t clean your room, we’re
moving,” it’s not so proportional, and really not likely. That threat won’t
change your kid’s choice because it’s just too big. You’re not moving. They
know it, you know it, and the room stays messy.
How do you analyze a game when the only available threat is "too big"—something seemingly disproportionate to the game being played? The situation
seems unusual, but it happens all the time. You don’t have to threaten to do
something catastrophic immediately, you might threaten to start a process
that could lead to a catastrophic outcome. You could set players up for a
repeated game of escalation.
Many decades ago, as game theory was finding its formal footing in
academia, there was trouble brewing between the US and (then) USSR. Polit-
ical strategizing and military mind games paved the way for a sub-discipline
we call brinkmanship.
(Note: it’s not brinkSmanship. No “s.” Don’t you dare put an “s” in the
middle.)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_9

Brinkmanship is a series of strategic moves intended to get what you want by taking a situation to the brink of catastrophe, where both players face unconscionably bad outcomes, and somehow avoiding this disastrous endgame. It involves a last-period problem, threats, promises, and reputations—a delightful mix of factors circling a common question: are we (the players) going to survive this game or not?
Brinkmanship always has one payoff that’s really bad. Like, catastrophically
bad—and it affects all players. Think nuclear war or a complete economic
shutdown. The existence of that potential payoff actually changes how players
play. All players know the game could end badly for everyone. They can see
it, and it’s plausible, so that is an effective and credible threat. Brinkmanship
is often associated with Cold War-era events, famously the Cuban Missile
Crisis. That international incident is generally considered to be the closest the
world has come to nuclear annihilation.
Game theory is fun!

The Cuban Missile Crisis: A Brief Synopsis


In the Fall of 1962, the US government discovered that the Soviet Union was
building nuclear missile launch sites in Cuba and arming them with missiles
capable of reaching many major US cities in about five minutes. Eek! The
geographic proximity gave the Soviet Union a very big first-mover advantage.
Politicians, military generals, and armchair academics in the US argued
that that advantage dramatically increased the likelihood that the USSR
would launch a preemptive attack on the US, hoping to eliminate the ability
of the US to retaliate. For this (and many other reasons), the US demanded
the USSR remove the missiles and dismantle the sites.
The Soviet Union refused.
Plot twist: the US also had missiles on the other side of the world capable
of inflicting the same damage on the Soviet Union. Double Eek! No wonder
they wouldn’t remove their missiles. Well, at least arms on both sides should
help neutralize the threat, right?
Not exactly. That each country had relatively comparable levels of “first-
mover advantage” actually made the catastrophic outcome of the game worse.
Instead of a first strike followed by world domination, the game tree included a first strike, a second strike, a third strike, and a series of choices that led to
total global destruction.
Triple eek.

US leaders had to find a way to get the sites in Cuba dismantled (to them,
the only acceptable way to neutralize the major threat) without provoking
the USSR to start World War III. Some decision-makers, mostly in the mili-
tary, advocated for an invasion of Cuba or bombing the sites. Political leaders,
including President John F. Kennedy, worried that any such provocative move
could start the very nuclear war that they were trying to stop. Armchair
academics wrote paper after paper.
The US chose to impose a naval blockade off the coast of Cuba in inter-
national waters to block further Soviet military equipment from reaching the
nuclear sites. While still risky, a blockade on neutral territory could limit the
potential for damage and loss of life. It is what’s characterized in the military
as a passive strategy. One player puts up barriers to control the other’s reac-
tion in a subgame. The other player (in this case, the USSR) would have to
make the active choice, putting all responsibility for nuclear war on them.
In decisions large and small, we often choose to abdicate to another player.
In some games, like poker, we check the bet to the next player as a way to get
information about their potential hand. If you’re taking your new girlfriend’s
parents out for dinner, you let her pick the restaurant. Delegated authority in
decision-making absolves you of any fault that comes out of the decision. If
her parents hate the food, you can always say you didn’t choose it.
This is a game employees and managers at firms across the world know
well. You and your manager know you’re not happy with your job. Both
of you have a choice: to leave/stay (for you), and to fire/keep you (for the
manager). The manager might keep you even if they want you out. Why?
If you choose to leave, it’s your decision. The manager gets what he wants
without the guilt (or cost) of firing you.
Ultimately, the blockade worked. The USSR agreed to remove the missiles
and dismantle sites in Cuba. In exchange, the US removed missile sites in
Turkey.
Whew!
Diplomacy helped secure a mutually beneficial outcome—but smooth-
talking politicians were not the reason for success. The passive blockade
subgame allowed diplomatic negotiations to commence in the first place.
When the game was simply “who launches first,” diplomacy wasn’t even an
option.
Brinkmanship is not an ideal situation; we wish we never had to discuss
a “nuclear” option in our daily interactions. Yet, it highlights an important
aspect of the decisions we make. Recall, we said we always have another
option in any game setup: we can choose not to play. Brinkmanship shows
us that we also have another option, in certain circumstances: we can change
the game. As Granny may have said, if she listened to conventional political wisdom: "if you don't like what's being said about you, change the conversation."
The Cuban Missile Crisis also revealed a fancy logical conundrum. Having
built up massive arsenals of nuclear weapons, would the leaders of either
country ever really “push the button” and annihilate millions of people? If
they weren’t willing to do it, why build the weapons in the first place? Why
spend on an arsenal if it wouldn't constitute a credible threat?
The Crisis has been studied in game theory for half a century, and most
academics agree: the arsenal itself wasn’t the threat. The building of the
arsenal was the game-theoretic move. It was the first action in a subgame that
could have taken the world down a really bad path. Brinkmanship doesn’t
start with the nuclear option (in this case, literally). It is a series of strategic
baby steps that iterate to a nuclear last period. The arsenal moves each player
toward the brink slowly. The last-period result is clear from the start, with
the hope a better outcome may be found by players along the way.
Brinkmanship stems from a seemingly… well… weird part of human
nature: apparently, we’re willing to entertain the possibility of annihilation
just to see if we get something better along the way. We don’t always do this,
though. Sometimes, we draw a line and agree to not cross it. When you were
being particularly whiny about dinner, Granny would say “don’t start with
me.” When we sign a contract for services, we’re limiting how much each
party can escalate, thereby protecting everyone. Humans have created ways
to manage subgames to ensure we don’t go over the cliff because of every
little thing. Maybe we don’t have a death wish as a species after all.

Layers of Players
At the heart of brinkmanship situations are “too-big threats” that have conse-
quences so bad that no responsible person would ever directly choose them.
Starting a nuclear war clearly applies. There’s another unique aspect of
brinkmanship games: each player can have a different perspective of the game.
Players can come from the highest level (world leaders, corporate execu-
tives, heads of labor unions, etc.) and the front lines (factory workers, military
personnel in the field, perhaps even everyday citizens). In the Cuban Missile
Crisis, decisions were made by political leaders with input from military
generals. The situation didn’t just impact those groups, though. History is
fraught with imagery of bare store shelves, families huddled together, and
school children required to do nuclear war drills (hiding under desks, as many of us remember). The threat impacted the preferences, choices, and payoffs of citizens on both sides, en masse. The Crisis didn't happen in a vacuum. Each decision-maker—on either side—was left to make choices that affected an entire civilization.
When we delegate authority willingly to a counterpart—somebody with
the same level of decision-making power to alter a game that we have—we
usually seek to absolve ourselves of any fault for the consequences. When
citizens delegate to a higher authority—such as elected or appointed officials—the assumption is that the delegate will make decisions in our best interest.
Most election campaigns feature a common message: “we will work for you.”
In practice, higher-level decision-makers will act differently, think differ-
ently, and react differently than the people they represent—the people on
the front line. Sometimes, we describe leaders as being more rational. In a
game-theoretic sense, this means their behavior should be more predictable
or follow a recognizable set of parameters. Is this correct?
The difference between leaders (corporate, family, or otherwise) and
followers (general population, employees, children, etc.) is the level of
exposure to the consequences of a choice. Leadership may be responsible for
making the decision, but it’s the followers who have to deal with the fallout.
Leaders act with longer-term outcomes in mind; followers (like soldiers)
have to deal with short-term outcomes on the front lines. What do we call
the long-term? The Big Picture: a concept that often has clearer, defined
outcomes. Yes, a leader operating with the Big Picture in mind should be
more straightforward, clear, and rational.

It’s a Slippery Slope


In a brinkmanship game, each choice brings players closer to or further from
“the brink.” In the case of the Cuban Missile Crisis, the decision was taken
out of the hands of the military, whose short-term goal was to neutralize an
immediate threat. The short-term gain of military action would have initiated
the long-term, catastrophic outcome. We may have won the battle, but would
have started World War III.
Granny’s phrases like “slippery slope” come from brinkmanship. As a situ-
ation unfolds, there’s a point beyond which things cannot be stopped. In a
strategic series of decisions, subgames, the players decide whether to take steps
toward or away from the nuclear option. Blockades and diplomacy were steps
in the right direction, bringing us “back from the brink.”
Brinkmanship can be used to examine other types of decisions. The key is finding a game where following through on a threat makes everyone a lot worse off, but making the threat still has value.
Headlines in 2023 warned that a strike by Danish unions could close
the entire country’s economy. How can this possibly be good for anyone
in Denmark? Because so much of Denmark is unionized (about 2/3 of its
workers), it is certainly a threat that could be effective if carried out. But,
if unions were to follow through, the entire country—from production to
commerce to schools to grocery stores—would shut down. Is this threat
credible?
Recall the very nature of negotiations. Two parties are creating some kind
of joint value and deciding how to divide the final product. A too-big threat
is tantamount to saying, “meet my demands, or the joint venture disap-
pears.” Just like the Cuban Missile Crisis, the game starts before the “nuclear
option”—the strike—is on the table. In fact, an effective negotiation tactic
involves a hint of a strike. Danish union leaders never said they were going
to strike: the newspapers did. And to the negotiation table we go.
When unions directly go on strike, like the Writers Guild of America
(WGA) did recently in the US, there is usually another strategy at play. The
strike lasted months, during which negotiations restarted multiple times and
ended with the WGA getting the majority of their demands. A strike galva-
nizes union membership toward a common goal, against a common enemy.
However, going over the brink has consequences in repeated games. In this
case, the “winning” jollies were considerable, but they may get smaller and
smaller in each game. Next year’s contract negotiations will look consider-
ably different. The world on the other side of World War III is irrevocably
changed.
[box]

No Payoffs? What Gives?


We haven’t included an actual payoff matrix or game tree in this chapter yet—
and we’re not going to. Brinkmanship is a subset of game theory (and if you’re
into math, we can point you to some great books on the topic), but it is not
easy to show graphically. Brinkmanship typically involves multiple players with
a series of choices. We don’t need to show numbers to prove there is a wrong—a
very wrong/bad/terrible—outcome, and players aren’t trying to get there.

Situations that involve brinkmanship are legitimate uses of game theory, but
there isn’t a useful purpose in simplifying these complex situations enough
to make them look like our earlier games. By now, the idea of game theory
should be well rooted; the lightbulb should be on. We can see that there are players, choices, and outcomes, and we can analyze a brinkmanship game without a formal structure.

Following through on the "too big" threat becomes possible in a multi-layered (different players), multi-period game if things go too far. Let's
recap the Cuban Missile Crisis and try to see the game without any formal
structure.

A tense situation between military superpowers could have escalated into the use of nuclear weapons. One decision, and players are, all of a sudden, too far down that slippery slope to stop the nuclear outcome. Both sides let the public situation edge closer to the bad outcome while more reasonable negotiations proceed.

Negotiations are where information (not all, but some) is exchanged. It’s where
players start to understand possible outcomes and the other side’s preferences.
What Granny called a “no win situation” is one where negotiations aren’t
beneficial, maybe not even possible.

World War III did not happen, but union strikes happen all the time.
Brinkmanship is always tricky, occasionally catastrophic, and not driven by
any one payoff matrix.

[end box]

Tiananmen Square, Waco, Texas, and Rationality


When brinkmanship does go wrong, we are tempted to say one or both sides
were irrational. According to what we’ve described, the failure to resolve a
situation like this leaves everyone worse off. We don’t just lose the product of
a joint venture; we lose everything.
Recall, though, that irrationality in game theory doesn’t mean the other
side is wrong; it means their jollies aren’t specified the way yours are. If a
brinkmanship game ends with the worst possible outcome, one side erro-
neously applied their jollies to the other side. It is a failure to consider other
ways of thinking.
In the case of Tiananmen Square, the protestors believed that world
opinion would dissuade their local government from using force to end a
protest. The government, unfortunately for the protestors, needed to make it
clear that they would never give up control to a comparatively small fraction
of an otherwise huge country. To do so would invite insurrection everywhere.

Preserving human life maximized jollies for one side, but not for the other.
Protesters standing in front of tanks—even with cameras watching—were
killed.
The WGA strike in July 2023 effectively shut down all California-based
production for movies, television, and online content. One side wanted
financial strength, the other wanted longevity. In this case, the financial
strength of many production companies, networks, and streaming services
was weakened. The pie that these unions want a bigger piece of was made
smaller. Although the conflict was resolved, and work resumed, the damage
to longevity was done and the entire industry, on which the WGA depends,
has shifted.
At the heart of most brinkmanship situations is a layer of decision-makers
who have different incentives, desires, or approaches to the ongoing problem.
In the Cuban missile crisis, the leaders of the US and the USSR wanted to
avoid nuclear war while getting a strategic advantage. The risk that both sides
saw when the Kennedy administration imposed a blockade on Cuba was that
a ship captain at the blockade would feel like his life was in danger, fire a shot,
and start a conflict. The presence of different players in a single game injects
a type of uncertainty that information alone may not resolve. Kennedy removed
this uncertainty by bringing the interests of both sides—the human interest
in staying alive—closer together.

The Straw that Broke the Camel’s Back


Brinkmanship situations are played out on a small scale all the time—we just
rarely call them that. Taking small steps toward a dramatic—usually bad—
conclusion is the idea behind the expression “the straw that broke the camel’s
back.” A single straw weighs nothing, but a critical mass of straws can add up
and exceed a camel’s capacity.
No one knows which straw will be the back breaker, but everyone knows
there will be one. We try to play these games carefully, fully aware that at some
point, you’re over the brink, and everyone loses. You can push an employee
to perform over and over, but eventually, burnout sneaks up and you’re hiring
again.
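A minimal sketch of the camel's predicament, with made-up numbers: each straw is individually trivial, and only the cumulative load matters.

```python
# Made-up numbers: a camel that can carry 100 units, straws of 0.5 units each.
CAPACITY = 100.0

def breaking_straw(straw_weight: float = 0.5) -> int:
    """Count straws added one at a time until the load exceeds capacity."""
    load = 0.0
    straws = 0
    while load <= CAPACITY:
        load += straw_weight   # each straw is harmless on its own
        straws += 1
    return straws              # the index of the back-breaking straw

print(breaking_straw())
```

No single straw is "the" problem; every player knows a breaking point exists, but not necessarily which straw triggers it, which is why these games are played so carefully.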
For some games, it’s not always bad.
Parenting has little brinkmanship moments. Your 3-year-old is acting out
(as they are prone to do), and you use a lot of patience, reasoning, and other
good parenting skills. Hopefully, it works. Once in a while, though, their
behavior doesn't change. Mama is the broken camel. (Granny says, "oh, now you've done it!"). Toys are taken away, voices are louder, the toddler is crying, and Mom is… well, she's not happy.
Did the payoff actually accomplish anything?
On one hand, your toddler, who is just now developing the skills of
strategic decision-making, knows that games can turn out very badly. Threats
of “I’ll take that toy away if…” might be more effective the next time. On
the other hand, you may have emotionally scarred your child. The “nuclear
option,” as parents put it, is almost never a risk we want to take; the downside
is just too severe.
We say “almost” because there are exceptions to every rule. Yelling damages
a child, yes. It’s probably not a great way to get her to clean up her room.
If she’s about to run into traffic, though, maybe yelling (and running and
scooping her up) is appropriate. In that case, emotional damage isn’t the worst
outcome possible in the game.

The Tipping Point

Like the straw that breaks the camel’s back, the tipping point is a point
beyond which a game changes dramatically and often for the worse. The situ-
ation and the imagery are different from the ones discussed in brinkmanship,
but the results are similar. Rather than a slippery slope where a situation slides
irrevocably toward a bad outcome, a tipping point is more like a switch or
a shift in weight that moves the situation from one side to another like a
playground see-saw.
Tipping points are described in heavy-duty mathematics and physics. A
process is stable for many possible data points, but if observations exceed a
certain level, the rules change (rules like gravity, inflation, what medication
will be effective, etc.). If we were to combine tipping points and brinkmanship, we'd be trying to figure out the point beyond which sliding down the
slippery slope cannot be stopped. During the Cuban Missile crisis, both sides
recognized that any large-scale military strike would be a tipping point, and
they sought to avoid it.
Tipping point is one of several game-theoretic expressions that have become overused and often misused. Not everything that ultimately goes to hell—
has a bad outcome—did so after passing a tipping point. One player making
the cooperative choice and the other making the self-interested choice in a
Prisoner’s Dilemma style game gives the cooperator a bad outcome. That’s just
the result. There’s no slow, iterative approach in play. It’s just what happened.
Furthermore, the mere presence of a temporal element—sequential games
or repetition of simultaneous games—doesn't mean a tipping point exists. In the Nancy Reagan game "Just Say No," trying drugs allows for the possibility
of becoming a drug addict, but there’s no single point beyond which you are
officially addicted. This point depends on a range of human physiological
factors. It’s why the strategy is recommended: we can’t say for sure where the
point-of-no-return is, so just don’t start.
The key aspect of a tipping point is the dynamics of the decision-making
process. Like all of our games and all of the results that we’ve described, this
tipping point outcome depends on the structure of the game.
Avoiding this problem, staying on the correct side of the tipping point, and
fixing this problem (getting back on the right side of the tipping point after
you’ve passed it) requires more than just an understanding of tipping point
dynamics. It requires an ability to monitor the situation as it’s happening.
There needs to be a way for decision-makers to see the point coming and
understand that reversing the outcome will require more than just taking out
the last straw. Once a camel’s back is broken, the fix is much harder.

Is This the Hill You Want to Die On?


This Granny saying, like many others, needs a bit of unpacking. At the indi-
vidual level, dying is the very bad ending; the nuclear war; the breakup; the
firing; the worst possible outcome. It is the too-big sacrifice for what you’re
getting in exchange. The hill, in this case, is the problem or the issue; it’s
what you’re playing the game for. Granny’s expression is trying to dissuade
you from playing by pointing out the irrelevance of the problem. She might
pair this with another Granny classic: “you’re making a mountain out of a
molehill.”
When you take the first steps down a path that has a slippery slope, perhaps
one you don’t know about or didn’t expect, you might have the opportunity
to ask “why does this situation have to cost me my life?” We want our social
structure, our laws, rules, and incentives, to be such that there aren’t a lot
of hills that need that level of sacrifice. Granny is asking you to consider
whether the game you’re about to play is worth playing, given the possible
bad outcome. Not so subtly, she’s suggesting you don’t even have to play. It’s
a maximin strategy where the way to avoid the worst payoff is to not play the
game.
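Granny's advice can be written as a tiny maximin calculation. The payoffs below are hypothetical, invented for illustration: each available choice is scored by its worst possible outcome, and the maximin player picks the choice whose worst case is least bad.

```python
# Hypothetical worst-case payoffs (in jollies) for each choice in a
# die-on-this-hill game. The numbers are ours, purely for illustration.
worst_case = {
    "fight for the hill": -100,  # the too-big sacrifice
    "compromise": -5,            # lose a little face
    "don't play": 0,             # walk away; nothing at risk
}

# Maximin: pick the choice with the least-bad worst outcome.
maximin_choice = max(worst_case, key=worst_case.get)
print(maximin_choice)  # → don't play
```

Unsurprisingly, when one outcome is catastrophic and walking away costs nothing, maximin says don't play the game.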
[box]

Based on a True Story



A Fine Arts graduate student is heading home with a $200,000 painting in her
folio. The curator of the museum said she could take it home for the weekend
to continue research she was doing on the piece. “Just be very careful with it
and bring it back on Monday,” she was told.

The student was exiting a subway car when the folio got caught in the auto-
matic sliding doors. Before she knew what to do, the train started moving. Still
trying to extricate the folio, she was flung into a support column in the station
which caused her grave bodily damage.

She sued the transit authority for millions of dollars. City lawyers simply said,
“You should have let go of the folio.” Her lawyers said, “Why was your subway
system designed in such a way that she had to make that difficult, split-second
decision?”

Much like a brinkmanship game, people are sometimes put into a situation
that quickly begins to spiral out of control. Correct choices are hard to make.
Lawsuits like this one are decided by juries who focus on whether the plaintiff
is reasonable or unreasonable (not right or wrong); they gauge the rationality
of the decision at hand. It’s easy for jurors to put themselves in the student’s
predicament and to say to themselves, “she shouldn’t have had to (almost) die
on that small of a hill.” The student won the lawsuit.

[end box]

Whistleblowing
You see something illegal—and possibly dangerous!—happening at your
company. When you report it to your boss, he tells you to mind your
own business. When you insist that this problem could hurt customers or
employees, he makes up a reason to fire you. The system, the structure of
many companies, allows that to happen.
But you were doing the right thing. Why did that have to be your hill?
In cases like this, we don’t want people to think that doing the right thing
is going to cost them their job. So, the US Federal Government (and many
states) decided it shouldn’t have to be. In 1989, the US government enacted
laws that protected whistleblowers from retaliatory punishment. Rules were
written to protect good actors from facing what was, to them, a too-big
threat. Society decided that doing the right thing shouldn’t be a punishable
offense.

If I Have to Stop This Car…


There they were: five kids under 10 years old in a 1965 station wagon, a world without smartphones or iPads, and a two-hour trip ahead. Mom's up front,
under-caffeinated and weary. Dad’s calculating the cost of gas and counting
miles. They both just want to reach the destination with as few delays as
possible.
Little Fred kicks the back of the driver’s seat. “Fred, don’t kick the seat.”
Sister Jane starts whining, “Fred stole my yo-yo!”
“Fred, give back the yo-yo.”
A little later:
“Mom, Julie won’t stop chewing gum in my face!”
“My crayon broke!”
“Bobby’s not sharing!”
The rearview mirror-monitored mediation continues to very little effect
until Mom says in a calm, clear tone, "If I have to stop this car…."
The consequences are unspecified, but does that matter? The older kids
understand well; they’ve seen this game before. They shush.
Precocious 2-year-old Julianna, on the other hand, says, “well… what IF
you stop the car, Mommy?” The others shush Julianna.
Brinkmanship works because the children believe something very bad
could happen. Perhaps they have experienced a car being turned around, or
toys being taken away, or the promise of ice cream disappearing.
Road trips highlight that the bad outcome doesn’t even need to be known
explicitly on a payoff matrix for the threat to be effective. Again, the bad
outcome isn’t the deterrent. Rather, it’s starting down the path that changes
behavior. Brinkmanship can be played effectively without ever harming a
soul. The threat just has to be credible enough to convince the players not to
go any further down the slope.

Hell and Brinkmanship


Let’s talk about hell again!
Recall that religion and a belief in the afterlife help bring us to a coopera-
tive solution. Hell—eternal damnation—is clearly a very big threat. Religions
struggle with the question of what kind of behavior warrants eternal damna-
tion. How bad is bad enough to end up in hell? If it’s seriously horrible stuff,
then hell won’t be a great deterrent from doing lesser bad stuff. Hell might
deter murder, but people still might lie.
9 If Everyone Jumped Off a Cliff: Brinkmanship 173

That’s a problem, theologically speaking.


You could try to convince people that sinning is a slippery slope. Lying
leads to mass murder. They might not see this the same way, though. So,
hundreds (or thousands, depending on the religion) of years ago, religions
made the too-big threat smaller: purgatory. In some religions, it’s the waysta-
tion for non-egregious sinners on the way to heaven. It’s a subgame. Hell
deters big bad behaviors, and purgatory deters the smaller ones. Society,
overall, is better off.

Brinkmanship and Credibility


A too-big threat is one that is so disproportionate to the action, it just doesn’t
make sense. There are other reasons, though, that a threat may not be
credible. When a harried parent threatens to burn the house down, it’s both
disproportional and unrealistic. If General George Washington had said to
the British during the Revolutionary War, “surrender now or we’ll fire a
nuclear missile at London!” the British (really, everyone in 1775) would have
been really confused.
Like any threat, a too-big threat needs to be credible to work. Kids know
Mom’s not going to burn down the house, and the British don’t even know
what nuclear technology is.
A more difficult issue is whether a threatened action has to have been done
at some point to be effective. It has been suggested that the US did not have
to drop the atomic bomb on Japan; it just had to show them that it existed
and was horrible. Similarly, does a labor union have to have, at least once,
gone out on strike? Will that help the employer believe that threat? Does the
parent have to, at least once, stop the car or raise her voice?
In all cases, the sad fact is, no one knows. A child could have heard another
parent yell at their kid and thus know it isn’t much fun. The union could
show the employer how much money they would lose if the workers went out
on strike. The US could have dropped the bomb on an uninhabited island
close enough for the Japanese to see it, rather than killing hundreds of thousands. Historians
have focused on this last strategy quite a bit, wondering why the US dropped
two nuclear bombs, three days apart, before Japan finally surrendered.
Conclusion
“Taking something to the brink” is a phrase that people use all the time.
Brinkmanship in a game-theoretic setting is most often associated with
complex situations featuring many people, or even groups of people (like
governments and executive boards). There are layers of decisions and a poten-
tially enormous, often catastrophic outcome. The point of brinkmanship is
leaders getting their “people” geared up for some titanic move.
While the discipline originated in studies of nuclear war, brinkmanship is
applicable to many less dramatic situations. Taking a situation to the brink,
what some people might remember as “playing chicken,” can have good
outcomes as well.
Brinkmanship is an area of game theory that explains the presence of a
too-big threat. As humans, we often wish humanity wouldn’t have to get to
a certain point. We don’t want nuclear war “on the table” in any way, shape,
or form. When a brinkmanship game fails, when the threatened action takes
place and we experience the enormous loss, we get the sense that there has
to be a better way. Your optimistic authors hope that some improvement
in humanity—better approaches to negotiations, more perspective, more
information, or a way to remember we are all in this together—could help.

Questions

9.1. Here’s a phrase you’ve probably heard: “cooler heads prevailed.” Short,
simple, and to the point, the phrase is saying that all of the decision-
makers were invested, but the outcome was determined by unemotional
players. While game theorists certainly appreciate that mentality, it
might not be entirely correct.
Return to our discussion of high-level leaders vs. frontline followers.
Why might somebody on the front lines not have a cool head? When
are high-level leaders in a better position to decide? When are frontline
followers?
9.2. Here’s another relevant expression: “defuse the situation.” Describe a
stick of dynamite or a firecracker from the lighting of the fuse to the
end. Does that sound like a brinkmanship situation? If so, why is the
only good solution to the situation defusing it?
9.3. You could flip the brinkmanship situation upside down. Suppose the
player with the long-term horizon is not the high-level leader, but the
frontline player. What does this mean for the decision-making capabili-
ties or the rationality of the high-level individual? (hint: think about the
horizon of an elected official.)
9.4. Egomania, megalomania, and narcissism are ways of describing a love
of oneself beyond all reason. Self-glorification is not irrational, but it
doesn’t make for a very effective leader. Why?
9.5. In studying brinkmanship, you will hear phrases like “chickening out,”
“blinking,” and “flinching.” In all cases, the player that backed down is
portrayed as weaker. Consider the game known as Chicken in which the
players drive cars toward each other, usually at great speed, and see who
swerves first. Aside from “strong,” what else would you call a player who
never, under any circumstances, swerved? (Answer: eventually, dead!)
Given that, what’s the point of creating an image of weakness for players
who occasionally “blink?”
10
All for One, and One for All:
Collective-Action Games

What if the “other player” isn’t one player, but a whole group? Game theory
principles still apply to situations where you are making choices in a game
you’re playing with an amorphous “everyone else.” Granny knew about this
problem. You said to her, “you have to let me go to the party, everyone will
be there!” She might have responded, “If everyone was jumping off a cliff,
would you jump too?”
Early in life, you are warned against mindlessly following the herd,
drinking the Kool-Aid, being the lemming that swan dives off the cliff with
the others (bringing your species to an end). Until now, we’ve focused on
more individual games, where self-interest and cooperation strategies yield
the best outcome for you. Group-based games are a bit different, because the
goal is to align interests between and within each group. This group-think
area of game theory is called collective-action games.
Game theory deals with situations where one player makes choices, another
player makes choices, and the two combine to determine an outcome.
Usually, that means there aren’t very many decision-makers. It may be easy
to understand the motivations of one or two co-decision-makers. It’s a little
harder with three or four. When there are many, many simultaneous decision-
makers, you really have no clue about each person’s preferences. In these
games, you could make choices without regard to everyone else, since there
isn’t a way to specify the game correctly anyway.
Or is there?
Collective-action games are ones where you explicitly consider the choices
of everyone else when making your decision. These games often show up
when the outcomes, the payoffs, are societal. It could be something as simple

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_10
as building and paying for a city park, a new highway, or a state-of-the-art
battleship. As individuals, we feel like our choice doesn’t matter. We are teeny,
insignificant, too small to matter. If we align with others and form a large
enough group, though, we might be able to change the outcome. Anyone
who’s ever worked on a group project really knows this is easier said than
done.

What Collective-Action is NOT: Perfect


Recall that two or more players making choices together determines an
outcome. The existence of another player is a requirement for a game. It
is not a requirement for strategic thinking. There are situations that include
multiple decision-makers at one time, but each can ignore the others’ prefer-
ences. It happens when there are so many participants that no one player can
change the outcome: one student on a campus of thousands, one voter in a
national election, one farm of hundreds growing crops nationwide.
Game theory is a subset of Economics too (Economics and Mathematics
go hand in hand, probably because they’re equally mind-numbing). When a
market has so many producers that no single one can affect the price of a
good, Economists call it perfect competition. In sunny Southern California,
there are at least 100 independent strawberry farms. The decision to increase
prices by any one of them is not strategic, it’s not a game. Why? The move
by one farm does nothing to change the payoff matrix of another. A price
increase by one farm simply means it will lose customers (in the classic version
of this economic model, it actually loses all of its customers). By definition,
perfect competition means no one producer can play a game.
Until these strawberry farms find a way to coordinate their pricing deci-
sions (recall, that’s legally problematic), they will have to accept that they have
no ability to change their individual farm’s price. This is an interesting case
where a cooperative decision in their pricing game would actually make them
all better off, but alas, price collusion is illegal.
There’s a similar concept in Finance (only slightly less mind-numbing than
Economics) known as efficient markets, where a single investor cannot make
excess profits in actively traded financial markets. Finance professors explain
market efficiency in many ways, but it’s actually pretty simple: in modern
markets, there are so many investors, no one entity is likely to “stumble” onto
some “magic” mis-priced stock. Everybody is looking for the pot of gold, so
nobody will find it. Granny used to say “you’ll never find a one-hundred
dollar bill on the sidewalk.” She knew someone else already found it and
picked it up!
Economics, Mathematics, and Finance use tightly controlled models to
predict human behavior. These disciplines are allowed to use words like “per-
fect” and “efficient,” because they control the parameters in mathematical
models. Game theory, as an exploration of human behavior, does not. We
have many examples of situations where an individual does, in fact, take the
choices of the group into account when they make their own decisions. Firms
do make profits, people make money in the market, and there is such a thing
as a premium brand.

Representative Government
In some cases, collective-action games are ones where people have found ways
to remove the irrelevance of single players. Put another way, they have figured
out how to row in the same direction to maximize effect. A representative
republic is one such system. In the US, we’ve been doing this for so long that
we don’t really think about what it means. In theory, one person represents
the interest of thousands (or millions), and decision-making gets a whole lot
simpler.
Creating this super-entity, the government, by choosing representatives to
make decisions for the group gives a small number of people a lot of power.
They are supposed to find a way (usually a very un-analytical way) to maxi-
mize the jollies of the people they represent. We give these people titles to
remind them they work on behalf of thousands of people, not just themselves.
[box]

Of the people, by the people, and for the people


Game theory reminds us that government representatives are also human
beings who make choices that are imperfect and sometimes overly individ-
ualistic. As such, there will always be a push-pull between the jollies of the
individual serving and the jollies of the group that is being served.

The democratic ideals that Abraham Lincoln invoked in his brilliant speech,
“The Gettysburg Address” are deeper and more subtle than we imagine. In
using the word “people” three times in short order, he was certainly high-
lighting the notion that we elect one of our own when we decide who will
govern us.
But nowhere in this ideal does he say that government is superhuman or
infallible. Government is composed of people who have bad hair days and
hangovers and egos and get their feelings hurt. Games that involve govern-
ments are very much games that involve people and people’s jollies. Lest you
think that is not true, remind yourself about the governments of Napoleon,
Hitler, Mao, and Stalin. Ego is a thing.

Put differently, elected officials might be empowered to represent their
constituents, but they also know that their time in office is short relative to
the life of the country. Being liked by the voters means pandering to desires
of the current population. That’s often done at the expense of future genera-
tions. After all, those future generations can’t vote yet. Short- and long-term
outcomes matter greatly to the motivations of elected officials.

[end box]

Follow the Leader Redux: Getting Lost


in the Crowd
In collective-action games, Granny calls the idea of noncooperative decision-
making “getting lost in the crowd.” When you don’t come to a complete stop
at a stop sign or park illegally while waiting to pick someone up, you know
you’ll probably escape the consequences because there are so many people
around. This behavior becomes more problematic if it escalates to a “mob
mentality,” when people loot stores, riot, and vandalize.
A group provides protection from negative jollies via anonymity.
Collective-action games are unique, in that they can motivate the individual
in two ways: “I am part of a group, I can make a difference;” or, “I am part
of a group, what difference could I possibly make?” Collective-action games
can be empowering or induce apathy.
Empowerment requires two things: power and coordination. Power comes
from numbers; the more individuals who align with a particular goal,
the better. Coordination means everyone is rowing in the same direction.
Collective-action games rarely stop there, however. With enough rallying,
group-think takes over. This mentality makes players believe their individual
interests are inextricably tied to the actions of the group. They get more jollies
when their group gets what it wants. We needn’t look far to see this in action:
people often feel better, get jollies, when the sports team they love wins. After
the trophy is awarded, fans will say “we won!” Actually, you did nothing
besides cheer at your TV. The athletes did all the work, but you’re part of
their “group,” you’re after the same goal, and you find jollies in their success.
The same mechanism is at work in political elections—for better or worse.
You can be politically apathetic (which, statistically, most people in the US
are), completely abdicating decision-making jollies to elected officials. Or,
you can feel empowered as part of a political party. Sometimes, as trends
change and information becomes more available, individuals find themselves
misaligned with the choices of the groups they consistently voted for. In
recent presidential elections, for example, many voters have stated they are
disenchanted with all available candidates and want something “new.”

Cartels
Let’s step away from the political grandstanding (get it?) for a moment. There
are plenty of business or economic examples of collective-action games too.
In Economics, we learn that a monopolist—a single dominant firm in an
industry—has the ability to change market output and thus market price.
Monopolists can alter supply by producing fewer units which leads to higher
market prices. In most circumstances, monopolies see higher profits than
firms in more competitive industries.
A cartel is a group of companies (or countries, in the case of OPEC)
that collude with each other to set prices for the market. They communi-
cate directly, so they don’t have to suffer lower prices on the open market.
For this reason, Economists sometimes call them a “group monopoly.”
Forget for a moment that price collusion is illegal in most places. Interna-
tional law is a little fuzzy on this topic.
Suppose there are five equal-sized firms, each producing one thousand
pencils a day. Current market demand says, together, the firms can sell five
thousand pencils at $5 each. At this point, in a competitive market, we say
this means $5 is the break-even price: no firm is making a profit. $5 covers
the cost of producing each pencil, and no firm makes excess money above the
cost of production.
To make excess profits, the industry needs to reduce output to (say) 4500
pencils in total, across all 5 firms. This would increase the market price to $12
per pencil, which represents higher revenues and lower costs (because they
are decreasing production). This approach would certainly maximize industry
profits. So, each firm agrees to produce 1/5 of 4500—or 900 units—and
evenly split the new industry profits.
Does the game end there? Do all firms make profits forever? Nope!
It turns out, while this level of production might represent the maximum
profits for the industry and the firms split profits evenly, it does not repre-
sent the maximum profits for each firm. Consider Firm #3. It would say:
“if I alone increase my output just a little—from 900 to 910—that wouldn’t
decrease the market price because I’m such a teeny part of the whole industry,
BUT, I can sell ten more units and get slightly higher profits.” Firm 3
increases production this “negligible” amount.
Then firms 4, 5, 1, and 2 follow. They all see the same thing. This is an
individual versus group problem. When all five firms produce “a little” more,
prices and profits fall for the whole group, and the entire cartel is worse off.
Every individual firm has incentives to do things that are good for them and
bad for the group.
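To make the temptation concrete, here is the pencil arithmetic as a short Python sketch (the numbers are the chapter’s; the assumption that ten extra pencils leave the price unchanged is the deviating firm’s own reasoning, not a market fact):

```python
# Chapter's numbers: $5 production cost per pencil (the competitive
# break-even price), and a colluded price of $12 at 4,500 total pencils.
cost = 5
cartel_price = 12

# Each of the five firms agrees to produce 900 pencils.
collusive_profit = (cartel_price - cost) * 900   # $6,300 per firm

# Firm 3 quietly produces 910 instead, assuming (as in the chapter)
# that ten extra pencils won't move the market price.
deviation_profit = (cartel_price - cost) * 910   # $6,370

# Once all five firms "cheat," output drifts back toward 5,000 and the
# price falls back toward the $5 break-even level: zero excess profit.
competitive_profit = (5 - cost) * 1000           # $0 per firm

print(collusive_profit, deviation_profit, competitive_profit)  # 6300 6370 0
```

The $70 gain from deviating is exactly the temptation that unravels the cartel once every firm reasons the same way.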
Most collective-action situations have this problem. Individual incentives
are not well-aligned with group outcomes. Notice: when these individual
incentives have negative group outcomes, even the individual is worse off
in the long run. The individual can make a choice that impacts the entire
group by starting a chain reaction. Cartels operate in repeated games.
With a completely different setup, we have effectively recreated the Pris-
oner’s Dilemma result. When individuals think too much about themselves
and choose in a self-interested way, the group—society—is worse off.

Labor Unions
Once upon a time, a remote mountain town’s largest employer was a coal
mine operator. Because it was large and the only shop in town, it could take
advantage of dominance in the local labor market to keep wages low. “If you
don’t take this job at what we’re offering, there’s a dozen other people who
will.” With no other choice, this approach kept many laborers poorer than
they had to be (and working in terrible conditions). The coal mine operator
was a monopoly, and it could set prices (for labor, meaning wages) wherever
it wanted.
Then one day, someone came along and said: “hey, join a labor union and
get better wages for all of you.” Basically, if all qualified mine workers band
together, they reclaim power to set wages. They can tell the mine operator
what to pay them, or else. They now have what’s called collective bargaining
power.
Forming and acting like a coordinated group changes power balances in a
game. Coordination games have different distributions of power and informa-
tion which can change the outcomes. Power shifts are not calculated, though;
we simply recognize that a great number of individuals can effect greater
change. Collective bargaining gives many unions the ability to demand more
than just a fair wage; they can threaten the ability of a company to survive.

Fair Representation
We are not going to talk about the headline-grabbing election issues that
have plagued the US and other countries. Instead, we want to note some
interesting, game-theoretic aspects of election procedures. To do this, we have
to go way back… to the origins of modern governments.
People owned land and farmed/grew everything they needed. Then, they
started to interact. I’ll trade you a chicken for a few ears of corn. Humans
interacting can cause problems sometimes, and they would deal with those
problems like human beings: killing each other, imprisoning people, acting
animalistically. Still, societies grew. Skip forward a millennium or so, and we
have fiat money: currency, dollar bills, and pieces of paper we use instead of
exchanging a chicken for wrapping paper. Things cost money. We also deal
with things a little better… call the police if somebody breaks into your store.
At this point in evolution, people start demanding things that are good
for society: better roads, safe places to gather, ways to make sure food is safe.
People elect representatives to make decisions for an entire group because,
well, the group is pretty large at this point. They elect a single official (or
team officials known as public servants) to do the research and come to a
beneficial outcome. People do this so they can focus on their own lives and
livelihoods, delegating authority to others to make informed decisions for the
greater good.
People expect these things—rules, parks, standards, whatever—to be avail-
able to everyone. So, you ask the people: “how much are you willing to pay
for these things?” Everyone says: “well, nothing.” Maybe taxes, but even those
have an acceptable limit. How much do we want to pay? Nothing.
In a large society, we observe what’s called a free-rider problem: individuals
want the benefits of government choices without paying for them. Problem is, if
no individual pays for it, there’s no money and there aren’t any government
programs.
We solve this problem by electing representatives who know that they
speak for many. We give them money, and they—having agreed to abide by
the choices and decisions of their electorate—decide where/when to spend
that money. Perfect! What could go wrong?
Well, choosing a “representative” representative isn’t that easy. If a place
(country, city, municipality… family unit) bases elections on majority rule,
then the winner really only represents the jollies of just over half of the
electorate. This applies to any referendum that requires a simple majority.
Can we get closer to a more representative coverage of the people? Ranking
systems are one alternative some countries prefer.
Suppose there are four candidates for mayor, and you are asked to rank
them one through four. Points are given in reverse order: the candidate ranked
first gets four points, the candidate ranked second gets three points and so on.
The selection process is certainly more representative, but it might result in a
leader no one group actually liked the best. Let’s see how that’s possible.
There are four candidates for mayor: Ed, Fred, Pat, and Wimbley. There
are also four types of preference rankings in the voting public, in equal
numbers. Preferences can come from many different things: skills and
capability, personality, charisma, business acumen, height—anything the
individual voter cares about.

(a) Preference Ranking #1: Ed, Fred, Pat, Wimbley
(b) Preference Ranking #2: Wimbley, Fred, Pat, Ed
(c) Preference Ranking #3: Pat, Fred, Wimbley, Ed
(d) Preference Ranking #4: Ed, Fred, Wimbley, Pat

You’re running the election. You assign each candidate a number—4 for
somebody listed first, 3 when somebody’s listed second, etc., and you add
up the numbers. The final results are:

Ed = 4 + 1 + 1 + 4 = 10
Fred = 3 + 3 + 3 + 3 = 12
Pat = 2 + 2 + 4 + 1 = 9
Wimbley = 1 + 4 + 2 + 2 = 9

Fred wins! The fact that no voting group actually liked Fred the most is irrel-
evant! Put differently, Fred doesn’t actually represent anyone the best. He’s
everyone’s second choice.
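The tally is easy to reproduce. Here is a short Python sketch of the point count (the names and rankings are the ones from the example above):

```python
# Borda-style tally: each ballot ranks the four candidates;
# first place earns 4 points, last place earns 1.
ballots = [
    ["Ed", "Fred", "Pat", "Wimbley"],      # preference ranking #1
    ["Wimbley", "Fred", "Pat", "Ed"],      # preference ranking #2
    ["Pat", "Fred", "Wimbley", "Ed"],      # preference ranking #3
    ["Ed", "Fred", "Wimbley", "Pat"],      # preference ranking #4
]

scores = {name: 0 for name in ballots[0]}
for ballot in ballots:
    for position, name in enumerate(ballot):
        scores[name] += 4 - position       # 4, 3, 2, 1 points

print(scores)   # {'Ed': 10, 'Fred': 12, 'Pat': 9, 'Wimbley': 9}
```

Fred tops the tally with 12 points despite receiving no first-place votes at all.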
What does this system give a society? Well, in ranked systems like this,
candidates learn that they should not reveal things about themselves that
would be problematic for any one voter. It encourages more middle-of-
the-road, less-polarizing policies that are digestible by more groups in the
population. In basic math, a run of consistently good rankings adds up to
more than a mix of extremes, where a very “low” score offsets a very “high” one.
Ranking systems have not made their way into the mainstream, as elec-
tions go. While they seem to be more “fair,” they assume that the difference
in magnitude between the candidates is straightforward—one point for each
difference in the rankings. Voters might not want to give any points to some
candidates, or they want to give all of their points to another. That’s been
tried as well, and it has the same potential flaws. Suppose each voting group
had a strong preference for a different candidate. They would each get four
points, and no one would win!
There’s also a problem with primary elections. Suppose the system says
Ed and Fred will face off in one primary and Pat and Wimbley in the
other, with the winners of each going head-to-head to determine the overall
winner. To make this interesting, suppose these are the voters’ preferences for
all possible head-to-head combinations (recalling that these preferences could
be based on anything):

(a) Ed is preferred over Fred
(b) Ed is preferred over Pat
(c) Wimbley is preferred over Ed
(d) Fred is preferred over Pat
(e) Fred is preferred over Wimbley
(f ) Wimbley is preferred over Pat

Ed will win his primary. So will Wimbley. Then, Wimbley will win the final
election. Fred could have beaten either Pat or Wimbley, but he couldn’t
(didn’t) win his primary. Elections with primaries are, in effect, two-step,
sequential games. Voters who are being strategic need to think about winning
the final election (the last period), not the primary. But, history has shown
us that’s easier said than done.
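A few lines of Python (a sketch built from the preferences above) make the primary problem easy to check:

```python
# Head-to-head preferences (a)-(f) from the example, as (winner, loser) pairs.
beats = {
    ("Ed", "Fred"), ("Ed", "Pat"), ("Wimbley", "Ed"),
    ("Fred", "Pat"), ("Fred", "Wimbley"), ("Wimbley", "Pat"),
}

def head_to_head(a, b):
    """Return whichever of the two candidates the voters prefer."""
    return a if (a, b) in beats else b

# Primaries as described: Ed vs. Fred, then Pat vs. Wimbley.
finalist_1 = head_to_head("Ed", "Fred")        # Ed
finalist_2 = head_to_head("Pat", "Wimbley")    # Wimbley
print(head_to_head(finalist_1, finalist_2))    # Wimbley takes the election

# Yet Fred would have beaten the eventual winner one-on-one:
print(head_to_head("Fred", "Wimbley"))         # Fred
```

Who makes it out of each primary depends entirely on the bracket, not on who would win the most head-to-head matchups.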

Agenda Setting
If someone is in charge of the order of head-to-head choices, that person can
set the agenda—the timing of choices—to get the final result that they prefer.
If Fred faced Pat in the primary, we might have had a different outcome.
Let’s do a homey example of this kind of gamesmanship.
Suppose the family—Mom, Dad, Timmy, and Betty—are going to watch
a movie together. They’ve narrowed the choices down to four possibilities:
Star Trip, Undersea Adventure, Cowboy Time, and Baseball Blues.
Mom is in charge of the final selection, and she knows two things: her
personal preference (Undersea Adventure), and the head-to-head preferences
for the family. The head-to-head preferences are:

(a) Star Trip is preferred over Cowboy Time
(b) Star Trip is preferred over Undersea Adventure
(c) Undersea Adventure is preferred over Baseball Blues
(d) Baseball Blues is preferred over Star Trip
(e) Undersea Adventure is preferred over Cowboy Time
(f ) Cowboy Time is preferred over Baseball Blues

Mom is going to ask the family to make pairwise choices which will elim-
inate one movie and bring the other on to the final round. Mom is really
competitive. She sees that the only thing that could beat Undersea Adventure
is Star Trip. So, she sets the first pairwise choice between Baseball Blues and
Star Trip. Once she’s done that, Undersea Adventure can beat Cowboy Time
and Baseball Blues. If she starts with any other pair, she won’t get to see her
movie (and Mom, like Granny, knows game theory).
Run pairwise eliminations on the combinations above to see how each
movie could “win.”
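That exercise can be automated. A small Python sketch (ours, using the movie names from the example) tries every possible agenda:

```python
from itertools import permutations

# Family head-to-head preferences (a)-(f), stored as (winner, loser) pairs.
beats = {
    ("Star Trip", "Cowboy Time"), ("Star Trip", "Undersea Adventure"),
    ("Undersea Adventure", "Baseball Blues"), ("Baseball Blues", "Star Trip"),
    ("Undersea Adventure", "Cowboy Time"), ("Cowboy Time", "Baseball Blues"),
}

def winner(a, b):
    """Return whichever movie the family prefers head-to-head."""
    return a if (a, b) in beats else b

def run_agenda(order):
    """Sequential pairwise elimination: the first two movies face off,
    the survivor meets the next movie, and so on."""
    champ = order[0]
    for challenger in order[1:]:
        champ = winner(champ, challenger)
    return champ

movies = ["Star Trip", "Undersea Adventure", "Cowboy Time", "Baseball Blues"]

# Mom's agenda: Baseball Blues vs. Star Trip first.
print(run_agenda(["Baseball Blues", "Star Trip", "Cowboy Time",
                  "Undersea Adventure"]))      # Undersea Adventure

# Try every possible agenda: all four movies show up as winners.
print({run_agenda(list(p)) for p in permutations(movies)})
```

Depending on the order Mom picks, any of the four movies can come out on top. That is the power of agenda setting.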

Threats Versus Persuasion


We’ve talked about a lot of things that we encounter in our day-to-day life
like rules, laws, and regulations that are similar to threats. In essence, they say
“do something the right way or else….”
Consider the ultimate collective-action game problem: getting people to
vote. How do we approach that?
Some countries have mandatory voting, and it’s usually based on a threat:
vote, or you don’t get your tax refund (or another form of government
benefit). Others take a gentler approach: peer pressure. They put advertise-
ments on TV and post signs everywhere; volunteers and others go door to
door. When they vote, they get a little sticker that says “I Voted.” It is
intended to be a badge of honor. Subtly, it says “I Voted…Did You?”
Why do some countries use persuasion as opposed to threats to get people
to vote?
Arguably, you want people to vote, full stop. You want them to want to
vote. You want them to care. From the long-run version of the Prisoner’s
Dilemma to many of the issues in collective-action games, our societies
are structured to try to get people to care about something other than
themselves—not because they have to, because it’s the right thing to do.
These threats and forms of persuasion are intended to shift cultures from
self-interested to cooperative outcomes.

The Median-Voter Model: Hot Dog Stands


on the Beach
Classroom game theory does have a few absolute classics. The Prisoner’s
Dilemma is certainly one. Another is Hot Dog Stands on the Beach. The
setup (which is absolutely crucial to the final result) goes like this:
There are two identical, but independent, hot dog stands on a one-mile
stretch of scenic, celebrated Newport Beach in sunny Southern California.
Beachgoers are equally distributed along the beach. The hot dogs’ quality,
Yelp rating, condiments, and temperature of the surrounding sand are all the
same, so, being human, people will always walk to the nearest stand to get a
hot dog. There aren’t any price differences, seating, umbrellas,
or other reasons any person would walk to a stand farther away.
After the beach closes at the end of each day, the stands are allowed to
move around. The beach is open every day for three months of summer. At the
end of the first week, where will the stands be along the strip of beach?
Instinctually, un-strategically, you might say the stands will space them-
selves out: something like Stand 1 will locate at the ¼ mile point and Stand
2 will locate at the ¾ mile point. Those locations are best for beachgoers,
because no one has to walk more than ¼ mile to get a hot dog. Figure 10.1
is a good starting point.
At the end of the first day of the week, the two stands will have split
the market equally and they will be making the same profits. But are they
individually maximizing their profits? Nope!
After the stands have both closed, the owner of Stand 1, Benny, starts
partying. The owner of Stand 2, Jenny, starts strategizing. Jenny thinks,
“given where Stand 1 is located, I can increase my market share and my profits
by moving my stand up the beach, more or less right next to Stand 1.” So,
during the night, she moves, as shown in Fig. 10.2.
Jenny’s Stand 2 is now getting ¾ of the market and making ¾ of the
profits. Even though he’s been partying, Benny can still see how to respond.
While Jenny is out celebrating her strategic success, Benny moves his stand
just past Jenny’s and captures most of the market on Day 3, in Fig. 10.3.
188 J. Prag and A. I. Prag

[Fig. 10.1 Hot dog stands on a beach, Day 1: Stand 1 at the ¼-mile marker, Stand 2 at the ¾-mile marker; each stand captures half the market]

[Fig. 10.2 Hot dog stands on a beach, Day 2: Stand 2 moves next door to Stand 1 at the ¼-mile marker; Stand 1 captures ¼ of the market, Stand 2 captures ¾]

[Fig. 10.3 Hot dog stands on a beach, Day 3: Stand 1 moves to the other side of Stand 2 and now captures most of the market]


10 All for One, and One for All: Collective-Action Games 189

[Fig. 10.4 Hot dog stands on a beach, final result: Stand 1 and Stand 2 both at the ½-mile marker; each captures half the market]

We could do this process for a couple more nights, but we can also jump
ahead and ask: is there anywhere they could be located where they would not
want to move again? (What do we call this? That’s right: a Nash Equilibrium!
Whoo hoo!)
The Nash would be right in the middle of the beach. Benny and Jenny
both locate at the ½-mile marker. Once again, they split the market and have
equal profits—and they do not have an incentive to move (Fig. 10.4).
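For readers who like to tinker, the nightly leapfrogging can be sketched as a short best-response simulation. The code, the 0.01-mile step size, and the move rule are our own illustration of the classic game, not part of it:

```python
# Best-response dynamics for two hot dog stands on a one-mile beach.
# Beachgoers are spread evenly on [0, 1] and walk to the nearest stand,
# so a stand's market share is the stretch of beach closer to it.

STEP = 0.01  # how far a stand moves past its rival in one night

def share(me, rival):
    """Fraction of the beach closer to `me` than to `rival`."""
    if me == rival:
        return 0.5  # co-located stands split the market
    midpoint = (me + rival) / 2
    return midpoint if me < rival else 1 - midpoint

def best_response(rival):
    """Pick the best of: just left of rival, just right, or co-locate."""
    options = [max(rival - STEP, 0), min(rival + STEP, 1), rival]
    return max(options, key=lambda pos: share(pos, rival))

benny, jenny = 0.25, 0.75  # Day 1: the beachgoer-friendly locations
for night in range(200):
    jenny = best_response(benny)  # Jenny moves first...
    benny = best_response(jenny)  # ...then Benny responds

print(round(benny, 2), round(jenny, 2))  # both end at the half-mile mark
```

Run it and both stands settle at 0.5: neither can gain by moving, which is exactly the Nash Equilibrium in the story.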
Hang on… When we started the game, Jenny and Benny equally split the
market and profits. At the end, they also equally split the market and profits.
What’s the difference? Well, beachgoers now, on average, have to walk farther
to get a hot dog. When the stands were at the ¼ mile location and the ¾
mile location, the most anyone had to walk for a hot dog was ¼ mile. In the
final version, some customers have to walk ½ mile. That’s unfortunate and to
many readers, it probably seems unfair. Game theory doesn’t pass judgment.
Given the setup of the game, the final outcome might not be fair for those
not playing. We set up the game initially in the interest of beachgoers, but…
they’re not the decision-makers. They delegated hot dog-making authority,
but the hot dog stands played their own game.
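You can even put a number on how much worse off the beachgoers are. A quick back-of-the-envelope check (our own sketch, not part of the original game) averages the walk to the nearest stand over the whole beach:

```python
# Average distance a beachgoer walks to the nearest stand, approximated
# by sampling many evenly spaced positions along the one-mile beach.
def avg_walk(stands, n=100_000):
    total = sum(min(abs(i / n - s) for s in stands) for i in range(n + 1))
    return total / (n + 1)

print(round(avg_walk([0.25, 0.75]), 3))  # 0.125 -> an eighth of a mile
print(round(avg_walk([0.5, 0.5]), 3))    # 0.25  -> a quarter of a mile
```

The average walk doubles once both stands sit in the middle, even though the stands themselves end up no better or worse off.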
Maybe we can do better, though. Perhaps there’s a surf break or restroom
beachgoers congregate around, so they’re not evenly distributed along the
beach…. Actually, the hot dog stand result can be “fixed” with something:
a regulation. The government can pass a law that requires the stands stay at
the ¼-mile and ¾-mile markers; the stands are no worse off and customers
are better off. If you think this type of regulation is far-fetched, ask
yourself why restrooms are evenly spaced at most beaches. If you think the
whole game is far-fetched, ask yourself why food courts exist in most malls.
Auntie Anne’s and Cinnabon are usually right next to each other.

The hot dog stand problem is a game theory classic, and it explains a lot
of things. Countries around the world elect their leaders in many ways. The
US, like many, popularly elects its President. In most years, that comes down
to a vote between a nominee of the Democratic Party and a nominee of the
Republican Party. Historically, when people compare the political platforms
of these candidates, they find a lot more similarities than differences. The
candidates have, like the hot dog stands, tried to locate in the middle.
This is sometimes called the median-voter model.
The median voter is the voter who falls, statistically, in the middle of the
distribution of political preferences. In modern politics, we use the terms
liberal and conservative; the electorate on the far right are very conservative,
and on the far left very liberal. In the middle is the majority of voters, and
we sometimes call them centrist. Winning an election usually comes down to
which candidate can “win the middle”; that is, sway median voters toward a
particular side.
Let’s revisit one other option for the hot dog game: polarization. What if,
instead of gravitating toward the middle, the stands went the opposite way:
one at the zero-mile marker and one at the 1-mile marker, like Fig. 10.5.
If the game setup holds and people go to the stand closest to them, the
result would be the same: each stand splits the market and profits equally…
on Day 1. On Day 2, Jenny (Stand 2) figures out she can capture much more
of the market if she moves just inside Stand 1, to the 0.01-mile marker. Over
time, the game iterates to the middle the same way it did before.
[Fig. 10.5 Hot dog stands on a beach, polarized: Stand 1 at the 0-mile marker, Stand 2 at the 1-mile marker; each captures half the market]

[box]

Our Modern Times: A New Middle?

Political science pundits have argued that the most recent US elections have
been unusual, inconsistent with the median-voter model, because they were
more about personalities than political positions. Charisma became more
important to voters, to be sure, but there’s nothing about the median-voter
model that says the middle has to be the political middle. A so-called
populist candidate, “a person of the people,” quite literally appeals to everyone.
This person is likely to capture the party and the median voter to secure a win.

When does this approach—the appeal to the median voter—not work? Quite
simply: if you don’t need them. If voter turnout is low in a two-party system,
the winning party only needs to get more voters than the other. Put another
way: the median-voter model assumes there are more voters toward the center
of the political spectrum. If those with median political preferences don’t vote,
they are removed from the game, and they don’t really matter to the candidate.
A polarized candidate could win if her far-right/far-left voters are the only ones
who vote.

[end box]
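The turnout point in the box is easy to check with a toy election. The bell-shaped electorate, the candidate positions, and the turnout rates below are all made-up numbers of our own, chosen only to make the effect visible:

```python
# Toy median-voter election: voters sit on a left-right line from 0 to 1.
# Each voter who turns out votes for the candidate nearest their position.
import random

random.seed(7)
# A centrist-heavy electorate: most voters cluster near the middle.
voters = [min(max(random.gauss(0.5, 0.2), 0.0), 1.0) for _ in range(10_000)]

def tally(candidates, turnout):
    """Count votes for each candidate among voters who turn out."""
    votes = {c: 0 for c in candidates}
    for v in voters:
        if turnout(v):
            votes[min(candidates, key=lambda c: abs(c - v))] += 1
    return votes

centrist, polarized = 0.5, 0.9

# Everyone votes: the centrist candidate wins comfortably.
full = tally([centrist, polarized], turnout=lambda v: True)

# The far-right base is energized (90% turnout) while everyone else
# mostly stays home (10%): now the polarized candidate comes out ahead.
def energized(v):
    return random.random() < (0.9 if v > 0.7 else 0.1)

low = tally([centrist, polarized], turnout=energized)

print(full[centrist] > full[polarized])  # True
print(low[polarized] > low[centrist])    # True
```

With everyone voting, the middle decides the race; drain the middle’s turnout and the very same electorate elects the polarized candidate.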

Third-Party Candidates and Parliamentary Governments
In another example of “be careful what you ask for, you might get it,” many
frustrated voters in the US blame the mediocre outcome of elections on the
lack of choice in a two-party system. In that vein, we frequently hear calls for
a third party: specifically, an independent third party that has about the same
size and volume of appeal as the other two.
Let’s go back to the beach and see how that might turn out. We’ll use the
same setup (and no regulations about locations), but now there’s a third hot
dog stand, shown in Fig. 10.6.
Start with each stand spaced equally—1/6 mile, 1/2 mile, and 5/6 mile
points—so each stand gets 1/3 of the market.

[Fig. 10.6 Hot dog stands × 3: three stands spaced along the beach, with market boundaries at the 1/3- and 2/3-mile markers; each stand captures 1/3 of the market]



[Fig. 10.7 No equilibrium to be found… with three stands, someone always has a profitable move]

When they move around, trying to get additional customers, the two
stands on the ends will move toward the center, getting about half of the
market. This lasts for exactly one day, after which the stand in the middle
moves to one end or the other and re-captures some of the market. Then,
stands 1 and 3 follow. Stand 2 returns to the middle, and the process repeats
without end. Check out Fig. 10.7 for the dynamics.
For students of game theory, the game shown in Fig. 10.7 has no equi-
librium. The stands never stop moving around. In a purely game-theoretic
setting, we would say this version of the model is unstable. In a mathematical,
classroom game, there is no solution.
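You can watch the instability directly by extending the best-response idea to three stands. The location grid, the turn order, and the beachgoer sampling below are our own simulation choices: each night one stand jumps to wherever it would capture the most beach, and the positions never settle down.

```python
# Best-response dynamics with three stands: no resting point exists.
GRID = [i / 100 for i in range(101)]     # candidate stand locations
GOERS = [i / 1000 for i in range(1001)]  # sampled beachgoer positions

def share(me, others):
    """Fraction of sampled beachgoers strictly nearest to `me`."""
    nearest = sum(1 for g in GOERS
                  if abs(g - me) < min(abs(g - o) for o in others))
    return nearest / len(GOERS)

stands = [1 / 6, 1 / 2, 5 / 6]
history = []
for day in range(30):
    mover = day % 3  # the stands take turns moving overnight
    others = [s for i, s in enumerate(stands) if i != mover]
    stands[mover] = max(GRID, key=lambda pos: share(pos, others))
    history.append(tuple(round(s, 2) for s in stands))

# The configuration keeps changing instead of settling on one layout:
print(len(set(history[-10:])) > 1)  # True -> still cycling after 30 days
```

Whoever is sandwiched in the middle always profits by jumping to an outside flank, so the line-up keeps reshuffling forever; there is no arrangement where all three stands stay put.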
How does this translate to political parties? Imagine the spectrum above
isn’t a pleasant stretch of sand, but political leanings (left–right, liberal-
conservative). The winner in a three-party race never has to appeal to
anything at all. They would randomly move around between sets of polit-
ical preferences; each party would switch platforms on political issues from
election to election. Why wouldn’t they adhere to a set of principles? Their
goal is to be elected. In any system where elections happen more frequently
than other outcomes (i.e., elections happen every 4 years, but economic
policies take six or more years to effect change), you won’t see the preferences of
society represented well at all. You might see voter preferences represented in
each election, but candidates will shift based on their whims. In a three-party
(or more) system, this flip-flopping is amplified.
Generally speaking, the same results occur when the leader of a country
is chosen from the party that wins a majority in the representative body:
a parliamentary system. If there are two parties, the government in charge—
the majority party—usually has time to fix things and solve problems because
they have appealed to about half of the population. If there are many, simi-
larly sized parties (some countries have ten or more such groups), the group
in charge must form a coalition to gain any power and enact any change.

This is the situation facing many representative parliaments that stipulate the
parliament has to represent every individual faction of society, usually by
religion or caste. True, there’s more accurate representation of the populace,
but there are too many voices in the room for consensus building and policymaking.
From a game theory perspective only, this could hinder societal progress.
Aha, we invoked time. Can that help find a solution?
Well, no. Playing the game repeatedly to infinity is assumed in elections,
since no society anticipates or plans for its demise.
Here’s an interesting thought… We showed in the hot dog game that a
three-party system iterates too much for voters. It’s unstable because candi-
date and party leanings change so much. We can’t say how much these things
change. If the shifts cause wide swings in each election, each year, or multiple
times a year, this could destabilize the whole of government. If shifts in prefer-
ences happen over decades or more—if the iteration is much slower compared
to human life—the effect would not be as destabilizing. Indeed, they could
simply represent fluctuations in society. Believe it or not, in the US, there was
a time when the Republican Party stood for environmental preservation and
the Democratic Party wanted to uphold segregation. Times change.
[box]

Boards of Directors and Stewardship


Collective-action games and information problems collide on some occasions,
putting decision makers in a difficult spot. Consider the predicament that the
Board of Directors of an organization faces.

Boards are representatives of shareholders, and they exist to act in their interests.
In the case of a corporation, they represent the owners—the stockholders. In
not-for-profit organizations, they represent all future beneficiaries of the orga-
nization’s mission. In either case, Boards must navigate the short-run problems
and decisions as stewards of the company, maximizing the chance that the orga-
nization is passed on to future generations. A Board is playing a long-term,
repeated game into infinity. They are elected or appointed officials charged
with making decisions on behalf of followers.

All of this becomes tricky when you consider a Board’s primary source of
information about the organization is its current leaders and employees. Those
players are critical to the organization in the short run, but they have a clear
short-run bias when it comes to information: employees want to keep their
jobs. They aren’t likely to reveal bad information that the Board might need
for maintaining the long-term health of the organization, especially if that
information threatens their job.

Human nature is embedded in complex sequential games, often with information
asymmetry. Boards need to act on reliable information, which the employees
have but probably don’t want to reveal. Boards have to rely on screens, signals,
and numbers to inform each decision. If that all sounds hard to figure out,
welcome to the world of the Board of Directors.

[end box]

Whose Team Are You On?


Collective-action problems occur when a player is a member of more than
one group at any one time. At work, in your family, and in general society,
your decision is combined with the decisions of others to determine the
outcome for everyone.
Recall in Chapter 3 we discussed what makes people tick and we argued
that the payoff matrix is best specified in jollies. Team leaders often spend as
much time (or more!) learning, mitigating, or changing how to measure their
team’s jollies as they spend developing a “winning strategy.”
… and this assumes the leader knows all of the teams that players are
playing for.
When talking about a team—any team—it helps to draw on the tenets
of collective-action games. Here’s an intriguing expression: “whose team are
you on?” Managers call “team meetings,” and they’ll say things like “we’re all
playing on the same team” or “we need to row in the same direction.” All of
these cliches are the subject of memes and the invocations of a coach or team
leader when some players don’t seem to be “giving it their all.”
Now, think of all of the things that could cause a manager to mis-specify
the payoff matrix of each individual player: they don’t have faith in the
strategy, but you think they do; they don’t have faith in their teammates,
but you think they like everybody; they don’t have a team mentality, but you
put them in charge of sub-teams; they prefer to work alone, but you keep
assigning them group work; they stopped drinking caffeine and are really
ornery, but you schedule them for 8:30 meetings; they don’t see a path for
forward advancement, but you think they’re in it for the long-haul. “Get-
ting people on board” and other ways of inculcating a team mentality is part
of leadership, but success depends on understanding each player’s jollies. As
a leader, you might be working out ways to gather the team together not
knowing one key member is trying to become a free agent.
A person at work is usually on a team there and a different team at home—
their family. A person at work is also, always, a member of their own team—a
free agent. Most people have some degree of personal glorification in their

payoff matrix. “Being a member of the winning team” is, hopefully, a major
motivator, but knowing you are an important member of that team win or
lose can be a stronger one.
In professional sports, a player’s ability to change teams—to sell their
services to the highest bidder through free-agency—is often viewed as a
dollars-and-cents thing (players being motivated by money). Many free agents
will tell you it is usually about control. The desire to change teams might
come from motivators like respect, independence, mobility, or a degree of
personal glory. The decision may result in more money, but it is rooted in an
individual’s ability to choose for themselves.
In the workplace, free-agent movers might be looking for an organization
that better understands the many teams the player is on
(cough*remote work*cough), rather than just higher pay. An
organization’s responsibility is to consider the skills it needs to grow, the ability of
each player, and the player’s payoff matrix. In almost all cases where a team
member can choose whether to play for you or someone else, success depends
on the ability to understand all of the teams that the player is on.

[box]
Team Conflict
When a person’s job regularly requires her to risk her life, she often finds
herself having to make decisions that conflict with the other teams she is on.
For the US general public, the national divorce rate remains at about 50%.
The rate for First Responders (police, fire, fire-medic) is closer to 75%
(firstrespondersinitiative.org). As much as we wish otherwise, a person who
is “dedicated to family” is going to find it difficult to risk her life every day in
a risky profession.

As a team leader, you don’t have to be part of every other team that your
players are part of to understand that you need to include motivations that
exist in your players’ payoff matrix. A team leader might be single and have
no children while the players are all married with kids; that works fine as long
as the complicated choices and dynamics that affect players with families are
things the leader can understand. Collective-action games often overlap and
even collide. This is when we have to remember that all groups are composed
of people, and human constraints usually win over other motivators in the long
run.

[end box]

Conclusion
Collective-action games are different, to be sure. The individual, in many
cases, finds out that his choices and decisions do not change the outcome.
Sometimes that means you can “hide among the crowd,” be one member of
the mob, the herd, or the throng that’s acting inappropriately, figuring you
won’t get caught. Other times, it means you can feel powerless: your vote
alone doesn’t count or won’t change the outcome, so why vote?
To mitigate self-interested outcomes, we’ve created societal norms, rules,
and laws to dissuade the individual from many of these self-interested beliefs.
Rules and laws were created when people saw the advantage of inventing
governments to take the place of individual controls. Standards—generally
agreed upon—moved societies away from a more lawless, random existence.
In the modern world, we aggregate individuals who are themselves too small
to have any power into a social superstructure that we imbue with power.
Regardless of how it operates—and regardless of its political philosophy—
a government is an invention of people created to solve problems for people,
and many of those problems come from game-theoretic situations. Government
isn’t a thing unto itself. It is the people who work for the government who
create its structure, and those people have human, often self-interested,
approaches to problem-solving. They are also just people: people
who have gas, a hangnail, or their sock sliding down in their shoe. As much
as we wish the employees of the government were perfect or superhuman or
completely benevolent, they are just people after all.

Questions

10.1. There’s a lot of gamesmanship in taxation. At any given time, there are
things that the government wants to do or to buy, hopefully because
those expenditures make society better off. Governments must pay for
things and that usually means we all have to pay taxes. As it turns
out, we don’t all have to pay taxes; the government can and usually
does impose a different tax burden on different citizens for a variety of
reasons.
Suppose the citizens of a city elect their representatives democratically
and the city is comprised of lower-income people and upper-income
people (two groups with no one in between). Representatives
always try to lower or eliminate taxes for the people who elected them.
What would go wrong if the population was comprised of 80% lower
income and 20% upper-income voters? What would happen if the city

was 80% upper income and 20% lower income? Given these results,
should anyone vote for a candidate who promises to only tax the “other
person?”
10.2. Party politics are policies that one party supports specifically to annoy
(or hurt) the other party (as opposed to simply having different
approaches to better society). Assuming we all want a better society,
should we vote for candidates who support party policies?
10.3. Voting is one of those things that we take for granted in the US. We
don’t always think hard about what we’re voting for as much as who.
In current media, personalities rule.
The US also has a bicameral legislature. Fifty states get two
senators each, plus a number of House representatives based on state
population. The House of Representatives hovers around 435
members; the Senate has a simple 100. One part of the
legislature is rooted in equity between states; the other is rooted in
equity within each state. Laws need to be passed by both bodies of
Congress. So: who should you care about more when you vote: your
house representative or your senator? Why?
10.4. Consider the Hot Dog Stand on the Beach game as applied to elections
and suppose people are distributed along the beach (left, center, and
right). If there are candidates from the two major parties (let’s call them
Left and Right) who, during the national campaign, have gravitated to
the political middle and a third-party candidate enters the race, who
would win if the candidate was left leaning? If the candidate was right
leaning? Given these results, which national party would support which
third-party candidate?
10.5. Mob mentality is often a justification for bad collective actions, like
rioting. It is difficult to be a voice of reason if your voice can’t be heard.
We’ve talked about carefully defining the term irrational. Focusing on
looters: are they actually acting irrationally?
11
Granny’s Gold

Granny was really smart. She, along with all the creators of parables, songs,
and sayings, gave us a lot of good advice.
In these pages, we tried to formalize and contextualize this timeless wisdom
and illustrate just how much game theory there is in the world around
us. Game theory is a process, a way of looking at situations—arguably,
most decision-making situations that you find yourself in—and systemati-
cally figuring out the structure of the interaction. What do you know about
the other decision-makers? What makes them tick? Do you know the possible
outcomes of the game? Are they best represented by something tangible like
money or something intangible like jollies? What are the rules? Who chooses
what and when? Are there ways to manipulate the game and its outcomes?
Can we use commitments, promises, or threats?
We also looked at the ways human nature influences how we play games.
How do you play differently when you’re tired, hangry, or in a hurry? How
do these very human feelings lead us to inferior outcomes?
Game theory is just a way to formalize what Granny was trying to teach
us all those years ago.
Once we open this door, we find many more game-theoretic sayings and
lessons in the world around. In this chapter, we’ll visit just a few more.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7_11

The Good Parent Game


Taking care of your offspring is part of human nature. In younger years, you
get jollies from knowing your child’s needs are satisfied: they are fed, clothed,
clean, and happy. When they grow and get ready to go to college, do you act
differently? Is your threshold for maximum jollies the same?
Your kid, once a bundle of non-talkative joy, will soon need to know
how to do laundry, cook, clean a bathroom, and manage a mini-household.
Suddenly, the jollies of a parent grow from making sure they’re fed and warm
to knowing your child is ready to take on the world on their own.
“It’s not going to clean itself,” you might have said, hoping the idea of
self-care would magically germinate in your offspring. Inside the kid’s mind
(but probably not out of their mouth) the response is, “so, you do it!”
You want your kid to miss you when they go away to college. You want
them to experience some amount of homesickness; you want them to miss
your home cooking. You also don’t want your kid to be helpless. A very big
part of going away to college is learning how to take care of yourself, and it’s
your responsibility to make sure they can.
Is this actually a game-theoretic situation? As the expression goes, the proof
of the pudding is in the tasting. Can we get better results, understand the
problem and the choices better if we approach it like a game? At heart, The
Good Parent Game is just the Nancy Reagan game—with a twist.
As is often the case in this book, we’ve made some outcomes outlandish
to make a point. In thinking about this game-theoretically, what good could
possibly come from not preparing your kid to take care of his needs once he
goes off to school? Well, nothing—except that the parent’s short-run jollies
from taking care of the kid will last in perpetuity.
When we follow the decision tree in Fig. 11.1, we see a simplified
argument for introducing responsibilities at some point in childhood. Game
theory helps us illustrate the choices we, as parents, make every day.

Turn Signals
One of the best examples of a game theory signal is an actual signal: a car’s
turn signal. On a good day, the device lets you know what the car ahead of
you is about to do so that you can plan accordingly. That’s the whole idea.
Think about how frustrated (and angry!) you get when someone leaves
their turn signal on for an inordinate amount of time. You’re driving down a
city street or zooming down the freeway and the person in front of you has

[Fig. 11.1 The good parent game: today, You choose between “do everything” and “give kid more chores.” Doing everything leaves the future Kid either staying helpless (drops out of school) or learning on the spot (extremely homesick); giving more chores leaves the Kid learning even more (established & happy) or staying capable (a little homesick)]

had their left turn signal on for ten miles. You’re probably showing your
irritation by saying inappropriate things about the other driver’s obliviousness.
What’s the real problem? The person’s signal isn’t telling you anything! A turn
signal signals action, but since you have no idea of their intentions (when and
where they’re going to turn), it’s not helpful information at all.
Do you believe the signal and slow down? Or, do you avoid passing? Do
you ignore it, like we said before, and call it a diluted signal? If you have to
guess what a signal means, it’s unreliable.

Secrets and Lies


It’s a fine line, as they say, between keeping a secret and telling a lie. Yet, being
revealed as a liar and not being able to keep a secret both get you tagged as
unreliable. How can that be? Secrets and lies are both ways to hold on to
relevant, accurate information.
Like so many things that we’ve discussed, you learn the subtleties of secrets
and lies through experience, but you can see the difference between the two
using game theory. Consider a sequential game in which you are planning

someone’s birthday celebration. You buy the presents and invite people to the
party today, and you don’t tell the birthday girl. You keep it a secret. You are
actively trying to give her more jollies (through surprise) on her special day.
You often have to lie; “no, I didn’t get you anything special and we aren’t
having a surprise party,” because the game is sequential. And your jollies in
the future are connected to the jollies that you’ve been able to create for the
birthday girl!
Similarly, you write a recommendation letter for a colleague who is
“secretly” looking for a new job. You keep the secret (let’s assume you do)
because….? Again, there’s a sequential game here in which you want to be
thought of as both a reliable friend and trusted colleague. You might need to
ask for a letter yourself someday in the future, and you can now ask that
friend because they know they can trust you. Secrets are just another form
of asymmetric information in sequential games that you don’t reveal because
revealing the information would lead to a bad outcome for you in the future.
Lies are also forms of asymmetric information. They can change outcomes
in both sequential and simultaneous games. The innocuous “white lie” and
the acceptable bluff are intended to change the other players’ choices in a
way that might even make the other player better off in the long run. Telling
a small child a white lie about where grandma had gone (forever) might be
better for him than telling him the truth, simply because he’s too young to
comprehend death. Bluffing is lying to get something for yourself—a raise
at work, the pot in a poker game—but both parties know that’s part of the
game and it doesn’t always work.
Less innocuous lies are far more problematic. They can change the outcome
of games, often in your favor and at the expense of others—but over time,
they tag you as untruthful and unreliable. In repeated games, lying will likely
make you worse off in the long run. Aesop knew the difference and tried to
communicate it in the fable “The Boy Who Cried Wolf.” What a child is
being taught in this (slightly horrifying) story is that if you outright lie for no good
reason whatsoever, you might get eaten by a wolf.
When the child who hears this story grows up and says to mommy, “what’s
the difference between the little boy who cried wolf and you when you lied
to me about Santa Claus,” what will you tell them? You probably won’t use
these words, but what you’ll tell them is something like “the situation, the
game, is different.”
The lightbulb moment referred to in the introduction is upon us: game
theory is everywhere, even without the specific numbers and formal aspects of
a mathematical game. If this book has worked, you may be running through
a game theory analysis for simple things like the question above.

Threats and Secrets


In Chapter 7, we illustrated how a threat can change all future games
involving the same players. Let’s connect the dots a bit. Consider how things
change when someone threatens to reveal a secret. This is a powerful strategic
move. The person being threatened now knows two things about the other
player: they aren’t above using threats to get what they want, and they don’t
keep secrets very well.
Suppose an old college buddy is applying for a job at your company, and
you get to make the final choice. He’s got the right set of skills for the job,
but there are five other equally skilled candidates. When you interview him,
he reminisces about a fraternity party in which you used illegal drugs. He’s
reminding you that he knows something secret about you. Then he says if he
doesn’t get the job, he’ll tell the company higher-ups about this secret; if he
gets the job, he’ll never tell anyone.
What would you do?
Here’s the good news. Your old friend is proving to be a lot less of a friend
than you had thought. Your Granny might have referred to someone as a
“snake in the grass.” This expression generally refers to someone who proves
disloyal behind your back. In some ways, that’s harder for you to deal with;
believing someone is a friend when they are not will almost always lead you
to a bad outcome.
In the case of the threat to reveal a secret, at least you know what a person is
saying they will do and, more or less, you know they are not your friend. And
here’s why that’s important in real-world game theory. The threat itself isn’t
particularly credible; after all, how would hurting your reputation at work
be good for your friend? Suppose you didn’t hire him, and he told your boss
about your indiscretion and the worst happened, you got fired. Is your friend
legitimately better off because of that? He still doesn’t have a job!
The threat revealed additional information for you, though. Now, you
know that a secret isn’t safe with this person. From now until forever, you
won’t play any games with this person the same way.

One Man’s Secret Is Another Man’s Lie


The comparatively easy case of Santa Claus vs. “crying wolf” belies a much
harder reality about the problems surrounding information and games. Let’s
go back to the colleague who asks you to write a secret job recommendation.
Just for fun, let’s add something to the structure.

Your colleague asked your mutual boss for a reasonable raise and, as part
of the negotiation, said she would have to start looking for another job if the
raise was turned down. Thinking this was a bluff, your mutual boss turned
her down, she did start looking, and found this potential new job. That’s why
she’s asking you for the secret recommendation letter.
Losing your colleague will be bad for the company and maybe even you,
from a “team” perspective. If we look back on the game, we can see everyone
had an opportunity to respond to accurate, full information—they just made
bad choices. What do you do? If you tell your boss it wasn’t a bluff and
he’s about to lose this important person, you might make a difference. Then
again, your colleague will know you can’t keep a secret…
Now, before you throw up your hands and say this is a no-win situation,
think about all of the choices you could have made. You could have tried
to tell your boss you didn't think the colleague was bluffing before you knew
she found another job. You could have told your colleague that you don’t
want to write the recommendation letter because it puts you in a difficult
position. It’s true, you’re caught in the middle, but that also means you have
more choices in front of you. Classroom game theory doesn’t give you these
chances to “make something right,” but the real world does.

An Option Is Only Worth More Alive Than Dead, in Finance Books!


This isn’t a granny expression; it’s a finance classroom expression. It means
that an option, like an option to buy a company’s stock, is intrinsically valu-
able. A stock option (strictly speaking, a call option) lets you pay a small
fee, known as the premium, in exchange for the right to buy a stock at a fixed
price for a fixed amount of time. If you have a hunch that a stock price might
rise soon, you can bet on that hunch by buying the option. The advantage of
the option is if your hunch turns out to be wrong, the most you can lose is
the premium. If you buy the stock based on your hunch and you’re wrong,
the price of the stock might fall, and you could lose considerably more.
If your hunch is correct and the price of the stock rises, the premium on
its options also rises and you can actually make a profit on your hunch by
simply selling your option to another investor. The “more alive than dead”
expression means you can usually sell your option and make more money
than you would if you used the option to buy the stock right away (there are
details that we’re omitting here for the sake of brevity.)
11 Granny’s Gold 205

That’s all interesting stuff if we’re talking about investments but what does
it have to do with game theory? Let’s relate this expression to mixed signals.
Suppose the stock you could make money on produces something you don’t
approve of (cigarettes or land mines or something like that.) If your hunch is
correct, you’ll make money on the option but your jollies might not rise; you
feel bad getting rich in certain ways. In that case, hunch or not, you would
be better off without the option.
An option is just a mixed signal. Recall the student that gets accepted by
Hoity-toity University. He doesn't have to go there; he simply has the option
to do so. If he’s good enough to get accepted at Hoity-toity, he probably
also got accepted to many other excellent schools. Surely, just the option to
attend Hoity-toity can’t hurt him, right? Put another way: is having too many
choices a problem?
In game theory, the answer is yes. As we saw in some of the more
complex games with more than one choice, there’s a better chance of making
the right choice (in both the long- and short-run) when we can at least
remove some choices today. In a sequential game where so much is unknown
about a person’s future after college, having the option to attend Hoity-toity
University may not really make the student’s life easier.
The Granny expression that fits here is “be careful what you ask for, you
might get it.” Arguably, the student’s problems—in the form of difficult deci-
sions—started when he applied to Hoity-toity. Asking for that option puts
the student in the difficult position of having to make a choice (to attend or
not to attend) in a game whose results won't be obvious for many years.
Note: the current state of higher education in the US necessitates that
students apply to any and all programs they remotely qualify for. This is
a costly process, not only in time and grooming, but financially. In fact,
college admissions advisors have created a lucrative (read: exploitative) cottage
industry. We are not saying you are better off applying to only one or two colleges,
especially in this competitive landscape. We just suggest that if long-term
success in some unforeseen future is dependent on a decision today, it helps
to make that decision as easy as possible.

Laws, Rules, and Regulations


This book showed how we, as a society, have built structure in the form of laws,
rules, and regulations. These limitations are often little more than ways
of preventing people from making self-interested choices. When you see the
result of the unfettered hot dog stand on the beach game, you see that no
one is actually better off at the end of repeated games; some people are the
same (the stores) and others are worse off (the hot dog-consuming public).
Each game ends with players having exhausted their self-interested choices
and non-players, the rest of society, are worse off. So, government implements
laws, rules, and regulations to stop the game from even starting. They remove
the option to play—to alter the competitive landscape—to keep consumers
better off.
Why do we need laws to stop people from littering? Everyone knows that if
you allow people to throw their trash on the ground, your community (or city,
state, country, planet) will soon look like a trash dump. But that bad endgame
is the result of everyone littering; not just one person. Not just little ol’ me.
If one person throws one candy wrapper out of the window of their car as
they drive down the highway, likely no one will ever see it. Ten people do
that, then a hundred, then it's bags from the burger joint, then…who knows.
So, you pass a law to stop the first candy wrapper from being tossed out of
the window. That some of the laws are disproportionate (e.g., a $5,000 fine for a
single candy wrapper) is relevant too. As a deterrent, a city is in effect saying
“don’t start,” to avoid the slippery slope mass-littering would cause.
This also takes us back to the difference between rules and norms. A well-
accepted societal choice is often called a social norm. Norms are informal, often
learned behaviors taught to you by others. A rule or law, by contrast, is a
formalized, codified norm. It’s a norm that society finds so important, we
write it down and enforce it.
How do people in different cultures (or countries) greet each other? In
some places, you shake hands; in others, you kiss one cheek; still others, you
kiss both cheeks; and sometimes you bow. When you visit a new country
for the first time, you are often surprised by these differences, perhaps even
11 Granny’s Gold 207

embarrassed when you kiss a handshaker or bow to a kisser. These social
mistakes don’t end with anyone going to jail or paying a fine; sometimes the
mere embarrassment at “violating” a social norm is enough to “correct” your
behavior.
If you hurt someone by driving a car incorrectly, things could end up very
differently. There are laws that govern driving in every country. They are not
all the same and they are all codified and enforceable. So, when someone
from the US goes to Great Britain, they have to drive on the other side
of the street (drivers are on the right side in the US and on the left side in
Great Britain); if they get it wrong and cause an accident, they get a ticket.
Incidentally, calling it the “wrong side of the street” is funny when you realize
that right and wrong are just formalized social norms.
Rather famously, some countries have strictly enforced laws about things
that other countries don’t even have strict norms for. Consider Singapore’s law
against spitting in public, which is (as of this writing) enforced with a $1000
fine. There are other countries that have laws against adultery, punishable by
prison time or worse. Noncooperative choices, what we sometimes call
anti-social behavior, are governed around the world by a variety of norms, rules, and
laws. The notion that some of these seem to us like “driving on the wrong side
of the street” means that we specify jollies in different ways. Different cultures
have different ways to manage behavior that a society finds unacceptable.
There is a clear delineation between how different societies manage
noncooperative behavior, in the examples above, and repressive control mech-
anisms. Punishments for being a woman or non-binary or of a different sexual
preference are not meant to control noncooperative, anti-social behavior. On
the contrary, they are meant to remove entire groups of players to increase
control of dominant groups.

Last Clear Chance


There’s a famous, somewhat problematic cliché: “rules are meant to be
broken.” That could be the topic of an entirely different book. Let’s consider
the related possibility that rules or laws might be abused by a player.
Suppose a driver makes a right turn on red while another car is
approaching a few hundred feet away and the approaching car actually speeds
up to hit the car that turned. Why? Because the driver of the second car
saw the possibility of a lucrative lawsuit. The turning car is legally at fault;
the right-turn-on-red law says that the turning car must yield to oncoming
traffic. The approaching driver could have prevented the accident by slowing
down, and they didn’t. It doesn’t matter. The approaching car was trying to
take advantage of the law in order to sue the other driver, and the law seems
to be on their side.
The legal justice system has had to deal with this possibility so many times
that a doctrine emerged called the "Last Clear Chance" rule. Roughly speaking
(and this is a legal gray area), the ability to win such a lawsuit requires that
the approaching driver show they couldn’t have reasonably prevented the
collision.
“Gaming the system” is what we would call this, and it’s a risk anytime
a strategic move is codified into a rule or law. Laws run the risk of being
interpreted too literally. The driver who could have prevented the accident
is applying an erroneous version of “kill or be killed.” The courts can and
have applied their own doctrine to avoid abusing the letter of the law: “an
exception to the rule.” One driver made a mistake, the other driver caused an
accident on purpose. Two wrongs don’t make a right.

Ignorance Is Bliss
There are an amazing number of expressions that talk (or warn!) about infor-
mation. The world has certainly morphed into a better-informed place, with
technological innovations related to AI and the world wide web.
In a lot of ways, opportunities and information conflict with important
game-theoretic concepts like commitments. Suppose you are a mid-level employee
with a few years of experience. Your company is going through a "rough
patch" and needs "all hands on deck." In past decades, you would have
ramped up your performance out of a sense of loyalty to the organization,
and to avoid the "pain" of a new job search. Today, on the other
hand, a list of available jobs in your industry, specific to your experience, is
right there on your phone. Now, if you get wind that your company is down-
sizing, you can weather the storm and score a new job in your pajamas over
a holiday weekend.
Concerns about disturbing the established social order with easier access
to information go back centuries. The expression "how are you going to keep
them down on the farm" was a World War I song title that worried about
sending farmhands overseas. (Even then, the concern was winning the war!)
Movies, radio, and television all changed the availability of information and
there were concerns in each case about how the new information that they
provided would change people’s choices. Repressive mechanisms often rely
on restricting information—such as access to the unregulated internet—to
11 Granny’s Gold 209

control oppressed populations. Information and change go hand in hand,
whether by equipping players to make better choices or adding new players
to the game.

The Curious Case of Mixed Strategies


We’ve actively avoided decision-making based on probabilities throughout
this book. That approach is the stuff of statistics. There is an area of game
theory that actively combines strategic decision-making and statistics
and, complicated as that sounds, you probably already do it every day. Rock,
paper, scissors; odd or even; pick a hand: you play these games with your hands
and fingers, and they are both random and game-theoretic.
Let’s try the odd or even game. Two people will make a fist, count one, two,
three (usually with some gesticulations), then put out either one finger or two
(could also be no fingers, but let’s limit it here to two choices.) Simultane-
ously, one of the players will say odd or even. You then add up the fingers. If
you both choose the same thing, it’ll add up to an even number. If you chose
different things, it’ll be odd.
Now comes the strategy…sort of. If you always say odd, and you always
put up one finger, you will (eventually) always lose. The other player will
figure out your consistent strategy and also always put up one finger
and, as we all know, one plus one is two—even!
So, you have to mix up your call (odd or even) and your finger choices. If
you randomize your choices, statistics says you should win half of the time.
Consider how hard that is! You need to be truly random at your choices.
Random means sometimes you put up one finger four times in a row.
Sometimes you alternate. Similarly, sometimes you say odd a couple of times
in a row then alternate. Any pattern that you fall into could be figured out
and used by the other player to win more often. Being random isn’t actually
a human strong suit.
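A quick simulation makes the point. In this sketch (the opponent's always-one-finger habit is a deliberately extreme, made-up strategy), the caller randomizes both the call and the fingers shown; because the call is independent of everything else, the caller wins about half the rounds no matter what pattern the opponent uses:

```python
import random

def caller_wins(call: str, my_fingers: int, their_fingers: int) -> bool:
    """True if the player who called 'odd' or 'even' wins the round."""
    parity = "even" if (my_fingers + their_fingers) % 2 == 0 else "odd"
    return call == parity

random.seed(7)  # make the illustration reproducible
rounds, wins = 100_000, 0
for _ in range(rounds):
    call = random.choice(["odd", "even"])   # randomize the call...
    fingers = random.choice([1, 2])         # ...and the fingers shown
    if caller_wins(call, fingers, their_fingers=1):
        wins += 1

print(wins / rounds)  # close to 0.5
```

True randomness is the guarantee here: any pattern, however subtle, is something the other player can learn and exploit.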
When someone says, “you’re becoming too predictable” and they aren’t
trying to be kind, they mean your pattern has become easy to figure out. Like
all of our other expressions, we need to find a context where unpredictability
in a game is useful.
Let’s go back to sports. In baseball, the hitter has an advantage if they
know what type of pitch is coming. If you have a good fast pitch, a slow
pitch (called a change-up), and a curvy pitch, you’ll have to learn how to
“mix them up” in a pattern that’s not just fast, then slow, then curvy and
repeat. Being unpredictable is a competitive advantage. In tennis, you can’t
always return the ball to the right, then to the left, and repeat (or deep, then
short, etc.). In basketball, you can’t always dribble to the right one time down
the court then to the left the next time.
In these examples, a successful strategy is a set of probabilities that lead
to scoring the most points or giving up the fewest hits. Optimizing a set of
probabilities—creating order in your randomness—is part of what’s called a
mixed strategy. In game theory class, we often create games in which we can
calculate the probabilities precisely to show how these games work. In the real
world, you are almost always determining the probabilities of choices by trial
and error.
The correct probabilities would reflect differences in, say, how each of your
choices fares against the other person's. A pitcher's best pitch might be his
fastball. If he throws it every time, the hitters will eventually hit it. So, he
mixes in the occasional slow or curveball. The optimal probabilities might be
80% fastballs, 10% slowballs, and 10% curveballs. The key to these mixed
strategy situations is not having a pattern even as you use your probabilities.
The pitcher might throw two slowballs in a row every once in a while just to,
as they say in baseball, keep the batter guessing.
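A tiny model makes the pitcher's logic concrete. The hit probabilities below are invented for illustration; the "smart hitter" always sits on one pitch type, and we compare how well that works against a fully predictable pitcher versus one mixing 80/10/10:

```python
# Hypothetical numbers: the hitter does better when he guesses the pitch.
HIT_PROB = {  # keyed by (pitch type, did the hitter guess it?)
    ("fastball", True): 0.35, ("fastball", False): 0.15,
    ("slowball", True): 0.30, ("slowball", False): 0.10,
    ("curveball", True): 0.30, ("curveball", False): 0.10,
}

def hitter_value(mix: dict, guess: str) -> float:
    """Expected hit probability if the hitter always guesses one pitch."""
    return sum(p * HIT_PROB[(pitch, pitch == guess)] for pitch, p in mix.items())

def best_response(mix: dict) -> float:
    """A smart hitter sits on whichever guess works best against the mix."""
    return max(hitter_value(mix, guess) for guess in mix)

predictable = {"fastball": 1.0, "slowball": 0.0, "curveball": 0.0}
mixed = {"fastball": 0.8, "slowball": 0.1, "curveball": 0.1}

print(round(best_response(predictable), 2))  # 0.35: the hitter sits on the fastball
print(round(best_response(mixed), 2))        # 0.3: mixing keeps the hitter guessing
```

Even with his best pitch thrown 80% of the time, the occasional slowball and curveball lower the hitter's best possible success rate, which is exactly why unpredictability is a competitive advantage.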
Lest you think that mixed strategies are only for sports and simple games,
there are life circumstances that allow you to use these skills. Spontaneity in
relationships is occasionally good. So is getting out of your comfort zone
or being the designated driver. As long as it doesn’t sacrifice reliability or
predictable safety, “changing it up” is a tried and true strategy in many human
relationships.

Holding Grudges
It makes sense that you are more likely to make a cooperative choice and
expect the cooperative choice will be made in return, when the other player
is a family member, neighbor, or long-time friend. As long as the games are
Prisoner’s Dilemma style games, both of you are better off in the long run
(many repetitions) when you both make the cooperative choice. It also makes
sense that you’re much less likely to make the cooperative choice when the
other player is a total stranger and you’re only playing the game one time.
What about people that you cross paths with on occasion, off and on
during your lifetime, perhaps professional colleagues or industry peers? You
might still play the game many times, but neither of you knows how many
times or when the last play might be. In cases like this, it’s not uncommon for
one player to make the noncooperative choice, getting the biggest one-period
11 Granny’s Gold 211

payoff and saddling the other player with the lowest one-period payoff. After
all, who knows when, if at all, you’ll interact again?
What would the player who got the low return do after this happens?
That’s easy: they would hold a grudge. A grudge is a comeuppance that
you hold onto until the next time you interact with the other player. There
might never be another chance for revenge, or you might “get over it.” For
some people and some indiscretions, a good grudge can last a lifetime. The
cautionary tale of game theory is this: when you think about making the self-
interested play in an intermittently played, Prisoner’s Dilemma style game,
remember—your punishment might come many years later.
[box]

Player Beware: Grudges Over Time


Holding a grudge can also lead to bad future decisions. Most people have had
the experience of a game not going their way. You want a job or a promotion
(your choice was to apply for that new position), and the other players made a
different choice, and someone else got that job. Human nature makes people
feel bad about that outcome; you feel unappreciated or inadequate. Those bad
feelings don’t always go away over time. There’s a great expression about this:
the result “sticks in your craw.” (Roughly speaking, this means something gets
stuck in your throat… we have adapted it to mean the slight sticks in your
brain for a long time.)

The problem with holding onto this type of grudge is it can lead you to make
incorrect choices in future versions of the game. Let’s say life goes on after the
decision; both you and your employer have to live with the decision. Now,
suppose things don’t go well at the firm. For whatever reason, they made the
wrong choice, and you get another bite at the apple. You’re offered that job
again. You feel vindicated; your inner voice wants your outer voice to say, “I
told you so.” Your long-festering grudge tells you to take the job now that you
have a chance. Should you?

Probably not. At the very least, be aware of the reality that this is a different
version of the game with a different payoff matrix. Getting jollies from being
right—feeling vindicated as we said—comes from getting the offer. Taking the
job means you have to deal with the state of the firm now. This is not the
company you originally applied to. Your jollies going forward (based on your
decision to accept the job) will be determined by your ability to succeed in this
new environment—not in the earlier version of the game that didn’t go your
way. We’ve used this expression before, but it applies here more than ever, “be
careful what you ask for, you might get it!”
[end box]

Follow the Leader/Blind Leading the Blind


Being a leader, it has been argued, means very little more than having
followers. That simple definition is useful in differentiating between people
with titles and power and actual leaders. There are many books on leadership
and many expressions related to leadership, but they aren’t usually connected
by way of game theory. Let’s see if we can fill that void.
Most people have heard of the expression and children’s game “follow the
leader.” In that game, the leader is chosen arbitrarily, and the point is to do
things that the followers can't copy, which eliminates them.
Grown-up leadership is exactly the opposite. The leader is more like a
first mover that takes the group down a particular path. That should sound
familiar; that’s a sequential game where the leader goes first. A good leader,
therefore, needs to be well-informed and decisive. Looking at leadership as a
sequential game lets us see many things. Leaders aren’t expected to be perfect.
Some choices that were correct given available information don’t work out.
Leaders should be expected to be human. They aren’t, however, supposed to
be random. The idea of a sequential game is to look ahead and make choices
now that should lead to the best possible future. If today’s choices are guesses,
then leaders aren’t really leading.
Consider, then, another expression: “the blind leading the blind.” The
imagery here says it all. When you’re the first mover in a sequential game
that will determine the outcome for the entire group—the organization, the
company, or even the country—you really need to make informed choices.
Bad outcomes will result if you don’t, and this could compromise your role
as a leader. The expression “the blind leading the blind” isn’t one that has
long-run success for the leader associated with it.
Think of leadership as the first mover in a sequential game that involves
information. Success can be increased if a leader has more relevant skills, a
longer education (informal or formal), and better processing of information.
Advisors also heavily influence leaders and their abilities. Once more, we can
invoke the expression “you’ll be known by the company you keep.”
11 Granny’s Gold 213

Gaslighting, Brainwashing, and “Psyching Out”


Bluffing and lying are well-known, ancient ways in which information is
manipulated to change the outcome of a game. Famous stories like the Trojan
Horse and the talking Satanic snake in Genesis are oft-told tales
that remind us to be wary of information—even from seemingly trustworthy
sources.
More modern manipulations are present in the algorithms that determine
what we see, and when, in news and social media. A handful of players in
modern society have an ability to control wide swaths of information available
to us. The outcome of games with information manipulations challenges our
ability to define rationality.
In the extreme, players who have been “brainwashed” or “gaslit” have, liter-
ally, a warped sense of reality and might make choices that others would not
consider rational.
What’s good, and bad, about these situations is that they do make sense
with a jollies-based payoff matrix, in the mind of the manipulated player. Like
a person using what we call mind-altering substances (many illegal drugs for
example), they believe they are getting more jollies from their choices. We
wouldn’t call their choices irrational, just very much misinformed.

On Being Fooled
Granny loved talking about fooling and fools:

Fool me once, a curse on you; fool me twice, a curse on me.


A Fool and his money are soon parted.
Everybody plays the fool sometimes.
We won’t be fooled again.
The Greater Fool Theory.

If a game-theoretic situation doesn't turn out the way you hoped or expected, it
is sometimes referred to as being fooled. It happens on occasion. You cannot
possibly know every person and their payoffs in every situation. That fact, by
itself, does not mean you did anything wrong; you’re human.
What we should have learned from this book is being fooled on occasion
doesn’t make you a fool. Being fooled in the same situation, by the same
person, in the same way, repeatedly does. Einstein is famously credited with saying, "Doing
the same thing over and over and expecting different results is the defini-
tion of insanity.” Humbly, game theory can teach us ways to enhance our
information, learn in repeated games, and get better outcomes over time.
Being fooled might be an application of what we called asymmetric
information. The other player had information that was important to your
outcome, and they chose to withhold it or lie about it. Being fooled might
also mean you weren’t using game theory in a situation that called for it; you
believed that your choice alone determined the outcome when in fact, the
choices of other players also mattered.
You can even extend the idea of fooling players in a sequential game. If
every player makes their choice—buys something—because they were fooled,
let’s say a little, and then turned around and sold the thing in question to a
new player by fooling them a little more, the end point is usually fooling the
last buyer a lot. That’s the nature of what’s called the Greater Fool Theory.
The last person who mistakenly chooses something that's actually not the
correct choice will likely be the biggest fool in the group.
In any case, you hopefully learned from Granny, from scriptures, from
Aesop, and from Einstein the importance of thinking and acting differently
when you’re in game-theoretic situations and considering how important
accurate information is in these circumstances.

A No-Win Situation
In real-world situations, you are sometimes put in the awkward position of
being unsure about what to say; what to signal in a game-theoretic situation.
In an application of the expression “discretion is the better part of valor,” it is
sometimes better to avoid the situation rather than have to make the choice.
Consider this scenario: a colleague at your company asks you to go to
lunch and says they want to talk to you about making a professional connec-
tion for them with a close friend of yours, the CEO of a hot new tech
company. You don’t really think this colleague is a good fit (or maybe you
think even less generous things), so you don’t want to make the connection
for him. You also have to interact with him at work on occasion. What do
you do? You can lie to him or lie to your CEO friend about him or some
other uncomfortable thing. Or, you can politely turn down the lunch offer.
In life, there are games you don’t want to play. We often refer to these as
being in a “no-win” situation, but it’s only no win if you play.
11 Granny’s Gold 215

Information, Expertise, and Decision-Making: "Physician Heal Thyself"


To make choices that lead to the best outcomes in sequential games, we
need accurate, useful information. This puts decision-makers in a difficult
spot. Who or what do you believe? Who do you trust? Credibility comes in
many forms: titles, experience, educational attainment, and other factors. A
decision-maker that has become a leader should have skills and competencies
that allow them to better filter information, navigate uncertainty, and make
correct choices.
Granny would have prioritized things like common sense and intuition. As
a decision-maker, perhaps a leader, what do you do when the expert advice
that you were given is at odds with your intuition or what you consider
common sense?
Consider the expression, “if it seems too good to be true, it probably is.”
This is a cautionary phrase telling you that there aren’t many easy answers to
hard questions (or in the case of game theory, hard choices). Most of what
looks “too good” has been tried or maybe just won’t work. If you’re offered
twice the market rate to take a job, dig deeper. You might find out that the
position has been held by six people in three years, and you should run for the
hills.
A really good expert can almost always explain things in a way that aligns
with your common sense or your intuition—and the corollary is also true.
The somewhat cynical biblical proverb “physician heal thyself ” can be inter-
preted as saying “hey doctor (or expert), why should I believe you if you can’t
even cure the disease that YOU have!” Obviously that’s not always fair—there
are incurable diseases—but when heeding advice, a person who has success-
fully followed the advice that they are giving you (in similar circumstances)
could be considered a reliable, credible resource of information.

Game Theory Goes to the Doctor


Players in sequential games have an interesting temporal problem that we
often refer to as short-sightedness: a tendency to focus too much on the
outcomes from the first choice rather than the outcome of the complete
game. That might come from human nature. We see it in a lot of places.
In fact, one of the most tried and true rules in Finance says that outcomes
in the future are not worth as much as outcomes today (everything else the
same). We prefer outcomes we can see today rather than esoteric, phantom
outcomes that might happen tomorrow.
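That finance rule is usually captured by discounting: a payoff arriving in the future is divided down by an interest rate. A minimal sketch, using a hypothetical 5% annual rate:

```python
def present_value(future_amount: float, rate: float, years: int) -> float:
    """What a future payoff is worth today: PV = FV / (1 + rate)^years."""
    return future_amount / (1 + rate) ** years

# With a (made-up) 5% rate, $100 received five years from now
# is worth noticeably less than $100 in hand today.
print(round(present_value(100.0, 0.05, 5), 2))  # 78.35
```

The farther away the payoff, the smaller its present value, which is the same short-sightedness pressure players feel in sequential games.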
The medical profession has given us another way to think about this
problem. Granny was talking about a difficult decision once and you heard
her say, “that’s a hard pill to swallow” (or possibly “that’s a bitter pill”). As a
concerned grandchild, you asked if she was sick and she said “no, dear.” What
Granny meant is, there are choices in the short run that are unpleasant, but
they lead to the best possible future outcome. Making the hard choice today
goes against our nature, but it may still be the right choice. All the more
reason to learn game theory.
Except, swallowing a bitter pill is not automatically a game-theoretic situ-
ation. A game needs two players. To be a one-player sequential game, the
player has to make a current and future choice. That can be a game. Take
two players: a patient and a medical doctor. The doctor tells the patient to
take a pill every day to be healthy in the future. The patient, who isn’t sick,
thinks it’s a waste of time and money. Short-run negative jollies are clouding
much higher long-run positive jollies. We should take vitamins, but few of
us do.
Ask any toddler who’s pitching a fit about taking the bad-tasting medicine
that they need to cure their illness (or better still, getting a shot). Making
the correct first choice is difficult if that choice is unpleasant. Finding ways
to make treatments less onerous is important for drug producers, doctors,
and other healthcare providers. As part of the game of healing, they need to
be aware that the negative jollies people perceive from that “hard pill” are
inhibiting, sometimes even preventing, a desirable outcome for the patient.
Consider the soft skill known as bedside manner. Having a good bedside
manner means you’re good at communicating and connecting with your
patients. You can empathize with them. You understand their negative jollies.
From the doctor’s perspective, only being right or technically trained doesn’t
always get the patient to make the right choices. The arrogant surgeon might
tell the patient, “you have to cut to cure.” She might be right, but she’s
also making the patient think much more about the cut than the cure.
A good bedside manner helps doctors understand jollies from the patient’s
perspective, helping get the patient past their fears and on the path to cure.
There’s another wise expression: “you have to take your medicine.” It refers
to accepting punishment for a mistake. Medicine as punishment seems odd,
but in earlier times, very little attention was paid to how medicine tasted
(spoiler alert: it was bitter and gross). Getting a desired outcome in a health-
care situation could be accomplished by making the medicine seem less like
a punishment; an easier pill to swallow.
11 Granny’s Gold 217

[box]

Dieting
Most adults have had the experience of trying to lose weight in a world
where the BEST foods we enjoy have a lot of calories. Food producers, our
internal jolly generators, and game theory conspire against us during most
diets. Knowing many of us have what we call “a weakness” for salty, sweet,
fried, and crunchy foods, producers fill store shelves and café menus with them.
With these readily available sources of short-run jollies at hand, we revert to
human nature. The long-run path to health is uncertain, and the ice cream is
right there! A lot of us make a short-run decision that takes us away from our
long-run desired goals.

Being “penny-wise and pound-foolish” is an expression that sounds like its


related to this behavior. (Pound, in this case, refers to British currency). It
means you’re making decisions that save you a penny at the expense of a more
lucrative pound. Making a short-run choice to cheat on your diet may feel
good today, but the sacrifice is not losing weight (and having to stay on the
diet even longer) tomorrow.

[end box]

Fairness, Future Outcomes, and the "From Here" Problem
We observed that the rules of most games in life are invented by some of the
players somewhere along the way—and not always with fairness or equity in
mind. Human nature often leads rule-makers to bias outcomes in their favor.
When a society changes laws and rules that are biased, sexist, racist, etc., it often does so in the name of increasing fairness for repressed groups.
There’s no doubt these shifts are a marked improvement over the previous
regulation. Progress is iterative, after all. The problem is where we go “from
here.” Changing available choices to make outcomes more fair and less biased
can be shown as a sequential game with some information issues. Once a rule
is changed, however, we have two problems: how to implement the change going forward, and what to do about those currently operating under the old law.
If you own an apartment building, you aren’t allowed to choose your
tenants based on their gender, the color of their skin, and other factors that
are unfair screens. Laws prohibiting discrimination allow many more people

to get housing and secure better future outcomes. Here’s the problem: all
rules in sequential games start with the next play. They make outcomes fairer
going forward. They don't do anything about the past. In that sense, these
laws and rules start “from here.” You have to use more balanced criteria to
select future tenants, but you don’t have to evict the ones you already have.
One could argue that fixing these sorts of problems going forward is often
the only thing you can do. Even in the most progressive society, you can’t
(shouldn’t) force a landlord to evict people of one group to make room for
another. Making sure things are fair going forward might be as easy as not
allowing the use of irrelevant, biased screens. Fixing the imbalances that the
unfair rules already created is much harder. Using the structure of a sequen-
tial game, you would have to go back to the first time the unfair process
was applied, retrospectively change the rules, and figure out what outcomes
should be. That’s impossible in most situations.
The “from here” problem exists in a lot of places. There are many sequen-
tial games in which the starting point is arguably rather subjective or even
arbitrary. Consider how many college sports teams are ranked (which impacts
things like recruiting, NCAA funding, and their position in postseason tour-
naments). A group of "experts"—head coaches in the sport or writers that
cover the sport—rank teams in the preseason based on what they would say
are informed guesses at future performance. After teams start to play, rank-
ings change and become more realistic. The preseason, made-up rankings are
the “from here” starting point and will affect rankings for the entire season.
Being able to say that you beat a Top 10 team matters even though the team
was not necessarily worthy of the Top 10 based on anything real.
Another, somewhat scarier version of this problem exists in asset markets
(stock markets, housing markets, etc.). Stock prices react to news about
corporate earnings and the overall economy. Unexpected, good earnings will
make the price of the stock go up in most cases. That means the news will
make the stock price go up “from here.” What if “here” is wrong—perhaps
already too high? Finance people often talk about asset prices that have moved
too far too fast; a stock is said to be overpriced relative to its fundamentals.
Asset markets—both stock markets and housing markets—have experienced
what are called bubbles: valuations that are unsustainably high. Bubbles could
be analyzed as sequential games with a “from here” problem. Just like the
biased decision process earlier for rental housing, it’s very hard to go back
and figure out what things should be worth when an asset bubble bursts.
11 Granny’s Gold 219

The Cold Medicine Problem


Your bestie calls you, clearly suffering from a cold. She says, "I took some cold
medicine, but it didn’t work.” Having learned about sequential games from
our book (and apparently feeling a bit obnoxious) you say, “you don’t know
that.” She says, “well I know how I feel, and I don’t feel any better!” So, you
choose to bring her some soup and save your friendship.
You are right, though. Your friend doesn’t know how she might have felt if
she hadn’t taken the medicine. She could feel a lot worse. The Cold Medicine
Problem shows us it’s nearly impossible to correctly say what “would have
been” had we made another choice. Once you’ve made a choice in a sequential
game, the outcomes from the other choices are unknowable. Time—along
with all the unknowns that can change a game—has moved on. Life is a
constant stream of sequential games, but on your deathbed, you can’t waste
your remaining moments saying, “if only I had made a different choice.” It’s
a phantom reality you can’t really know.

The Airport Game


Having the tools of game theory can allow us to navigate problems that aren’t
exactly games, although they share a similar structure. These are not interac-
tions with others, but they do depend on some strategic thought. Here’s a
good example:
A high-priced consultant is at a party, and you hear him say, “if you never
miss a flight, you’re spending too much time at the airport.” Upon hearing
that, you wondered if you were doing something wrong because you had
never missed a flight in your entire life. You might also wonder if this guy
has had a bit too much to drink.
You’re not doing anything wrong; this is a dollars-versus-jollies issue. You
are choosing between your time and the value of your time. It’s not a sequen-
tial game because the future “choice” is actually just two possible future
outcomes, but we can use the framework of a sequential game to explain
the difference between your travel philosophy and the consultant’s.
Suppose the consultant—Robert—makes $500/hour, and he makes all
of his decisions based on money. He also can’t do any work while at the
airport (for some reason, he never has access to wifi. The airport is complete
downtime). He looks at the problem in something like Fig. 11.2.
Robert’s decision depends on money. If the time and cost of rebooking a
flight on occasion is less than $1000, Robert goes to the airport one hour
220 J. Prag and A. I. Prag

BOARDING
TAKES...

2 -$500
HOURS (AND COST OF
RESCHEDULING)
1 HOUR
EARLY

1
ROBERT HOUR -$500
GETS TO
THE
AIRPORT
2
-$1500
HOURS
3 HOURS
EARLY
1 -$1500
HOUR

Fig. 11.2 The airport game

early and occasionally misses a flight. He’s perfectly fine playing the odds of
how long boarding takes. How often does one actually miss a flight?
You make much less per hour (sorry). Unlike Robert, you make deci-
sions based on jollies which include income lost and other things, like your
personal anxiety about missing a flight. We can convert dollar costs to jollies
and still use the same hourly rate as Robert, but we can include negative
jollies when you worry about missing the flight and a lot of negative jollies
when you actually do miss a flight (Fig. 11.3).
  You get to the airport...

    1 hour early:
      boarding takes 1 hour  → -1000 jollies
      boarding takes 2 hours → -3000 jollies

    3 hours early:
      boarding takes 1 hour  → -300 jollies
      boarding takes 2 hours → -150 jollies

Fig. 11.3 The airport game, for you

This is still probabilistic, but you can understand why you and Robert make different choices. Nobody is right or wrong. It's completely rational for a person to pay more money to avoid a lot of anxiety.
This game also lets us end with an expression that isn't always right: "all's well that ends well." In this game, and in much of our book, we're highlighting the importance of how people feel about the outcome of a game: the jollies that they receive in the end. How you decide on a choice can and does affect your jollies. We've included things like honor, empathy, fairness, and now anxiety in the final payoff matrices. You can also include things like coercion, pride, and apathy. We consider these feelings when we specify a game for ourselves, intrinsically. We don't always know we're doing it, because we don't always understand our motivations, but how you feel about making a decision always affects your payoffs.
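Dollars or jollies, both versions of the airport game boil down to the same arithmetic: weight each boarding outcome by its probability and compare the two arrival times. Here is a minimal Python sketch; the 30% slow-boarding probability and the $1000 rebooking cost are our illustrative assumptions, not numbers from the chapter.

```python
# Expected-payoff sketch of the airport game.
# Payoffs follow Figs. 11.2 and 11.3; the 30% chance that boarding
# takes two hours is an assumed probability for illustration only.

P_SLOW = 0.3  # assumed chance that boarding takes 2 hours


def expected_payoff(if_slow, if_fast, p_slow=P_SLOW):
    """Probability-weighted average of the two boarding outcomes."""
    return p_slow * if_slow + (1 - p_slow) * if_fast


# Robert counts only dollars: $500/hour of airport time, plus an
# assumed ~$1000 to rebook when slow boarding makes him miss the flight.
robert_1hr = expected_payoff(-500 - 1000, -500)
robert_3hr = expected_payoff(-1500, -1500)

# You count jollies: anxiety and a missed flight hurt far more than idle time.
you_1hr = expected_payoff(-3000, -1000)
you_3hr = expected_payoff(-150, -300)

print(f"Robert: 1hr early {robert_1hr:.0f}, 3hr early {robert_3hr:.0f}")
print(f"You:    1hr early {you_1hr:.0f}, 3hr early {you_3hr:.0f}")
# Robert prefers cutting it close (about -800 vs -1500);
# you prefer going early (about -255 vs -1600).
```

Playing with P_SLOW shows why Robert shrugs off the risk: with a rebooking cost of $1000 or less, arriving one hour early beats three hours early at any probability, which is exactly the threshold the chapter mentions. Your jolly numbers flip the comparison, so the same arithmetic rationally sends the two of you to the airport at different times.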

Conclusion
Game theory is tricky and complicated even if you don’t make it mathemat-
ical or overly analytical. You have to make choices in situations where other
decision-makers are also making choices and the outcome depends on both
of these decisions. These situations happen every day and by the time you
become a functioning adult, you learn to navigate them without reading a
game theory book or taking a game theory class.
Before looking to mathematics and economics to complete your game
theory education, remember that you were taught to make decisions by many
people who came before you. People who dealt with similar problems and
situations and may have learned the hard way—by getting a bad outcome—
that their approach to the situation was flawed. You have already learned from
these previous mistakes—it’s why we study history in all its forms.
Understanding the structure of a game and the determinants of payoffs
(what makes people tick) requires insight and reflection. Learning game

theory and using it well is a life-long pursuit. If you want to help those who
come after you, pass along what you’ve learned and make the lessons easy to
remember—and never underestimate the power of a good expression!

Questions

11.1. Consider the following expression: “all’s fair in love and war.” Focusing
on the choice of words in this well-known caution, what word makes
you think that this has something to do with game theory? Which
aspect of game structure is it addressing? Finally, if the expression is
literally true, and knowing what you know about war, what is it saying
about the stakes (the payoff matrix) of “the game of love?”
11.2. We discussed expressions throughout this book that are dated or have
otherwise lost their context or reference point. “A stitch in time saves
nine” and “You reap what you sow” are examples. Let’s try to connect
some modern acronyms to what we know about game theory:
• YOLO. How is this related to sequential games?
• FOMO. How is this related to understanding your player?
• IMHO. How does this defer responsibility to another player?
11.3. Our book touches on some very personal and private areas of life.
Granny wasn’t as much of a prude as some people think; to para-
phrase a bit, “she saw some stuff.” Suppose you told her that you and
another person were “friends with benefits.” What would she worry
about, aside from the biological consequences? What changed from
her time to yours that negates some of her concerns?
11.4. Game theory tries to get people to think more clearly about the
process of mutually dependent decision-making; your choice and mine
together determine our outcome. Many other things are involved in the
process of decision-making, including skills and statistics. Some people
also rely on sketchy approaches to determining outcomes—like super-
stitions. We often do silly things because they worked once: I wore
socks inside out during the World Series and we won, so I guess I’m
doing that every time. As long as you don’t take these things too seri-
ously, applying superstitions to a random process doesn’t do any harm.
Does that logic extend to game-theoretic situations? What’s wrong with
thinking, “as long as I’m wearing my lucky hat, I’ll get the best possible
outcome” in a game?
11 Granny’s Gold 223

11.5. Willingness to play a game has many components: willingness to accept outcomes that might not be desirable, willingness to "share" the determination of that outcome, and willingness to be "screened" by other
players. This last issue is often the reason some people are unwilling to
run for political office.
Few people are willing to have their entire past brought out and put
on display. In the old days, it is argued, private lives were off limits.
If you think of this as an information acquisition issue, is that really
what’s going on? If it’s easier to find skeletons in closets, how can the
electorate adjust to these modern realities? Here’s a hint and one final
expression, “let he who has no sin cast the first stone.”
Answers to Discussion Questions

1.1. Homework and practice are ways of building skills. The end result
of these skills is better life outcomes; having the ability to excel at
something that “pays off.” As we’ll see, life is a sequential game where
choices and decisions today allow or preclude future outcomes. Parents
know, because they’ve already gotten there, that today’s skill-building
becomes tomorrow’s paycheck.
1.2. You didn’t listen to Granny about a fool and your money. The palm
reader, on the other hand, is assuming that no one will do unto her as
she has done unto you!
1.3. The biggest single problem—the one word that really makes this
expression a very bad application of decision-making strategy—is
“sight.” Considering the long-term nature of a relationship, the small
and fleeting amount of information that one can glean from the “first
sight” is insufficient to base a long-term commitment on.
1.4. “Lucky” Most of the skills you will need in that game are learned in
class on probability and statistics.
1.5. Game theory is about decision-making in situations where there are
multiple, inter-connected, decision-makers. Competitive sports often
include game-theoretic decisions, but sports are usually based on
physical skills.
2.1. Based only on things we’ve discussed so far (reputation, repeated
games, and other possibilities could have something to do with this
but we’ll talk about those things in later chapters) empathy outweighs
money for you in this case.

© The Editor(s) (if applicable) and The Author(s), under exclusive
license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7

2.2. "If." Game theory is the process of incorporating other players' choices
into your decision-making process. If you know with certainty what
the other player is going to choose, this is much more straightforward.
2.3. In the real world, you must figure out what's in people's payoff matrix: what gives them jollies. They also have to figure out your payoff matrix! That things with Janet work out reasonably well after twelve
years is, you’ll pardon the neighborhood pun, a two-way street.
2.4. Coordination game results require a mechanism—a shared approach—
for reaching a solution. Kids need to learn how to start games; how
to make rules that are inclusive. These things require experience with
starting and playing games. Adults help small children do this early in
life and those successes give kids experience in structuring games.
2.5. Many people start—with help from an understanding restaurant—
with choosing categories. Meat or fish or pasta or a big salad? Maybe
appetizers for the meal? Once you’ve made your category choices, you
can strategize with others at your table in hopes of getting a “taste” of
things that weren’t in your category of choice.
3.1. The couple’s alcohol-fueled commitment skipped far too many steps.
The couple did not have enough information about each other for the marriage commitment to be successfully maintained. Put
differently, one or both people almost certainly discovered that their
marriage partner was not well-suited as a permanent source of every-
thing that a happy marriage requires.
3.2. Obviously this question has many possible answers. Let’s assume that
they want to watch something together (so the answer is not grandpa
watches fishing on TV while grandson watches sci fi on his phone.)
The “been there, done that” approach that we discussed would say
grandpa should agree to watch something other than fishing. Perhaps
the science fiction show, perhaps a third option. But grandpa knows
what it’s like to be fourteen; grandson has no idea what it’s like being
sixty-five.
3.3. We’re going to go with learned. There are so many things that change
as a person gains experience that it’s hard to imagine a functioning
decision-maker is relying on the small number of jolly generators that
are hard-wired.
3.4. Understanding the source of the other decision makers’ jollies in no
way requires you to share that source of jollies. The key is under-
standing the source of jollies and also respecting the other person’s
tastes. We often hear the advice “love someone for who they are.”

Simplistic though it is, that’s a much better recipe for a happy long-
term relationship than judging everyone in your life by your tastes and
standards.
3.5. People act differently in public. Your boyfriend is less likely to
burst into tears, become verbally abusive, or many other unpleasant
responses intended, perhaps, to change your decision.
4.1. Take a picture with a smartphone and text the photo to would-be
rescuers. But this technology comes with a twist: early on in this story, after one or two fake sightings, the kid would be told to send a picture
and without that confirmation no one would come. Heaven help the
kid if his cellphone’s battery ran out!
4.2. We have laws that penalize people (fines, imprisonment, etc.) for selling
things under false pretenses.
4.3. “Blocking and tackling” in football are important to the outcome of
a game but they are among the least fun things that team members
have to do. That reality transfers to the business world. In general,
employees don’t enjoy doing the jobs that the boss is referring to when
she says, “we need to focus on blocking and tackling.”
4.4. It isn’t fun for anyone when you lie about things like your educational
background or your legitimate skillset. No one wants to have brain
surgery from someone who lied about going to medical school.
4.5. Singles bars. Feel free to replace “singles” with any hard-to-signal
attribute and you’ll get the idea. In most cases, these places do not
require that customers are part of the group, but they provide a place
that people can willingly and knowingly go to meet similar people.
5.1. Personal choice often conflicts with cooperative behavior. One can
make a personal choice to be cooperative of course, so where’s the
problem? The problem is in the word mandate. Being told, or ordered,
to be cooperative is different from choosing to be cooperative. And
when the issue is a communicable disease, the importance of coopera-
tion—either mandated or chosen—is clear. People other than yourself
can die because of your noncooperative choice. Mask mandates are
arguably less effective than vaccines, but they are visible, and they
have the ability to reinforce a sense of seriousness about controlling the
problem. In that way, mask mandates are both symbolically and, at least to some extent, literally a cooperative way of controlling communicable
disease without mandating that people get a vaccine.
5.2. He could hold the door open for everyone. That seems like a reason-
able solution, but how would a woman who was offended by this old-fashioned behavior know you do this?

5.3. There need to be resources available to catch and (ultimately) punish those who are being noncooperative. If a thief knows that there aren't
enough resources in the local legal justice system to find and punish
them, cameras don’t do much good.
5.4. Paying in advance for the right to not wait in line is giving up dollar jollies to avoid losing time jollies. Paying a fine for speeding or stealing or
some other noncooperative behavior is a systematic way of taking back
some of your ill-gotten noncooperative jollies.
5.5. Any game in which competitive outcomes are important to maxi-
mizing overall results. That includes zero-sum sports games but also
economic games that depend on the best player winning.
6.1. Making a simultaneous game sequential creates a situation that’s
stronger than a commitment. The second player in a sequential game
knows with certainty what the first player’s choice is. A commitment in
a simultaneous game, as we’ve seen, could be a bluff. Agreeing to make
the game sequential and then “going first” is an ironclad commitment.
6.2. Rules that limit voting to one gender or one class of people or one
ethnicity are all good examples. And having created the situation where
only your type of people are in charge, you have the ability to control
and even abuse all other types of people in your society. Alternatively,
if your society allows everyone to vote, rules and laws are less likely to
restrict access to things like education and other social benefits to only
the chosen few.
6.3. When some of the choices are unambiguously bad. In tic-tac-toe, no
choice at the beginning of the game is guaranteed to be bad. On the
other hand, giving a small child access to anything and everything that they might want to try does include things that are clearly bad for them.
6.4. At the beginning of the second half of a game like this, information has been revealed. Each coach has been able to observe the strengths,
weaknesses, and strategies of the other team. If the point of receiving the ball at the beginning of each half is to score points, the team that
receives the ball in the second half can approach that task with much
more information.
6.5. You could write a payoff matrix with your current and future self as
players and you could have one outcome that’s very, very bad which
leads to a Maximin approach—avoid the choice that could lead to that
outcome. The part of the problem that would seem to be impossible is
making simultaneous choices with your present and future self.

7.1. Cynical though it sounds, having lots of people at your wedding means
there are lots of people watching you, seeing you at the local bar
or restaurant with someone other than your spouse, and potentially
reporting their observations to your spouse.
7.2. The problem is in using the word “promise.” Situations can always
change, and promises do sometimes have to be broken but if you’re
raising your child to use the word promise correctly, you might say
you plan to take her to Disneyland instead.
7.3. Yikes! Well, a credible death threat potentially takes away all of your
future payoffs so the choice between your money or your life is a no-
brainer. Credibility is key. The implicit promise is you won’t be killed
if you give up your money. If you have reason to doubt that implicit
promise, your choice might be different.
7.4. If you give in to the bully, he will bully you forever. The advice-giver is
seeing this as a repeated game where your decision today will establish
your reputation.
7.5. In the setup of a simultaneous game we say that both players are
choosing at the same time which makes promises and threats—what
we called response rules—impossible. It has been argued that you could
somehow pre-program your choices in a simultaneous game to align
with a promise or threat. The promiser/threatener would program a
computer or something like that to automatically choose something if
the other player makes a choice. While this sounds possible, it’s hard
to find any examples in the real world.
8.1. A one-period (simultaneous) game with a stranger has no way of incor-
porating reputation or long-run benefits of cooperation that counteract
short-run benefits of self-interest. One-time-only games with strangers
usually need to be played with a lot of caution and an assumption of
noncooperation.
8.2. Assuming they knew about the worst-case scenario—the accident,
etc.—what likely happened is that they drank enough to be mentally
impaired and thus didn’t correctly incorporate the future downside.
8.3. There is a perception that there won’t be any future in which you can
be punished for bad behavior.
8.4. The officers don’t live forever. Policymakers in situations like this
often focus too much on the short run—they want to be liked or
re-elected—at the expense of long-run success.

8.5. No hell means no lost jollies forever, just more jollies in heaven or no
more jollies. In that case, a bad actor could try to accumulate enough
short-run jollies during their lifetime to make forever jollies in heaven
unimportant.
9.1. The frontline people are also the ones who are likely to be hurt or killed
first. While not actively wanting to start World War III, a person on the
front line will still do what’s necessary to protect themselves. The high-
level leader can limit things (or not). They can agree to let skirmishes
happen on the front lines while not allowing nuclear missiles to be
fired. That said, frontline decision-makers might know (sooner) that
things have already escalated too far: the use of things like chemical weapons, for example.
9.2. When the fuse is lit, it burns slowly but steadily toward detonation.
Pull out or cut the fuse—defuse the explosive—and there will be no
detonation. This is brinkmanship with a twist. The escalation of the
situation is based only on time. Once the lit fuse reaches the explosive,
there is no turning back. The bad outcome isn’t exactly at the end of a
slippery slope. The fuse burns steadily, and it can be cut right up until
the explosion.
9.3. This is the situation with voting for representative policy makers in
some circumstances. The voters are the frontline players, and they often
want a happy-ever-after society in which to live for a long time. Elected
officials are only in office for a few years so they might care more about
the short run (being liked or being re-elected.) One might argue that
this is a flaw with representative government.
9.4. Leaders should care about the people being led more than they care
about themselves. The egomaniacal leader, in a brinkmanship situa-
tion, is more likely to start a war because he doesn’t care about how
many people he kills.
9.5. Eventually, you would call that player dead. If there is a point, it isn’t
in the game of chicken. Weakness and strength as concepts make a
lot more sense when we think about situations where threats are being
used to determine the outcome of repeated games where one player is
trying to avoid a bad outcome every time they play.
10.1. If the city was 80% lower income, elected officials would impose most or all of the tax burden on the rich 20%. Here's where we
must remember that people usually have the choice not to play the
game at all. The rich people could just leave; move to another city (we
purposely chose a city rather than a country to make this easier to do.)

If that happens, there would be no tax revenue and no government spending.
If the city was 80% rich people, they would vote to lower their own
taxes. They might vote to increase taxes on the poor 20% as well but
those people don’t have very much income to tax. In this case, and
given the setup of the question, the city again wouldn’t have much
money to spend on government things. Voting for people who always
promise to tax the other guy doesn’t result in a well-funded government
sector.
10.2. No. Next question.
… Okay, a little more on this. Party politics is a game, but it isn’t
one that helps any voter in the long run. Party politics is like the brawl
that breaks out in the stands among soccer fans. The brawl will have
winners and losers, but it has absolutely nothing to do with the reason
you came to the stadium—the soccer game. The point of representative
democracy is to form a government that serves most of the people most
of the time. That doesn’t have anything to do with party affiliation.
10.3. Ignoring the rules of the Senate and the House of Representatives, the
ability to pass legislation in either body depends on getting slightly
more than half of the elected officials on your side. With that in mind,
51 (a majority of the Senate) is easier to reach than 218 (a majority of the House). So, everything else the same, senators.
10.4. In this case, the third-party candidate is taking voters away from the
candidate that they are more closely aligned with. So, a left-leaning
third-party candidate helps the right-leaning candidate win and a right-
leaning third-party candidate helps the left-leaning candidate win.
Given that, third-party candidates usually get a lot of support from
the candidate that they are least aligned with.
10.5. This is a bit of a trick question. Looting is never a good thing. But
reasonable in this case might focus on the likelihood of being caught
and punished for taking things without paying. When you are part of
a mob looting a store, you probably feel like it’s unlikely anything will
ever happen to you, so you are getting free jollies.
11.1. Fair. Fairness often refers to the rules of games. Love, and to a lesser extent war, are strategic, not random or physical, so there is an element of game theory here. The implication is that love, like war, is a zero-sum game—a game with a clear winner and a clear loser.
11.2. YOLO is “you only live once.” The usual application of this acronym
is to promote choices that increase current jollies while ignoring or
minimizing future jollies. The easiest pushback against this attitude

would be “but if you make very bad choices today, you don’t live very
long!”
FOMO is “fear of missing out.” Players often play games or make
choices because or peer pressure. A person literally loses jollies because
of their perception of what others think of them. Not being part of the
group (missing out) can make a person feel lonely or “uncool.” The
pushback against this attitude is a reminder that being part of some
groups is far worse than being left alone. There’s nothing cool about
being part of a group that hurts or abuses other people.
IMHO is “in my humble opinion.” Players may use this term to
delegate authority to another player. By transferring responsibility, the
player also defers the consequences of the action. We do it all the time
in conversation to absolve ourselves of choices made by friends and
family.
11.3. Granny knows that “benefits” are hard to keep under control. “Bene-
fits” often lead to attachments. She might say “how can you both be
sure that your benefits can stay friendly?”
11.4. Thinking that wearing your lucky charm has changed a probabilistic outcome
is innocuous because, well, it didn’t. At worst, you might play games of
chance too often if you follow this logic, but your superstition didn’t
actually affect the outcome. In a game-theoretic situation, your belief
that a lucky charm of some kind always leads to good outcomes might
make you less likely to figure out the game correctly and thus less likely
to make the correct, game-theory-based choice.
11.5. The best approach to this change is for voters to accept and contextu-
alize the flaws in those who are running for political office rather than
pretend the person has no flaws. Wishing that—like the “old days”—
no one wanted to (or could) find the skeletons in your closet allows
people with very bad “skeletons” to become political leaders.
Glossary

Agency Theory, Principal-Agent Problem The area of study that deals with problems
that arise when a decision-maker (the principal) delegates decisions to a subordi-
nate (agent) where these decisions or choices will determine outcomes for both
parties. The “problem” is how can the principal get the agent to act in the
principal’s best interest and not their own best interest.
Assurance Certainty
Assurance Game A game with many possible solutions (equilibria) in which players
have found a coordination mechanism that allows them to reach one of the
solutions.
Asymmetric Information A situation in which one player has more information than
other players.
At the Limit A holdover from mathematics, this refers to playing a game an infinite
number of times.
Balanced Equal
Binary Choice See: choices
Bluff, Bluffing Announcing a choice that you don’t actually intend to make.
Brainwashing Psychologically manipulating someone into believing things that are
not true or things that are not in alignment with a person’s existing value
structure.
Break-even Price The price that exactly equals the cost of production.
Brinkmanship Situations where the only available threat is generally considered to
be too big to be credible. Players often start down a path that could lead to
the threat happening. Different players and different decisions move the process
down what is often called a “slippery slope.”
Cartel A group of firms that coordinate their production and prices in order to
maximize group profits. A cartel is, in effect, a coordinated “group monopoly.”

© The Editor(s) (if applicable) and The Author(s), under exclusive
license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7
Cognitive Bias A thought process that is skewed, often inappropriately, toward a set
of beliefs or a particular outcome.
Choice, Binary A choice set with only two options for each player.
Choice, Non-Binary A choice set with more than two options.
Choice Process The approach that a person uses when choosing between alternatives.
Choices, Choice Set The available alternatives that a player or players must choose
between.
Cold War The period between the end of World War II and the early 1990s when
the US and USSR were building large arsenals of nuclear weapons, purport-
edly for deterrence and defense, and using the threat of nuclear annihilation to
destabilize the world.
Collective-action Games Games in which individual choices and outcomes depend
on the choices and actions of a larger group.
Collective Bargaining (Power) Often discussed in relation to labor unions, people who
agree to bargain as a group rather than individually. In many cases, the group has
better bargaining power than any one individual.
Collusion Players actively coordinating their decisions or choices. Famously, busi-
nesses might do this to keep prices from falling as they would if they actively
competed. This is called price collusion.
Commitment A choice that one player says they are going to make before a game is
played. Commitments are usually made in simultaneous games, games in which
all choices are made at the same time. Committing to a choice is saying, in
advance, what you are going to choose.
Complements Things that are used together or at the same time. Often, things that
augment each other.
Consistent Making the same choices or decisions under the exact same circum-
stances.
Context The situation under which a decision is being made.
Control Ability to determine the outcome.
Cooperation, Cooperative Games, Cooperative Solution An active alignment of the
decision-making process by all players. We generally think of cooperation as a
decision on the part of all players to make choices that are best for the group
rather than any one player.
Coordination Game A game with many possible solutions (equilibria) in which
players must find a way to reach one of the solutions. If players can find the
“coordination mechanism,” this becomes an assurance game.
Counterpart A different decision-maker who is part of the opposite player group.
Credible, Credibility A measure of believability.
Cuban Missile Crisis The thirteen days in 1962 when the US and the USSR dealt
with the Soviet attempt to put nuclear missiles in Cuba.
Culture Clash A situation in which players have different backgrounds or belief
systems and thus do not approach things in the same way.
Decision Tree See: extensive form
Delegated Authority Passing decision-making responsibility to other decision-
makers.
Different Describing players who are motivated by different extrinsic or intrinsic
factors.
Dominant Strategy A choice that’s best for a player regardless of what the other
player chooses. In a game-theoretic setting, a dominant strategy is the choice a
player should always make, given the possible outcomes.
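As a concrete illustration of this definition (a sketch with made-up payoffs, not numbers from the book), the check for a dominant strategy is mechanical: a choice is dominant if it does at least as well as every alternative, no matter what the other player picks.

```python
# Hypothetical Prisoner's-Dilemma-style jollies for the row player,
# indexed by (my_choice, partner_choice). Higher is better.
ROW_JOLLIES = {
    ("confess", "confess"): 1, ("confess", "deny"): 10,
    ("deny", "confess"): 0, ("deny", "deny"): 5,
}
CHOICES = ("confess", "deny")

def dominant_strategy(jollies, choices):
    """Return a choice that is at least as good as every alternative
    regardless of what the other player does, or None if none exists."""
    for mine in choices:
        if all(jollies[(mine, theirs)] >= jollies[(alt, theirs)]
               for theirs in choices for alt in choices):
            return mine
    return None

print(dominant_strategy(ROW_JOLLIES, CHOICES))  # confess
```

With these illustrative numbers, "confess" beats "deny" whether the partner confesses (1 vs. 0) or denies (10 vs. 5), so it is the dominant strategy.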
Dominated, Dominated Choice A choice or some other alternative that never makes
the decision-maker better off.
Duress A state of stress or extreme discomfort, sometimes caused by external
pressures.
Efficient Market In economics and finance, a market in which what’s often called
excess profits—profits above what is required to take on an asset’s risk—are
impossible in the long run because markets are composed of many participants,
all searching for excess profits. Forces of supply and demand move excess profits
to zero over time.
Empathy Concern for the well-being of others.
Equilibrium A stable stopping point for a process.
Escalation Increasing magnitude or intensity.
Extensive Form A way of presenting a decision process as diverging branches, similar
to the branches on a tree. Also, Tree Diagram, Decision Tree.
Factors Determinants.
Familiar, Familiarity A relationship where the person’s preferences are more known to
you, perhaps like a family member. Opposite: unfamiliar.
Fairness A general sense that people or players are equally well-off.
First-mover Advantage In sequential games, a setup that gives the player who chooses
first the ability to determine his best outcome.
Focal Point The coordination mechanism that allows a coordination game to reach
one of the solutions. A focal point is a shared belief, shared values, or a shared
approach to decision-making that makes a coordination game an assurance game.
Forgive, Forgiveness The act of allowing a person who has made an inappropriate
choice to play future games without any additional punishment or other penalty
related to that choice.
Free-Rider Problem In economics, the situation in which one member of a large
group can use or consume something without paying for it because the group is
assumed to have paid for it.
Frequency The timing of events or occurrences.
Game Structure In a game-theoretic situation, this is the players, the choices, the
payoffs, and the rules for making choices (simultaneous vs. sequential games,
repeated games, etc.)
Game Theory Also known as strategic decision-making, the set of concepts and
approaches for analyzing decisions and choices in situations where many players’
decisions ultimately determine each other’s outcomes.
Gaslighting Similar to brainwashing, gaslighting is providing a person with false
information which ultimately causes them to question their evaluation of the
game.
Group-think A colloquialism referring to a crowd or herd mentality.
Guarantee To assure something.


Happiness A generally pleasurable human condition.
Heuristics A fast, often “hands on” way of setting preferences and making choices.
A learned mental shortcut.
Honor An individual’s personal, core beliefs.
Imbalanced Unequal
Immutable Unchangeable
Information Asymmetry See asymmetric information.
Information What is known about a player, a choice, a situation, or an outcome.
Irrational For our purposes, not following a consistent line of reasoning or not using
a consistent set of preferences.
Irreversible Cannot be changed
Irrevocable See immutable.
Iterate A step-by-step process for reaching a conclusion or determining an outcome.
Jollies Happiness units. Used only metaphorically in the economics area known as
utility theory.
Last-Period Problem In many games, the reality that players always act self-
interestedly in the last period of a repeated game.
Lie Purposely and actively providing incorrect information.
Lightbulb Moment A revelation. The point in time when a person realizes or
understands something.
Limit See At the limit
Maximin Maximize the minimum possible outcome. Making a choice to actively
avoid a very bad outcome.
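As an illustration (a sketch with hypothetical jollies, not numbers from the book), the maximin rule looks at the worst case of each choice and picks the choice whose worst case is least bad:

```python
# Hypothetical jollies for each of my choices under each possible
# choice by the other player. The numbers are illustrative only.
JOLLIES = {
    "risky":   {"they cooperate": 10, "they defect": -50},
    "careful": {"they cooperate": 3,  "they defect": 2},
}

def maximin_choice(jollies):
    """Pick the choice that maximizes the minimum possible outcome."""
    return max(jollies, key=lambda choice: min(jollies[choice].values()))

print(maximin_choice(JOLLIES))  # careful
```

"Careful" wins here because its worst case (2) beats the worst case of "risky" (-50), even though "risky" has the higher best case.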
Median-Voter Model In political science, the idea that candidates will try to appeal
to the average voter. If candidates have a political bias (often called right or left)
and each candidate gets all of the voters in that bias group, the candidate who
gets more of the middle—the median voters—will win the election.
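The mechanism can be sketched with hypothetical voter positions (a toy simulation, not an example from the book): each voter supports the candidate closest to them, so the candidate nearer the median wins.

```python
# Hypothetical voter positions on a 0-100 left-right spectrum.
VOTERS = [10, 20, 35, 50, 65, 80, 90]

def vote_count(cand_a, cand_b, voters):
    """Each voter votes for the closer candidate (exact ties abstain here)."""
    a = sum(1 for v in voters if abs(v - cand_a) < abs(v - cand_b))
    b = sum(1 for v in voters if abs(v - cand_b) < abs(v - cand_a))
    return a, b

# A candidate at the median position (50) beats one positioned at 70:
print(vote_count(50, 70, VOTERS))  # (4, 3)
```

The candidate at 50 captures every voter up to the midpoint between the two positions, which is why both candidates have an incentive to move toward the median.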
Mixed Signal Providing information that can have more than one meaning or
interpretation.
Mixed Strategy In games that cannot be played or solved deterministically, this
approach makes choices or decisions probabilistic.
Mnemonic A catchy way to remember something.
Model In economics and other disciplines, an abstract, often mathematical represen-
tation of a “real-world” activity or problem. A model is often a set of equations,
graphs, or diagrams (like a payoff matrix in Game Theory) that captures the
essence of the problem.
Monopolist A single firm that dominates its market and has the ability to control
total market quantity and thus market price. A monopolist is able to make and
maintain excess profits because there is no entry in their market.
Mutually Beneficial Outcome A game-theoretic process that has a final outcome that
all players are happy with.
Nash Equilibrium A choice that is best for a player given what other players have
chosen. A Nash Equilibrium is attained when no player can unilaterally make
themselves better off given their current choice and all other players’ choices.
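The "no unilateral improvement" condition can be checked directly. As an illustrative sketch (the payoffs below are hypothetical, not from the book):

```python
# Hypothetical two-player game: PAYOFFS[(row, col)] = (row_jollies, col_jollies).
PAYOFFS = {
    ("A", "A"): (2, 2), ("A", "B"): (0, 3),
    ("B", "A"): (3, 0), ("B", "B"): (1, 1),
}
CHOICES = ("A", "B")

def is_nash(row, col):
    """True when neither player can unilaterally improve their own jollies."""
    row_best = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(alt, col)][0]
                   for alt in CHOICES)
    col_best = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, alt)][1]
                   for alt in CHOICES)
    return row_best and col_best

equilibria = [pair for pair in PAYOFFS if is_nash(*pair)]
print(equilibria)  # [('B', 'B')]
```

In this Prisoner's-Dilemma-like setup, (B, B) is the only Nash Equilibrium even though both players would prefer (A, A): from (A, A), each player can unilaterally switch to B and gain.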
Negotiation The situation in which players try to divide the outcome from a joint
venture.
Non-binary Choice See: Choice.
Observable Easily seen.
Original Boundaries The initial parameters or limitations of a situation or game.
Outcomes See Payoffs.
Passive Strategy The decision to not actively make choices.
Payoff, Payoff Matrix What players will get when all players have made their choices.
Perfect Competition An economics model in which there are many, small, homoge-
neous firms and easy exit and entry. No firm, in this model, is able to make excess
profits.
Players In game-theoretic situations, the decision-makers.
Polarization A situation in which players gravitate to extreme political positions.
Populus Candidate A political candidate with broad appeal to the vast majority of
the electorate.
Power The ability to exert influence over another person.
Preferences An ordering of outcomes based on a person’s desires.
Prejudice A biased choice often based on spurious information or skewed beliefs.
Price Collusion See Collusion.
Principal-Agent Problem See Agency Theory.
Prisoner’s Dilemma A well-known simultaneous game in which players must decide
whether to confess to a crime they committed with a partner, not knowing
whether the partner will confess.
Promise In sequential games, a choice that the player making the second choice
says they are going to make if the player making the first choice doesn’t make a
particular choice.
Prospering In general, doing very well or getting good outcomes repeatedly.
Proxy A player who makes decisions for other players, usually with the other
players’ preferences in mind.
Pyrrhic Victory Winning a game but at a very high price.
Rational For our purposes, following a consistent line of reasoning or using a
consistent set of preferences.
Reinforcement Strengthening a position.
Repeated Game Any game that’s played more than one time with the same setup
(same players, choices, payoffs, etc.)
Repressive Control Mechanism Any way in which societies prevent a subgroup from
playing the game. We associate these mechanisms with rules that don’t let some
people vote.
Reputation In a general sense, what other players think about you.
Revenge Exacting retribution or imposing punishment for another player’s inappro-
priate choice.
Risk See Uncertainty.
Rollback, Rollback Solution In sequential games, the ability of the player choosing
first to find a path (a set of choices by both players) for achieving a particular
outcome. A rollback solution requires the first player to consider what the second
player will choose under all circumstances.
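This "reason from the end of the game backward" idea can be sketched on a tiny tree. The game below is hypothetical (not an example from the book): leaves hold each player's jollies, and each mover assumes later movers will maximize their own jollies.

```python
# A small hypothetical sequential game. Internal nodes are
# (mover, {choice: subtree}); leaves are (P1_jollies, P2_jollies).
TREE = ("P1", {
    "left":  ("P2", {"up": (3, 1), "down": (0, 0)}),
    "right": ("P2", {"up": (1, 2), "down": (2, 3)}),
})

def rollback(node):
    """Backward induction: solve each subgame first, then let the
    current mover pick the branch that maximizes their own jollies."""
    if isinstance(node[0], str):          # an internal decision node
        mover, branches = node
        idx = 0 if mover == "P1" else 1   # which payoff this mover cares about
        return max((rollback(sub) for sub in branches.values()),
                   key=lambda payoff: payoff[idx])
    return node                           # a leaf: the payoff pair itself

print(rollback(TREE))  # (3, 1)
```

Here P1 anticipates that after "left" P2 will pick "up" (payoff (3, 1)) and after "right" P2 will pick "down" (payoff (2, 3)), so P1 chooses "left" and secures 3 jollies.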
Rules The procedures, usually pre-specified, for playing a game.
Rules of Thumb A colloquial expression referring to choices that work well in most
situations.
Scale of Preferences The possibility that the ordering of preferred outcomes does not
represent the full extent of preference. It is the difference between saying “better”
and saying “a lot better.”
Screen In situations where there is a less-informed and a more-informed player,
this is a mechanism for the less-informed player to learn some of the unknown
information.
Second-best Solution In economics, a solution to a problem in which a superior
solution exists.
Second-mover Advantage In sequential games, a setup that gives the player who
chooses second the ability to determine his best outcome.
Secrets Information known to some players that they are not supposed to reveal.
Self-interest Incorporating only the interests and desires of yourself.
Sequential Game A game in which there is a pre-specified order in which players
make their choices. Well-known games like chess, checkers, and Tic-Tac-Toe are
sequential games.
Setup In Game Theory, the players, choices, rules, and payoffs for a game.
Short-sightedness A tendency to focus too much on outcomes that will occur sooner
rather than later.
Signal In situations where there is a less-informed and a more-informed player, this
is a mechanism for the more-informed player to reveal some of the unknown
information.
Signaling Theory The body of work that analyzes the transfer of information from a
more-informed player to a less-informed player.
Signal-to-Noise Ratio A mathematical colloquialism that refers to the accuracy of
information. A high signal-to-noise ratio means that information is generally
accurate.
Simultaneous Game A game in which all players make their choices at the same time.
Sincere Believable and trustworthy.
Skill An inherent or learned competitive advantage.
Social Norm In any society, the way people are expected to act. Social norms typically
come from a society’s culture, history, and shared beliefs.
Specify Indicate or dictate. When we specify a game, we understand the other
player’s jollies and the potential outcomes. Specifying a game correctly leads to
better outcomes. If a game does not land in your favor, or at an outcome you
didn’t anticipate, you mis-specified the game.
Stalemate At the end of a game, the situation in which neither player has a move
that allows them to win.
Strategic Advantage A situation such as superior information that allows one player
to make choices that will move the outcome of a game in their favor.
Strategic Decision-Making See game theory.


Strategic Moves Saying what you’re going to choose before a game is played.
Commitments, promises, and threats are strategic moves. These are done in order
to change the choice of other players and ultimately change the outcome of a
game.
Structure See Game Structure.
Subgame A smaller game that is embedded in a larger game.
Suboptimal An outcome that does not represent the best possible outcome.
Thief A person who takes things that don’t belong to them.
Threat In sequential games, a choice that the player making the second choice says
they are going to make if the player making the first choice makes a particular
choice.
Timing In Game Theory, refers to when players make their choices relative to each
other.
Tipping Point A point beyond which a situation will irrevocably reach a conclusion.
Numerically, this is often a point beyond which a process will irrevocably reach
zero.
Transfer Portal In college sports, a virtual “door” in which athletes change the college
they are playing for.
Trust Believing in another player’s choices when direct observation is not possible.
Truthfulness Related to the act of revealing correct or accurate information.
Uncertainty Any situation in which outcomes are risky or cannot be accurately
determined in advance.
Unilateral Decision Any decision that can be made without regard to other decision-
makers.
Unilaterally On your own or by yourself.
Unstable Any situation in which the current result isn’t going to stay the same for
very long. See also: Equilibrium.
Utils See Jollies.
Utility, Utility Theory In economics, the idea that the consumption of anything
makes the consumer happy or satisfies a need.
Whistleblowing Reporting to higher authorities an illegal or otherwise inappropriate
behavior on the part of some decision-maker.
Zero-Sum Game A game in which there is an unambiguous winner and an unam-
biguous loser. Sports games are zero-sum games.
Index

A
Agency Theory 18
Anonymity 180
Artificial intelligence (AI) 3, 74, 78, 208
Assurance 88
Assurance game 233–235
Asymmetric information 64, 74, 88, 202, 214, 233
asymmetries 74
At the limit 142

B
Balanced 120
Basketball 6, 11, 15, 33, 37, 68, 82, 94, 210
Bias 12, 22, 45, 46, 67, 71, 78, 193, 217, 236
Bluff, Bluffing 65–67, 72, 77, 85, 88, 126, 127, 132, 202, 204, 213, 233
Boundaries 2, 8, 152
Brainwashing 213, 233, 235
Break-even price 181
Brinkmanship 161–169, 171–174, 233

C
Cartel 181, 182, 233
Certainty 26, 49, 66, 92, 118, 233
Choice (binary) 12, 233, 234
Choice (non-binary) 234, 237
Choice process 102, 234
Choices, Choice Set 10, 13, 38, 43, 76, 83, 234
Choice set 7
Cognitive bias 66, 234
Cold War 84, 85, 234
Collective-action games 177, 179–181, 186, 193–196, 234
Collective bargaining 183
Collective bargaining power 182
Collusion 24, 147, 149, 234
Commitments 19, 22, 31, 68, 71, 95, 123–129, 133–136, 138, 156, 208, 234, 239
Communication 8, 31, 47, 70, 71, 73, 78, 88, 90, 123, 136, 152
Complements 47, 234
Consistency 56
Consistent 46, 234
Context 17, 234
Control 2, 10, 14, 51, 52, 74, 83, 92–94, 96, 98, 112, 117, 155, 163, 167, 171, 179, 195, 196, 207, 209, 213, 234, 236
Cooperation 5, 34, 37, 45, 81–86, 95, 97, 121, 124, 136, 142, 146, 150, 152, 177, 234
cooperative games 81, 83, 234
cooperative solution 2, 26, 32, 76, 81, 84, 85, 88, 90, 91, 95–97, 127, 149, 152, 153, 155, 172, 234
Cooperative behavior 82
Coordination games 87, 88, 90, 182, 234, 235
Counterpart 165, 234
Credible, Credibility 123–126, 129–133, 135, 161, 162, 164, 172, 173, 203, 215, 233, 234
Cuban Missile Crisis 162, 164–169, 234
Culture clash 234

D
Decision tree 108, 142, 200, 234. See also Extensive form
Delegated authority 163, 234
Disneyland 2
Dominant strategy 25, 26, 30–32, 39, 41–44, 50, 51, 59, 60, 84, 86, 88, 91, 107, 124, 126, 128, 130, 138, 142, 143, 145–147, 235
Dominated 40, 235
Dominated choice 40
Dominated strategy 44
Duress 5, 235

E
Efficient markets 178, 235
Empathy 26, 27, 29, 129, 155, 220, 235
Equilibrium 50, 86, 125, 192, 235
Escalation 161, 235
Extensive form 103, 105, 235. See also Decision tree

F
Factors 14, 235
Fairness 59, 121, 217, 220, 235
Familiar, Familiarity 6, 9, 10, 21, 23, 42, 86, 97, 110, 112, 212, 235
Fatigue 54
First-mover advantage 83, 106–108, 110, 121, 162, 235
Focal point 87, 235
Football 80, 119
Forgive, Forgiveness 93, 94, 153, 154, 235
Free-rider problem 183, 235
Frequency 36, 63, 141, 153, 235

G
Game structure. See Structure
Game Theory 1–5, 7–12, 14–19, 21, 23–26, 28, 29, 34, 37–40, 42, 45–47, 49–51, 53–55, 60, 61, 63–65, 68, 71, 73, 78, 79, 81–83, 86, 93, 95, 101, 102, 104, 106, 107, 110, 111, 113, 116, 120, 123, 127, 134, 135, 138, 146, 151, 153, 155, 156, 161, 162, 164, 166, 167, 174, 177–179, 186, 187, 189, 190, 192, 193, 199–205, 209–212, 214–217, 219, 221, 222, 235, 236, 238, 239
Gaslighting 213, 235
God 152
Government 2, 114, 162, 167, 171, 174, 179, 180, 183, 186, 189, 191–193, 196, 206
Group think, Group-think 177, 180, 235
Guarantees 70, 126, 236

H
Happiness 12, 22, 27, 29, 30, 34, 45, 50, 51, 60, 83, 112, 115, 236
Heuristics 9, 46, 87, 236
Honor 22, 23, 30–32, 35, 37, 90, 158, 220, 236

I
Imbalanced 120, 236
Immutable 130, 236
Information 5, 10, 12–14, 17, 37, 41, 42, 44, 47, 52, 54, 59, 63–75, 77–79, 85, 88, 91, 93, 96, 97, 110–112, 117, 124, 126, 130, 132, 134, 138, 141, 142, 146, 163, 167, 168, 174, 181, 182, 193, 194, 201–204, 208, 209, 212–215, 217, 235–239
Information asymmetry 64, 236. See also Asymmetric information
Irrational 46, 167, 236. See also Rational
Irrationality 47, 50, 167
Irreversible 126, 236
Irrevocable 236. See also Immutable
Iterate 86, 236

J
Jollies 27–31, 34, 39, 40, 42, 45, 46, 49–52, 54–57, 59–61, 63, 65, 70, 73, 76, 77, 79, 81, 83, 84, 86, 92–94, 104, 105, 113, 115, 120, 124, 127, 129, 132–134, 157, 158, 166–168, 179–181, 184, 194, 200, 202, 205, 207, 211, 213, 216, 217, 220, 236

L
Last-period problem 146, 147, 152, 153, 162, 236
Leadership/Leaders 17, 18, 47, 54, 68, 69, 92, 94, 97, 163–166, 168, 174, 184, 190, 192–195, 212, 215
Lies 78, 152, 201, 202, 236
Lightbulb moment 16, 236
Limit 10, 14, 91, 93, 112, 117, 145, 153, 158, 163, 209, 236

M
Maximin 48–50, 93, 104, 105, 121, 170, 236
Median-voter model 187, 190, 191, 236
Mixed signal 71, 72, 75, 77, 205, 236
Mixed strategy 210, 236
Mnemonics 115, 236
Model 1, 2, 46, 95, 112, 178, 179, 192, 236, 237
Monopoly/Monopolist 181, 182, 236
Mutually beneficial outcome 4, 81, 87, 88, 163, 236

N
Nash Equilibria 28
Nash Equilibrium 37, 86, 126, 236
Negotiation 76
Negotiations 15, 19, 26, 45, 65, 76, 77, 163, 166, 167, 174, 204, 237
Non-binary game 81

O
Observable 83, 125, 237
Organizational boundaries 2
Outcome 1–18, 21, 22, 24, 26–29, 34–37, 43, 45–58, 60, 61, 63–65, 67, 68, 70–72, 75–83, 85, 86, 88, 90, 92–97, 101–108, 110, 112, 115, 116, 118, 120, 121, 123, 124, 126, 129, 130, 132, 134–138, 141–143, 145–150, 152–154, 156–158, 161, 162, 164–167, 169, 170, 172, 174, 177, 178, 180, 182, 183, 185, 187, 189, 191, 192, 194, 196, 199, 200, 202, 203, 211–221, 233–239

P
Passive strategy 163, 237
Payoff, Payoff matrix 5–7, 12–16, 18, 24–31, 33–35, 37–39, 45–47, 49–51, 53–57, 59–61, 63, 64, 70, 73, 75, 77–79, 82–84, 86, 88–90, 95, 98, 102, 104, 107, 108, 110, 112, 113, 118, 120, 121, 124, 126, 128–130, 133, 135, 141–149, 151, 153, 157, 158, 162, 165–167, 170, 172, 177, 178, 194, 195, 211, 213, 220, 221, 235–238
Perfect competition 178, 237
Players 2, 4–16, 21, 24–27, 29–42, 44–52, 54–56, 58–61, 63–70, 72, 73, 75–79, 81–88, 90–96, 101, 103, 106–108, 110–113, 115–118, 120, 121, 123–130, 132, 134, 136, 138, 141–143, 146–149, 151, 153, 154, 157, 161–169, 172, 177–180, 193–195, 202, 203, 206, 207, 209–211, 213–217, 233–239
Polarization 190, 237
Populus candidate 191, 237
Power 17, 65, 71, 77, 91, 94, 150, 165, 179, 180, 182, 192, 196, 212, 222, 234, 237
Preferences 5, 12, 13, 22, 27, 29, 39, 45–47, 49, 53, 54, 61, 63, 64, 68, 87, 115, 165, 167, 177, 178, 184–186, 190–193, 207, 235–238
Prejudice 10, 60, 61, 79, 80, 130, 132, 237
Price collusion 147, 178, 181, 234, 237
Prices 28, 42, 75, 110, 124–126, 146–150, 153, 178, 181, 182, 187, 204, 218, 233, 234, 236, 237
Principal-Agent Problem 18, 237
The Prisoner’s Dilemma/Prisoner’s Dilemma 23, 237
Prisoner’s Dilemma game 17, 27, 80, 81, 88, 96, 151
Promise 19, 22, 95, 123, 124, 129, 133–136, 138, 152, 154, 162, 172, 237, 239
Prospering 150, 237
Proxy 29, 237
Pyrrhic Victory 77, 237

R
Rational 46, 51, 54, 165, 213, 220, 237
Rationality 6
Reinforcement 98, 237
Repeated game 8, 32, 37, 52, 56, 93, 114, 133, 135, 141, 142, 148, 149, 151, 153, 157, 161, 166, 182, 193, 202, 206, 214, 235–237
Repressive control mechanisms 52, 237
Reputation 30–32, 123, 135, 138, 142, 150, 157, 161, 162, 203, 237
Revenge 55, 56, 211, 237
Risk 3, 8, 46, 49, 60, 64, 69, 116, 126, 148, 168, 169, 195, 208, 235, 237
Rollback, rollback solution 106, 117, 237
Rules 2, 9–11, 16, 22, 26, 34, 35, 44, 50, 52, 56, 57, 78, 85, 87, 88, 91, 94, 96–98, 106, 117, 120, 121, 127, 130, 152, 153, 169–171, 183, 184, 186, 196, 206–208, 215, 217, 218, 235, 237, 238
Rules of thumb 9, 10, 238

S
Scale of preferences 49, 238
Screen 65–67, 70, 74, 76, 131, 132, 157, 194, 217, 218, 238
Second-best solution 77, 238
Second-mover advantage 107, 110, 111, 238
Secrets 85, 201–204, 238
Self-interest 2, 8, 10, 26, 27, 34, 60, 85, 149, 151–153, 238
Sequential games 15, 28, 33, 35, 37, 41, 48, 53, 101, 102, 104, 106–108, 110, 111, 113, 114, 116–119, 123, 124, 128, 135, 136, 142, 169, 185, 194, 201, 202, 205, 212, 214, 215, 217–219, 235, 237–239
Setup 7, 12, 17, 21, 23, 24, 37, 47, 63, 64, 81, 90, 101–103, 116, 125, 129, 141, 146, 147, 149, 156, 182, 189, 190, 237, 238
Short-sightedness 215, 238
Signal 5, 58, 64–72, 78, 105, 132, 135, 149, 194, 200, 201, 238
Signaling Theory 65, 238
Signal-to-noise ratio 65, 238
Simultaneous game 5, 35, 44, 85, 104, 106, 110, 124–126, 128, 136, 142, 169, 202, 234, 237, 238
Sincere 58, 238
Skills 10, 11, 15, 17, 18, 34, 66, 71, 76, 82, 98, 113, 168, 169, 184, 195, 203, 210, 212, 215, 216, 238
Social norm 206, 207, 238
Specify 12, 14, 34, 45–47, 54, 61, 63, 65, 75, 104, 152, 157, 177, 207, 220, 238
Stalemate 83, 238
Strategic advantage 10, 168, 238
Strategic decision making 1, 4, 169, 209, 235, 239
Strategic (game-theoretic) advantages 10
Strategic moves 123, 125, 133, 135, 136, 138, 162, 203, 208, 239
Structure 1, 2, 4–6, 13, 21, 24, 34, 35, 37, 45, 54, 61, 78, 83, 85, 91, 96–98, 111, 113, 121, 127, 137, 142, 152, 154, 155, 158, 167, 170, 171, 196, 199, 203, 206, 218, 219, 221, 233, 239
Subgame 41, 82, 83, 85, 163–165, 173, 239
Suboptimal 125, 239
Synergy 76

T
Thief 22, 27, 239
Threat 19, 51, 79, 93, 95, 123, 127–130, 132–136, 138, 154, 161–167, 169, 171–173, 186, 187, 203, 233, 234, 239
Timing 35–37, 69, 85, 102, 110, 111, 134, 185, 235, 239
Tipping point 169, 170, 239
Transfer portal 127, 239
Trust 17–19, 64, 84–86, 97, 142, 202, 239
Truthfulness 9, 239

U
Uncertainty 63, 110, 113, 116, 117, 133, 168, 205, 215, 239
Unilateral choices 51
Unilateral decision 2, 239
Unilaterally 36
Unstable 192, 239. See also Equilibrium
Utility, Utility theory 12, 22, 236, 239
utils 12, 27, 45, 239

W
Whistleblowers 171
Whistleblowing 171, 239

Z
Zero-sum game 6, 67, 81, 82, 239
