Useful Game Theory
Fundamentals of Decision Making
Jay Prag
Amanda Ishak Prag
Palgrave Executive Essentials
Today’s complex and changing business environment brings with it a number
of pressing challenges. To be successful, business professionals are increasingly required to spot and leverage future trends and be masters of strategy, all while leading responsibly, inspiring others, mastering financial techniques, and driving innovation.
Palgrave Executive Essentials empowers you to take your skills to the next
level. Offering a suite of resources to support you on your executive journey
and written by renowned experts from top business schools, the series is
designed to support professionals as they embark on executive education
courses, but it is equally applicable to practicing leaders and managers. Each
book brings you in-depth case studies, accompanying video resources, reflec-
tive questions, practical tools and core concepts that can be easily applied to
your organization, all written in an engaging, easy to read style.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2024
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Using parables, famous songs, and classic expressions, Useful Game Theory
takes readers on an exploration of human nature and the long road of
decision-making known as “life.” Prag and Prag turn a mathematical disci-
pline into a digestible (dare we say “enjoyable”) read, with a little wit, a lot
of sarcasm, a humanist approach—and almost no math. From nuclear disar-
mament to Friday movie nights, each chapter guides you through a Game
Theoretic analysis of the decisions we make each day.
While this book will occasionally wander into the realm of controversy, our
hope is that it makes some of the world a little less confusing. Game theory
as an intellectual exercise is math-heavy, hypothetical, and full of controls
that can render it almost useless. This book takes a more practical approach,
applying the tools and lessons of game theory to the real world—a place that
is controversial, complicated, uncontrolled, and very confusing.
In these pages, you’ll find an approach that’s relatable and not heavily
analytical. Our goal is to enhance your understanding of game theory by
applying it to the decisions and problems humans face every day. Some applications in this book are uncomfortable and challenging to navigate: topics like politics, religion, violence, and societal dysfunction. There are also unconventional uses of economics, sociology, public relations, management, and other
academic disciplines. Game theory exists at the nexus of these practices, and
it is at the heart of the humanity-based decisions we make every day. We
didn’t want to show only the good stuff. After all, the human experience is
both good and bad.
For those in academia who are uncomfortable with our anecdotal adoption
of your areas of study, we offer this advice: focus on the goals of game theory.
The discipline grew from a desire to formalize how human beings (from indi-
viduals to groups to companies to countries) make decisions. If you’re so into
math you forget it’s about the human experience, you’ve missed the point.
All academic disciplines stem from casual observations of the real world. So,
focus on game theory, chill out, and remember: game theory is just another
way of thinking about situations. It exists side-by-side with other ways of
thinking. It is not a substitute; it is a complement. If applied science is a test
of empirical knowledge, let’s take the test.
For students of game theory, a bit of advice as well: we’re going for breadth,
not depth. In the pantheon of game theory games, this book focuses on a
relatively small number. There are many complex games that are, more or
less, for AI only. There are others that are less complex, but also less relevant
to the choices you make every day. We are going to analyze a dozen or so
games that resonate with your day-to-day life, and we’ll apply those results to
quite a few real-world situations.
Upland, USA
Jay Prag
Amanda Ishak Prag
Contents
10 All for One, and One for All: Collective-Action Games 177
What Collective-Action is NOT: Perfect 178
Representative Government 179
Follow the Leader Redux: Getting Lost in the Crowd 180
Cartels 181
Labor Unions 182
Fair Representation 183
Agenda Setting 185
Threats Versus Persuasion 186
The Median-Voter Model: Hot Dog Stands on the Beach 187
Third-Party Candidates and Parliamentary Governments 191
Whose Team Are You On? 194
Conclusion 196
Questions 196
11 Granny’s Gold 199
The Good Parent Game 200
Turn Signals 200
Secrets and Lies 201
Threats and Secrets 203
One Man’s Secret Is Another Man’s Lie 203
An Option Is Only Worth More Alive Than Dead, in Finance Books! 204
Laws, Rules, and Regulations 206
Last Clear Chance 207
Ignorance Is Bliss 208
The Curious Case of Mixed Strategies 209
Holding Grudges 210
Follow the Leader/Blind Leading the Blind 212
Gaslighting, Brainwashing, and “Psyching Out” 213
On Being Fooled 213
A No-Win Situation 214
Information, Expertise, and Decision-Making: “Physician Heal Thyself” 215
Game Theory Goes to the Doctor 215
Fairness, Future Outcomes, and the “From Here” Problem 217
The Cold Medicine Problem 219
The Airport Game 219
Conclusion 221
Questions 222
Amanda Ishak Prag is Director of Marketing for one of the largest architec-
ture and engineering conglomerates in the nation. A marketer for more than
two decades, she has helped firms streamline internal and external communi-
cations and develop structures for organizational growth. Her research focuses
on M&A integration and mass communications strategies. Amanda holds
a Ph.D. in Management from the University of California, Riverside, an
M.B.A. in Strategy from Claremont Graduate University, and Bachelor of Arts degrees in Economics and Public Relations from the University of Southern California. She is a coauthor of Covid-U and additional
empirical works.
Amanda and Jay reside in sunny and overpriced Southern California with
their daughter, Julianna, and their dog, Doc Brown.
1 What Is Game Theory?

Your Granny knew game theory. She may not have called it “game theory,”
and she probably didn’t review a whiteboard of advanced calculus to make
every decision… But, Granny used it every day. We know this because she
understood people. From the nosy neighbor to the annoying aunt to the
helpful clerk at the grocery store, Granny knew how to navigate the world and
get what she needed. She did it with a lot of wit and the goal of being helpful.
We’re willing to bet you’d take your Granny’s advice over the predictions of a
mathematical model any day.
Granny really knew game theory. So did Shakespeare, Aesop, Confucius,
Machiavelli, most country singers, and the writers of every major children’s
book in history. From The Art of War to “In the Jailhouse Now,” many of the
songs, expressions, and parables we heard growing up are actually solutions
or approaches to the games we play in everyday life.
Game theory, also known as strategic decision-making, is a simple concept:
you and another person make decisions, and together your choices determine
a specific outcome. That’s really it.
So, why do game theory books have so much darn math? Well, it turns
out that all the little things matter. For example: How many people are making
decisions? Are they all choosing at the same time? Do I get to choose first and,
maybe, more than once? How often am I choosing, and with whom? These
factors (and so many more) impact how a decision turns out for you, and
math can help us quantify and model these factors to get to a predictable
outcome. Math is useful, just not in this book. Here, we’ll focus on under-
standing the basics. So, game theory helps us understand the structure and
probable outcomes for things like which freeway to take, how to get a raise,
and how to choose a movie on a Friday night.
Game theory doesn’t teach you how to win games. It teaches you how
to set them up to maximize the likelihood of your best option. What’s the
difference? Well, as much as mathematics and logic can provide humanity,
they’re not foolproof. A game theory model can do its best to include all
possible variables, but when human beings are making decisions in the real
world, things get missed. When people read a book or take a class in game
theory, they are often disappointed to discover they aren’t learning how to
win every game. There are no books or classes to teach you such strategies.
All game theory can provide is a way to understand the structure of a game
and ways to increase the likelihood of your best outcome.
Long before Brouwer, Morgenstern, von Neumann, and John Nash,
governments and societies set rules to increase the likelihood of best societal
outcomes for situations that happen every day. Laws determine how people
in modern society are allowed to play a game. When you drive up to an inter-
section with stop signs on every corner (a four-way stop), why not just always
go through first? That’s an approach to solving the game, right? Well, yes, but
it’s a unilateral decision. It doesn’t consider what other players might do (and
if everyone followed that approach, there would be a lot of crashes at that
intersection). So, the government says you have to stop at a stop sign and
look around, and society has given you a preferred way of making a choice
(usually, the first person to arrive or the person to the right—in the US—goes
first). Rules help set up a game by providing a framework for the players, and
any decision strategy is bound by those rules.
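As a toy sketch of the US convention just described (entirely our own construction; the function and its arguments are hypothetical, not anything from an actual traffic code), the four-way-stop rule fits in a few lines:

```python
def who_goes_first(your_arrival, their_arrival, they_are_on_your_right):
    """Toy four-way-stop rule (US convention, simplified):
    the first car to arrive goes first; on a tie, yield to
    the driver on the right."""
    if your_arrival < their_arrival:
        return "you"
    if their_arrival < your_arrival:
        return "them"
    # Tie: society's tie-breaker says the car to the right goes first.
    return "them" if they_are_on_your_right else "you"

print(who_goes_first(1.0, 2.0, True))   # "you": you arrived first
print(who_goes_first(2.0, 2.0, True))   # "them": tie, they're on your right
```

The point of the sketch is that the rule removes the choice from the players: once everyone agrees on the framework, the "game" at the intersection mostly solves itself.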
Organizations do it too. Disneyland welcomes thousands of visitors
daily—visitors from different cultures and countries with different expecta-
tions of waiting in line. The thing is, it’s Disney. You’re going to wait in
line. So, how does the mouse make sure things run smoothly? Each park sets
up clear boundaries with ropes, walls, fences, and hedges, to guide visitors
where they need to go. A trip around the Magic Kingdom shows you there
is no ambiguity about what visitors are expected to do. Compare this level of
control to your local grocery store the Wednesday before Thanksgiving and
you’ll start to see organizational boundaries in action.
Unilateral decisions, rules, organizational boundaries—now we’re getting
somewhere. Game theory is very much a byproduct of human evolution.
History, anthropology, and biology are full of examples where unilateral decisions (e.g., I should kill you before you kill me) gave way to more cooperative solutions that helped our species thrive. When other people make choices
that ultimately determine an outcome, self-interest can turn out poorly for
everyone. It’s why Granny told us “Do unto others as you would have others
do unto you.” Granny knew game theory.
Strategic decision-making is not only limited to games here and now. On
the contrary, many of the choices we make are playing games with our “future
selves” (as Doc Brown might have said to Marty McFly: you have to think
fourth-dimensionally). Your choices today combine with your future choices
to determine an outcome later in life. The marketing slogan “What happens
in Vegas stays in Vegas” is an attempt to convince you that your current
choices do not affect your future outcomes. One trip to Sin City makes it
clear that the slogan isn’t really true. Most of the time the only thing that
stays in Vegas is the money that you gambled away.
Game theory in a classroom is rooted in mathematical theory. Strategic
decision-making, on the other hand, is the system we humans have grown
up within. You learn a lot about life through experience and daily trial-and-
error. That’s the point of many of Granny’s expressions. She’s trying to save
you from some potentially painful learning curves. On a good day, Granny
understood context and how to set up a game for her benefit. When you told
her you were going to Vegas, she might have said, “waste not, want not.” She
was warning you about the value of saving and the risks of gambling. On the
other hand, if she was really cool, she might have just asked you to put $10
on red.
Granny and all of the purveyors of worldly wisdom give us approaches to
problem-solving, but they didn’t have time to teach all of the nuances: when
does that approach work? Why is it better? Why does setup matter? This book
will fill in a few of those details.
Many games that Granny played during her lifetime look different today.
We’ll teach the game theory behind the expressions to see when, or if, they
still apply in the modern world. When Granny’s Granny taught her “be
careful what you ask for, you might get it,” there was no Internet, Facebook,
AI, or screenshotting. Granny might not know that in the current world, the
expression is cautionary in a lot of new ways.
“You can’t tell a book by its cover” is an expression that teaches us not
to overstate the importance of external factors. Arguably, this was Granny’s
way of saying “don’t be prejudiced,” but it’s going to have special meaning
in this particular book. This book’s title makes game theory seem folksy and
perhaps even easy. It is not. Game theory is complicated, subtle, and easy to
get wrong. Granny was smarter than any PhD. So, we might as well start with her greatest hits.
A Four-Way Conundrum
You’re driving your new car, and you arrive at an intersection at the same time as another car across from you (facing you), signaling its intention to turn across your path (in the US, it has its left blinker on). Clap those hands, it’s game theory time.
If you both go, you’re going to crash. Let’s assume that’s not the outcome
either of you wants.
If neither of you go, you’ll just wait there forever. That too is not, you
know… a great outcome.
You might be thinking “there are driving rules and regulations that tell us
what to do.” Are you sure both of you know the same rules and always use
them correctly?
You’re not in a hurry and you’re driving your brand-new car, so you wave
at the other driver to go. They turn out of your way first, you drive through
the intersection second, there are no collisions, and everyone makes it to their
destination.
You just did a lot of game theory! You and the other driver made choices—
you waited and the other driver proceeded—and that combination of choices
led to a good outcome. You and your car made it to your destination and
you’re happy with how things turned out. In the game’s payoff matrix, a list
of potential outcomes, you are both better off. The other car used a literal
signal to tell you where it needed to go. You used a signal, a wave, indicating
you would let them go first. All is well.
The structure of the four-way stop above is what we call a two-player, two-
choice, simultaneous game: you each made choices at the same time. Game
theory will help us break down other variations of this everyday choice. For
example, would you have done the same if you were in a beat-up, hand-me-
down car instead? If you were in a rush? As we will see, structure combines
with preferences to determine an outcome.
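That two-player, two-choice structure can be sketched as a small payoff table. The numbers below are purely our own illustrative assumptions (higher means happier); the real "numbers" in your head at an intersection are feelings, not integers:

```python
# The four-way stop as a two-player, two-choice simultaneous game.
# Each cell maps (your choice, other driver's choice) to a payoff
# pair (you, them). The numbers are illustrative assumptions.
PAYOFFS = {
    ("go",   "go"):   (-10, -10),  # crash: the worst outcome for both
    ("wait", "wait"): (-1, -1),    # stalemate: you both just sit there
    ("go",   "wait"): (2, 1),      # you pass first, they follow
    ("wait", "go"):   (1, 2),      # you wave them through first
}

# Both "coordination" cells leave both players better off than
# either crashing or waiting forever.
for cell in [("go", "wait"), ("wait", "go")]:
    you, them = PAYOFFS[cell]
    assert you > PAYOFFS[("go", "go")][0] and you > PAYOFFS[("wait", "wait")][0]
    assert them > PAYOFFS[("go", "go")][1] and them > PAYOFFS[("wait", "wait")][1]
```

Change the numbers (a beat-up car, a hurry) and the attractiveness of each cell shifts, which is exactly the point: structure plus preferences determines the outcome.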
[box]
Four-Way Stop—Rush Hour Version
While the four-way stop is a nice, generally painless type of real-world game, we can complicate things by adding more information. When you and another
driver get to a four-way stop at 1:30pm on a lazy Thursday afternoon, there’s
usually a lot of cooperation and the game solves quickly and easily. If you play
the same game with the same people and at the same intersection at 7:45am
when people are trying to get to work or get their kids to school (or both!),
things are very different.
As the clock ticks toward “you’re late,” you find yourself blowing your horn at
drivers. That’s a very different type of signal. You might also coast through the
sign, rather than stopping. Your nice, generous wave changes to a not-so-nice
one-finger gesture. Time is just not on any player’s side.
There’s a deeper issue here. As we discuss our games and work through the
paths to solutions, think about how choices and outcomes change with factors
like duress. How you perceive and play games changes with the circumstances
under which the game is being played.
[end box]
Game Structure
When we talk about structure, we mean all of these nuances and factors that
can affect players, choices, and outcomes. Let’s not wander too far from our
first game, the four-way stop. What was the structure of the game? There were
two drivers, so two decision-makers; they had two choices (wait or go); and
they both had some internal desired result. They also had constraints: they
were in a hurry, or they worried about their new car. Implicit and important
to the outcome is an assumption of what’s often called rationality. There’s
more to rationality (a lot more), but for now, let’s just say neither had an
incentive to crash.
Zero-Sum Games
The payoff of a game should be familiar to most of us: it is the set of possible outcomes for each player based on their decisions. If you’re a sports fan, a payoff is the potential score from a play or series of games. More often, though, it’s simply a matter of somebody winning and somebody losing.
This is called a zero-sum game, where there’s one winner and one (or many) losers. Zero-sum games got their name by applying numbers to a win/loss combo: a win is +1, a loss is −1, so, technically, the sum is zero. There is no middle ground. There’s no possibility for everyone to come out a winner. These definitions are perfectly fine when we’re thinking about payoffs (e.g., +1/−1), but the vast majority of interpersonal games we play every day, the sort of games we’ll be discussing, are not zero-sum games.
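A quick numeric check of that +1/−1 scoring, using matching pennies (a classic zero-sum game) as the example:

```python
# Matching pennies: Player 1 wins if the coins match, Player 2
# wins if they differ. Payoff pairs are (player 1, player 2).
MATCHING_PENNIES = {
    ("heads", "heads"): (+1, -1),
    ("tails", "tails"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
}

# Every cell sums to zero: one player's win is exactly the other's
# loss, so there is no outcome where both come out ahead.
assert all(p1 + p2 == 0 for p1, p2 in MATCHING_PENNIES.values())
assert not any(p1 > 0 and p2 > 0 for p1, p2 in MATCHING_PENNIES.values())
```

The second assertion is the whole story: in a zero-sum game there is literally no "everybody wins" cell to aim for.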
Arguably, zero-sum games are easier to figure out. More points = win; not very much to unpack here. Having no middle ground between winning and losing spares you from having to think in terms of “better off” or “worse off.”
Real-world problems are not so easy to solve. When you “get your way” in a
decision with your significant other, are you both always better off? You won!
But it doesn’t always feel like a win, right? Does getting your way ever conflict
with your relationship dynamic?
Sometimes, we wish we could resolve every conflict with a game of pickup
basketball, but that’s not real life. The thought process you just went through
is precisely why zero-sum games are easier than most interpersonal games,
and why most games in life are not zero-sum.
What’s in a Game?

[Figure: What’s in a game? Players bring preferences, biases, needs, and capabilities; choices involve frequency, timing, location, and availability; outcomes are the payoffs/results.]
all games, and we hope the other player will be doing the same. That’s a lot
more likely to work if the other player is a friend, a family member, or just
someone you interact with on a regular basis (neighbors, co-workers, etc.). It
may not work so well with a passing stranger, an enemy, or a faceless gnat in a
comments section. It’s harder to see direct consequences if another individual
just isn’t visible or known to you.
Total strangers aren’t necessarily mean or universally self-interested; they just don’t know you and you don’t know them. So, if the downside of
acting cooperatively when the other player is being self-interested is relatively
big, neither player will act cooperatively. As we’ll see, this outcome varies if
the players are strangers or friends, or if they’re playing the game once or
repeatedly over time.
[box]
Suppose Mommy wants Daddy to take her to her office Christmas party, but he
really needs a haircut first. Daddy isn’t prone to getting haircuts. Mommy says,
“darling, I’m swinging by the barbershop later—can I set up an appointment
for you?” Junior says, “Dad, you look like a yeti!” Mom is acting strategically;
Junior is telling the truth. So, when you hear the expression “honesty is the
best policy,” remember that in a game-theoretic context, honesty—best policy
or not—and wisdom are two different things.
[end box]
(fancy people call this instinct), rather than absolute ways to make choices.
They’re the guidelines we hone over time for how to make our decisions.
Instinct and self-interest go hand-in-hand. A baby knows herself based on
satisfaction. You grow into an adult knowing yourself first. You know what
makes you happy and sad. Over time, you get to know about those close
to you, what makes them happy and sad. This is how your rules of thumb
develop, and they help you achieve an optimal outcome pretty well when the
other player is a relative. The less familiar the other players are, however, the
less you know about them. In that case, you’ll have to rely more on set rules
for making your decisions.
Rules of thumb change games via familiarity. We should note, however,
that something unfamiliar is not the same as something different. You can
play games with rules of thumb even when the other player is different if you
have a pretty good idea of “what makes them tick.” If you are a kind, caring,
giving person, but you know the other person in the game is selfish and
uncaring, you won’t make cooperative choices even though you would like
to do so. The selfish person may be familiar, maybe a relative or somebody
known well to you; but you are diametrically opposed in how you operate.
Different and unfamiliar are not the same.
If used incorrectly, rules and rules of thumb can give you inferior outcomes
for no good reason. Such is the case with bad assessment criteria like prej-
udices. Prejudice makes you think you are familiar with a person’s choice
set, but you are often wrong. Information is key in playing a game correctly,
and an incorrect perception of another can lead to bad information and bad
outcomes.
[box]
One big example of a strategic advantage is inherent skill. Many rules are
in place to limit (or at least control) the impact of skill on a game. Rules
in competitive sports, for example, focus on leveling the playing field (get it?)
regarding athleticism, age, weight, and training time. A player or team with the
most skill is supposed to win most of the time, right? Without rules, probably
not.
Take basketball.
Basketball’s origins date back to the 1890s, when a Massachusetts gym teacher wanted to give people an indoor sport to play during the winter months. He put a peach basket on the wall and gave players a point for getting a ball into the basket.
From there, things were formalized (and improved, considering the original
basket didn’t have a hole in the bottom), and eventually we got to modern
professional basketball with a hoop with a net set ten feet above the ground.
Jump ahead (seriously, get it?) a hundred years or so, and you find many of
the successful basketball players are, well, tall. If you ignore the other rules, the
easiest way to get a point is to drop the ball directly in the hoop. All the
training in the world doesn’t matter compared to that ability. Granny says:
“you can’t teach height.”
Without rules, basketball teams would compete to recruit only the tallest indi-
viduals; not the fastest, not the strongest, not the most agile. But there are other
rules. For example, you get more points if you make a basket from far away. Interestingly, the popularity of the game increased with the introduction of the three-point shot: three points for a basket made from beyond an arc set well outside the free-throw line. Shorter, skilled, agile players can succeed if they are good at making “three pointers.” The rule made skill, training, and agility relevant again.
Game theory is concerned with how your choices can change outcomes, and
that will always depend on the rules of the game. If rules are written (more or
less) entirely to benefit a set of inherent skills, you won’t have an opportunity
to use much game theory.
Food for thought: Given the importance of training, who’s probably better
at putting together a successful basketball team—the coach or the general
manager?
[end box]
a million different ways of actually doing them (e.g., you can roll forward
slowly instead of coming to a full stop), but for ease of analysis, we’ll assume
choices are binary. This or that (e.g., everything that allows the other driver
to go first counts as “waiting” on your part, whether you’re creeping forward
or not).
We acknowledge that there is always an alternative to all of the choices in a
game: you could choose not to play. This and other options will be discussed
later, but it’s important to recognize this is almost always an option in daily
life. Sometimes, Granny advised us to “just walk away.”
Perhaps the most important lesson of growing up is how to effectively navi-
gate multiple choices. Not playing is always an option; but there are several
others (procrastination: choose later; abdication: make someone else choose;
impatience: choose quickly and get it over with) we employ. Life exists some-
where in between a two-choice, neat, straightforward game on a whiteboard
and the processing of millions of pieces of information we do every day.
Our games will be as applicable as we can make them, but there’s always
room to interpret choices for yourself.
gain or loss; it is how much happiness you gain/lose based on how the game
plays out.
It might be easier to explain it the other way: if each game’s payoff was
structured only in dollars, you wouldn’t have enough information to make
a real decision. How much do those dollars matter to each player? Is the
person money-driven or motivated by something else? Until we understand
how each player feels about possible outcomes, we can’t figure out how a game
works. Different players—different human beings—have individual choice
sets, so the outcomes depend on your ability to understand and process these
preferences in real time.
[Figure: A 2×2 payoff matrix. Player 2’s choices (A and B) label the columns in a gray bar across the top; Player 1’s choices (A and B) label the rows along the left; each of the four cells contains a payoff for each player.]
First, the players are listed along the edges of the matrix. Second, choices are presented in the gray bar. As we’ve said, these are the only things you can actively decide on in a game.
Finally, payoffs are the outcomes for each player based on potential choices,
presented in the matrix itself.
The matrix form of games (spoiler alert: there are other ways to illus-
trate games!) is one of the easiest and most straightforward ways to visualize
choices and outcomes. Don’t be fooled by the presence of payoff informa-
tion. Remember: even though you have full information, the only thing you
control in the game is choice (the gray bar). Payoffs inform your choice, but
you can’t directly choose what you get.
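In code terms (a sketch of ours, with placeholder labels and numbers), the matrix form is just a lookup table: each player supplies only a choice, and the payoff cell is whatever falls out of the combination:

```python
# A generic 2x2 matrix-form game. Rows are Player 1's choices,
# columns are Player 2's; each cell is (P1 payoff, P2 payoff).
# Labels and numbers are placeholders, not from the book.
GAME = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 4),
    ("B", "A"): (4, 0),
    ("B", "B"): (1, 1),
}

def play(p1_choice, p2_choice):
    # The only inputs are the choices; neither player picks a
    # payoff directly -- the cell is determined jointly.
    return GAME[(p1_choice, p2_choice)]

print(play("A", "B"))  # (0, 4): P1 chose the row, P2 chose the column
```

Notice that `play` takes no payoff argument at all, which mirrors the warning above: payoffs inform your choice, but you can’t directly choose what you get.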
[box]
Skill is based on repetition, and the goal is to have it become innate, unre-
hearsed, and automatic. Athletes, coaches, and commentators use metaphors
like “muscle memory” to describe the desired outcome of practice, which
isn’t exactly to “make perfect” but rather to automate performances so they’re
not affected by the stress of the game. Similarly, you’ll hear people talk
about “clutch performance,” good performance under stress. Is being clutch
a strategic choice, or simply the result of repetitive practice and innate skill?
A good strategy, on the other hand, is adaptive. The goal of this book is to show
good examples of “think about it, then decide.” Mastering it means entering a
mental state where you are ready to make a decision. In studying, practicing,
and understanding game theory, you’ll make better choices even in stressful
situations. The choices are not automatic, but the process of thinking through
your choices is.
[end box]
“Ten is bigger than five” is not math. It’s a simple comparison. So, assuming you know that ten is bigger than five, this book won’t use a lot of math.
You might say “hey, you told me this didn’t have a lot of math!” The matrix
is just an illustration. This game theory book does not have calculations, it
doesn’t take derivatives, and it won’t require anything more than what most
people can do in their head. If you know 3 is less than 4, you’re all set.
There’s a whole other part of game theory that’s full of math. It works out
complicated versions of some of the problems we’ll be looking at and other
problems that are beyond the scope of this book. The math version of game
theory has its place (proving Granny right in a mathematical sense). Other
times it’s proving things that theorists like to prove for the sake of proving
them. Either way, math has its place, but not in these pages.
We’re taking a humanist approach to game theory: learn how to identify
the situation you’re in—the players, the payoffs, and the rules—and make a
strategic, beneficial choice. You already do this daily, and you learned to learn
over time. To get to a more strategic level, we’ll use a few numbers, but not
math.
Even with our approach, though, game theory is a tricky discipline. In
a classroom, we show students a bunch of games with straightforward rules
and known outcomes and illustrate how different approaches work for solving
those games. In doing this, we are practicing. We are abstracting from reality
and doing drills. We are in the batting cage, running scrimmages, practicing
our free throws, on a driving range, and hitting tennis balls shot to us by
a machine. We are doing “wax on, wax off”—practicing the mechanics of
decision-making so we’re ready when a real-world opportunity presents itself.
If that reference made you smile, you already know what a lightbulb
moment is. For the uninitiated, this is a famous quote from the 1984 film
The Karate Kid. An old karate expert tells the teenager that he will teach him
karate, but first, he has to do a lot of chores. Clean the floor, paint the fence,
and wax the cars. In each case, there was a specific way the teenager was told
to do the task: back and forth, up and down, circles to the right then circles
to the left with each hand. Exhausted from days of chores and having seen
no karate lessons, the young man tells the old teacher he’s had enough and
he’s leaving.
The teacher says, “I have taught you karate!” He proceeds to show the boy
how each chore was training his body to do certain things. “Wax on, wax
off,” circles to the right and circles to the left with each hand, is how you
deflect your opponent’s attacks. Aha!
Once the lightbulb was on, the young man did what he was asked to do
knowing it had a point and it was teaching something important that had
1 What Is Game Theory? 17
nothing to do with waxing or painting. Why can’t we teach every skill and
lesson directly, without drills? To quote a proverb, “the journey of a thousand
miles begins with the first step." If you told a 6-year-old, relatively new piano
student they were going to play Rachmaninov’s Concerto Number 3 and you
showed her the sheet music, she might quit on the spot. So, you start with
chopsticks and eventually, she plays a concerto.
Such is the case for most game theory examples. We aren’t doing the Pris-
oner’s Dilemma game to teach you what to do when you’re being interrogated
by the police. (Hopefully, that particular situation won't ever apply to many
of you.) The setup, approach, and ultimate solution to that particular game
show up many times and in many places, so we'll refer to it a lot.
Often, learning game theory feels like trying to learn the unlearnable and
knowing the unknowable: what people want, believe, know, and care about. Is
it easier to play games with family members than with strangers? Sometimes.
Is it easier to play games that are one time only as opposed to repeated-play
games? Sometimes. Is it easier to get a straight answer to a question from
anyone other than these authors? Okay, well this—sometimes—is what the
book is all about. Game theory is rooted in context. “When” does “what”
work—and “why.”
[box]
Consider information, the topic of Chapter 4. Followers are often closer to the
front lines and thus have critical information. Suppose that reliable information
is crucial for a leader’s ability to make correct decisions and for a situation—a
game—to reach its best outcome. Now, suppose the leader hasn’t developed
trust or has proven to be too self-interested. Will the followers tell the leader
that crucial information? Suppose earlier in the relationship, a follower did
communicate relevant information, and it wasn’t used correctly or ignored.
Will the follower try again?
18 J. Prag and A. I. Prag
Along similar lines, economics uses a concept called Agency Theory (aka, the
Principal–Agent Problem) to discuss challenges in delegating responsibility to
those under you. Leaders should ask themselves: what do your team members
(followers) want to see happen to you? If they don’t like, trust, or believe in
you, will they tell you what you need to know to allow you to succeed? This
is game-theoretic stuff. As we’ll see, there are important connections between
leadership, trust, and game theory.
[end box]
Conclusion
Game theory is everywhere! It is family dynamics, work dynamics, political
negotiations, grocery shopping, bathtime with Junior, and landing on the
moon.
So, we’ll relate many popular expressions to game theory. If your interpre-
tation of timeless wisdom is different from ours, don’t let that prevent you
from seeing the usefulness of our alternative view. We’re certainly not saying
that the only thing Granny knew was game theory.
Game theory and Granny’s wisdom—both—teach you to be considerate.
It’s why people like living where other people are nice to them, and it’s why
we are taught to trust, respect, and care about others. This book also explores
the downside of human interaction, how we treat those who are naïve and
gullible. Strategic decision-making, like all things, can be misused.
Ultimately, game theory is a process. It is an approach to situations that
makes you ask things like “what do you know?”, “what makes people tick?”,
and “what are my choices?” It will explain how, why, and when commitments,
promises, and threats work. It will make you think a little harder about those
words of wisdom that you heard your entire life and get a better sense of their
value. Hopefully, by the end of these chapters, you’ll understand when these
expressions apply and when they do not.
Questions
1.1. Parenting is all about game theory. Most of us had our first experience
with strategic decision-making when we were parented, but we didn't
know it. Let’s try to look back and figure out our parents’ strategy:
“Do your homework! Practice your piano or your gymnastics or your
free throws! Do something comparatively boring instead of playing that
game on your smartphone.” The 12-year-old thinks that her parents
don’t understand her or love her. The opposite is actually true. How are
these commands part of a game-theoretic situation? What “advantage”
do parents have in helping their kid make better choices?
1.2. Granny gave you two bits of advice: “A fool and his money are soon
parted” and “Do unto others as you would have others do unto you.”
One day, you were swindled out of $100 by a woman who told you
(falsely) that she could tell your future by reading your palm. Which of
Granny’s tidbits of advice was not heeded by the palm reader?
We’re going to learn game theory by way of the familiar: typical expressions,
known life lessons, and easy stories. As we invoke these familiar things, we
add in some unknowns and ambiguities, helping us build an approach to
problem-solving that does two things. First, it will help you recognize game-
theoretic situations—circumstances where you alone cannot determine the
outcome. Second, it will highlight the deeply personal factors that we must
consider if we’re going to succeed at real-world, non-mathematical game
theory.
Let’s hit the ground running with the most famous game of all: The
Prisoner’s Dilemma.
The Prisoner's Dilemma is a good starter because it gives us a set of
parameters, a structure, that is relatable and applicable to real-world situations.
Structure, as we will see, is the most important part of any game. This setup
is the foundation for understanding returns, expectations, and goals of each
player. Prisoner’s Dilemma will also allow us to explain and contextualize
many oft-heard expressions in future chapters.
Before we get started…
… as suggested in the introduction, we have to get used to unpacking
expressions. The borrowed imagery of the word “unpacking” makes us think
about a travel bag or suitcase tightly stuffed with various items, each used
only after the bag is unpacked. Consider, for example, the expression "no
honor among thieves." Like a good travel bag, a good expression holds a lot
of stuff that needs to be unpacked before we can put it to good
use. We need to think about each word, what goes into it, its context, and
how it helps explain a result.
Honor
Historically, societies have created different approaches to get people to follow
through on things like promises, commitments, and day-to-day life (which
we discuss in detail in Chapter 7). One approach that has endured over time
is a thing we call honor. It’s a small word but with large connotations. Let’s
unpack it.
How people feel about outcomes, what economists usually call the utility
that people get from their choices, is actually a programmed response. We
begin as babies with sensory pleasures—how things taste or feel or smell or
sound or appear—and our reactions are natural and inherent. In subsequent
years, we “develop a taste” for certain things. This is the learned behavior part
of it. Combining our inherent reactions with developed preferences, we figure
out what a certain thing really gives us. Roller coasters might not be fun for a
toddler, but your teen self loves the experience. Your utility—the benefit you
receive from doing something—changes over time. This change is a well-
accepted part of life, and it’s helped along by reinforcing good choices and
good behaviors.
The same learned-taste approach can be used to convince people that they
don't get pleasure from some things and, therefore, shouldn't do them.
This applies to things like stealing, murder, and speeding through a crosswalk.
Stealing feels wrong, intrinsically. Also, society put rules in place to reduce
people’s incentive to do it. Even if you can’t possibly be caught, you (hope-
fully) don’t steal. That stealing is “bad” is an inherent reaction, and society’s
rules reinforce it.
People are trained that consuming something that was stolen doesn't give
you the same amount of pleasure that you get
when you acquire the same good in a more appropriate manner (buying it,
making it yourself, bartering for it). One word that we use to explain the
lack of happiness we get from consuming ill-gotten goods is honor. In other
words: acquiring things by stealing or deception is not honorable, so this
approach should not give you pleasure.
Thieves
By definition, a thief is a person who steals things. So, the expression “no
honor among thieves” tracks. Let’s unpack it even more. This expression
might mean different things to different people, based on pre-conceived
notions and individual biases.
2 No Honor Among Thieves: The Prisoner’s Dilemma 23
Here's a question: A 6-year-old boy takes another kid's lunch money one day
in first grade. Does that make him a thief?
Before you answer, consider your own feelings on the following question.
When and how do we learn about the inappropriateness of things? Should all
first graders know right from wrong when it comes to stealing?
Let’s add more information: the boy’s teacher saw him take the lunch
money and told his father, who promptly sat the boy down on the stairs of
their house and had a calm conversation about how and why stealing is bad.
The father taught him the basic lessons of right and wrong, good and bad,
and other things we might ultimately call honor. The boy didn’t steal again
(and went on to coauthor a book on game theory). Most people would agree
the boy is not what they think of as a "thief." He's not the person the
expression "no honor among thieves" is referring to. When we unpack Granny's
expressions and try to use them to understand game theory, we can't be
cavalier about the meaning of the words. We cannot assume an all-encompassing
definition of each word. We have to be deliberate about the application of
certain terms and clear about what is and isn’t relevant.
When Granny said, “No honor among thieves,” she was talking about
hardened criminals who also happen to be the basis of one of our best known
and beloved game theory games. On to the Dilemma.
If you’re a fan of police dramas, you’ll be familiar with the setup of this famous
game. A bank is robbed, and two people, Butch and Sundance, have been
arrested. There were no eyewitnesses because the perpetrators were wearing
masks. Police need to solve this case.
Hoping to get one of them to confess, detectives put Butch and Sundance
in separate rooms and offer each of them the same deal:
(a) If you confess and your partner does not, you’ll get 1 year in prison and
your partner will get 10 years.
(b) If you both confess, you’ll both get 5 years in jail.
(c) If neither of you confesses, you’ll both get 3 years because you resisted
arrest anyway. (This is a very important part of the game and its outcome!)
"If I don't confess and my partner does, I'm going to jail for ten years. If he
does confess, I'm a lot better off confessing too. Five is less than ten!"
“If he doesn’t confess, I should probably still confess anyway, though. Either
way, I’ll be going to jail for three years or one year. One is a lot less than three,
and they’re both less than ten years. I’ll confess.”
Player one, let’s say Butch, has reasoned that he’s better off confessing regard-
less of what Sundance does. Sundance, in another room, thinks the same.
So, they both confess because it’s best for them, given what the other might
do. The Prisoner’s Dilemma’s famous result is that both players confess and
get 5 years, even though they could have both gotten 3 years. There was an
outcome that made both of them better off, but they didn’t get there.
Fig. 2.1 The Prisoner's Dilemma payoff matrix (years in prison):

                             Butch: Confess         Butch: Do Not Confess
  Sundance: Confess          Butch 5, Sundance 5    Butch 10, Sundance 1
  Sundance: Do Not Confess   Butch 1, Sundance 10   Butch 3, Sundance 3
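If you like to tinker, the prisoners' reasoning can be checked mechanically. Here's a minimal sketch in Python (our own illustration, not part of the original story) that searches each player's options for a choice that is best no matter what the other player does:

```python
# Payoffs from the detectives' deal: years in prison for (Butch, Sundance),
# where "C" = confess and "N" = do not confess.
YEARS = {("C", "C"): (5, 5), ("C", "N"): (1, 10),
         ("N", "C"): (10, 1), ("N", "N"): (3, 3)}

def butch_years(mine, theirs):
    return YEARS[(mine, theirs)][0]

def sundance_years(mine, theirs):
    return YEARS[(theirs, mine)][1]

def dominant_strategy(years):
    """Return a choice that means no more jail time than the alternative,
    no matter what the other player does (None if no such choice exists)."""
    for mine in ("C", "N"):
        alternative = "N" if mine == "C" else "C"
        if all(years(mine, theirs) <= years(alternative, theirs)
               for theirs in ("C", "N")):
            return mine
    return None

print(dominant_strategy(butch_years), dominant_strategy(sundance_years))
# prints: C C
```

Both searches come back "confess," which is exactly the famous both-confess outcome.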
Confound It!
The Prisoner’s Dilemma is a great way to start learning game theory. It is
well-structured, has a small number of easy-to-understand choices, a small
number of players, and the famous result that both players end up worse off
than they could have been if they both had chosen differently. Based on their
dominant strategies, they will both be going to jail for 5 years, but we see from
the payoff matrix that they both could have gone to jail for 3 years. Annoying
result, but true.
Well… why didn’t they just talk about whether to confess before they
committed the crime? Aha, that’s a cooperative solution and we’ll talk about
that in Chapter 5. The implication is, if they could have cooperated, they
would have both chosen not to confess and they both would have gotten
the shorter, 3-year prison sentence… maybe. The problem: the decision to
confess is actually driven by self-interest.
Let’s prove that by loosening the rules a little bit. Suppose Butch knows
that Sundance will not confess. Maybe the police let him see the interrogation
and it’s clear he isn’t “going to break.” Butch will still confess; 1 year in jail
is still better for Butch than 3 years. You’d get the same result the other way
around, if Sundance knew with certainty that Butch was not going to confess.
As it turns out, the driving force for each player to confess is their desire
to minimize their own time in jail. The outcome is dependent on the other
person, but the choice is not.
Here’s an example: on any given day, after dinner, you can choose to sit
and watch TV—your preferred choice if your partner does the dishes—or you
can help clean up. Your partner can make the same choices and has the same
preference. Even if you don’t want to watch the same thing on TV (which
might complicate the decision process), if no one ever cleans the dishes, you’re
going to be watching TV in a pretty nasty smelling house.
We make choices like these multiple times a day, every day—usually with
similar parameters. It’s a big part of adulting. Being completely self-interested
will probably get you into couple’s therapy or in search of a new partner.
While the therapist might help you communicate better, the only real solu-
tion is to find a partner who isn’t always self-interested. Empathy promotes a
solution that is better for all parties—but everybody has to get on board with
that approach for it to be effective.
What about Butch and Sundance? Well, they're thieves first. The expression
"no honor among thieves" tells us a lot. If you're in a game
where your choice and the ultimate outcome depends on your belief in the
honor of the other player, their tendency toward empathy matters. Granny
was saying, if you’re playing with a thief, you can safely assume self-interest
will always drive the process.
Fig. 2.2 Jay and Amanda's dinner game (jollies):

                     Jay: Pizza             Jay: Salad
  Amanda: Pizza      Jay 25, Amanda 25      Jay 30, Amanda 60
  Amanda: Salad      Jay 50, Amanda 40      Jay 20, Amanda 20
side salad. He prefers a big salad with a half slice of pizza. Your payoff matrix
might look something like Fig. 2.2:
In the top left and bottom right quadrants, you’re both ordering pizza or
salad. So, nobody’s really getting the whole meal they prefer. In the top right
corner, Amanda gets to order her favorite pizza and take a few pieces of lettuce
from Jay. She’s happy. In the bottom left, Jay gets to order the super big salad
and take a small slice of pizza from Amanda. He’s happy.
It turns out, we need some help to solve this game, and it’s right there
in the setup: you can’t order at exactly the same time. This is a sequential
game (which we'll discuss in Chapter 6). Amanda takes a look and says, "If I
want to maximize my jollies, I should choose Pizza (since 60 is the highest
number on the board). If I choose Pizza, Jay is choosing between 25 and 30
jollies. He'll choose salad (30), and we both get what we want in the top right
corner."
It works the other way too: if Jay goes first, he sees his largest number is
in Pizza (50). He knows, if he chooses Pizza, Amanda will choose salad (40)
over another Pizza (25). The game solves at the bottom left corner.
(Pro note: if you've taken a game theory class before, you may be saying
"hey, this game has two Nash Equilibria," and you'd be correct. We'll talk
about that in Chapter 5.)
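The look-ahead reasoning Amanda and Jay just did is what game theorists call backward induction, and it can be sketched in a few lines of Python (our own illustration; the jolly numbers are the ones from the figure):

```python
# Jollies for each (Jay's order, Amanda's order) pair, as in the figure.
JOLLIES = {("Pizza", "Pizza"): (25, 25), ("Salad", "Pizza"): (30, 60),
           ("Pizza", "Salad"): (50, 40), ("Salad", "Salad"): (20, 20)}
MENU = ("Pizza", "Salad")

def amanda_orders_first():
    """Amanda anticipates Jay's best reply to each of her options."""
    def jay_reply(amanda):
        return max(MENU, key=lambda jay: JOLLIES[(jay, amanda)][0])
    amanda = max(MENU, key=lambda a: JOLLIES[(jay_reply(a), a)][1])
    return jay_reply(amanda), amanda   # (Jay's order, Amanda's order)

def jay_orders_first():
    """Jay anticipates Amanda's best reply to each of his options."""
    def amanda_reply(jay):
        return max(MENU, key=lambda a: JOLLIES[(jay, a)][1])
    jay = max(MENU, key=lambda j: JOLLIES[(j, amanda_reply(j))][0])
    return jay, amanda_reply(jay)

print(amanda_orders_first())  # -> ('Salad', 'Pizza'): Amanda gets pizza
print(jay_orders_first())     # -> ('Pizza', 'Salad'): Jay gets pizza
```

Whoever moves first steers the game to their favorite corner, just as in the text.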
Jollies are just a placeholder for how much you prefer one outcome over
another. The numbers can be called anything. They are not prices or revenues
or costs, per se. They are a quantitative measure of perceived benefit in the
outcome.
[box]
If you want to grasp the depth of this problem, ask yourself this: what is love?
This is going to be, for most people, what we call a Potter Stewart definition.
This former US Supreme Court Justice famously said, when asked to define
pornography, “I’ll know it when I see it.” Such is the case with love. You can’t
really define it, but you know it when you feel it.
Money and many other tangible possessions are at best a proxy for the things
that truly motivate people—the results and outcomes that we deal with in
game theory. Emotional motivators are ultimately more important. Trying
to motivate a player with money will not get you cooperative behavior if
you treat them with disrespect. Work-life balance has emerged as a more
influential factor for recruiting—more than base salary and additional compen-
sation. John and Paul were right, and we’ll come back to this issue frequently
throughout the rest of the book.
[end box]
Back to Prisoner’s Dilemma: Let’s restate the payoff matrix in terms of
empathy; each player now loses one jolly for every year the other has to spend
in jail.
The advantage of reimagining the payoff matrix with empathy (Fig. 2.3)
is that we no longer have to look at the other player's outcome to make a
better choice.

Fig. 2.3 The Prisoner's Dilemma with empathy (jollies lost, where each player loses one jolly per year the other spends in jail):

                             Butch: Confess           Butch: Do Not Confess
  Sundance: Confess          Butch -5, Sundance -5    Butch -1, Sundance -10
  Sundance: Do Not Confess   Butch -10, Sundance -1   Butch -3, Sundance -3

Butch sees: if Sundance confesses, I lose either 5 jollies (by confessing too)
or just 1 jolly (by not confessing). I'll choose the lesser of the two evils, not
to confess, and lose only 1 jolly. If Sundance does not confess, I will lose
either 10 jollies or 3. Again, I will not confess and lose only 3. My dominant
strategy is to not confess.
An empathetic Sundance will do the same and the game ends up with no
one confessing. Both players get the optimal, 3-year sentence.
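This flip in dominant strategies is easy to verify mechanically. Here's a minimal Python sketch (our own illustration; the one-jolly-per-year empathy payoff is the assumption stated above):

```python
# Years in prison for (Butch, Sundance); "C" = confess, "N" = not confess.
YEARS = {("C", "C"): (5, 5), ("C", "N"): (1, 10),
         ("N", "C"): (10, 1), ("N", "N"): (3, 3)}

# Empathy: each player's jollies are minus the OTHER player's jail years.
def butch_jollies(mine, theirs):
    return -YEARS[(mine, theirs)][1]   # minus Sundance's years

def sundance_jollies(mine, theirs):
    return -YEARS[(theirs, mine)][0]   # minus Butch's years

def dominant_strategy(jollies):
    """A choice giving at least as many jollies whatever the other does."""
    for mine in ("C", "N"):
        alternative = "N" if mine == "C" else "C"
        if all(jollies(mine, theirs) >= jollies(alternative, theirs)
               for theirs in ("C", "N")):
            return mine
    return None

print(dominant_strategy(butch_jollies),
      dominant_strategy(sundance_jollies))  # prints: N N
```

With empathy in the payoffs, "do not confess" becomes dominant for both, and the pair lands on the 3-year outcome.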
Unfortunately, restating the payoff matrix in terms of jollies moves the
game into a much fuzzier place. It’s easy to say just make the choice that
minimizes your years in prison. It’s much harder to know how that relates
to happiness units. We assumed that every year in prison led to a decrease in
jollies of one unit. That’s just our estimate, but we can’t know that for sure.
Still, most games’ payoff matrices have to be done with jollies, and not
the years in prison, dollars, or whatever else the original game is structured
around. Jollies are what we ultimately get from our choices. Happiness is why
we, as humans, make decisions. If that makes games harder to play, so be it.
Even the most penny-pinching Grannies prioritized happiness once in a while.
We’ve talked a lot about honor in this chapter, and we’ll talk about reputation
over and over throughout the rest of the book. Let’s add something to the
pizza story:
Fig. 2.4 The dinner game again (jollies, as in Fig. 2.2):

                     Jay: Pizza             Jay: Salad
  Amanda: Pizza      Jay 25, Amanda 25      Jay 30, Amanda 60
  Amanda: Salad      Jay 50, Amanda 40      Jay 20, Amanda 20
Suppose Amanda and Jay are on their way to dinner. During the drive,
they have a chance to discuss what they’re going to get. Look at the payoff
matrix in Fig. 2.4.
Again, neither player has a dominant strategy and games like this are harder
to solve. One way that this game could be solved is by direct communi-
cation—someone simply says what they’re going to get. That should work,
right? Well, kind of. What if Amanda changes her mind when they get into
the restaurant?
Let’s add another factor. Amanda has a powerful sense of honor and she’s
always worried about her reputation. Specifically, she loses jollies if she lies
or, in this case, changes her mind. Sadly, this is not true of Jay (at least, not
when food is involved).
Amanda loses 20 jollies any time she lies about anything. She announces,
as they approach the restaurant, that she’s going to get pizza. Let’s incorporate
her honor—her potential lost jollies—into a new payoff matrix in Fig. 2.5:
Jay still doesn’t have a dominant strategy, but now Amanda does. She’s
going to order pizza. Knowing this (presumably knowing about Amanda’s
personal honor code), Jay is getting salad. This would have worked equally
well if Amanda had said she was getting salad. Salad would be her dominant
strategy and Jay would order accordingly.
Honor and reputation have real effects on our decision-making in the
real world. They are forms of commitments because they change the payoff
matrix. We’ll talk more about this in Chapter 6.
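Mechanically, a commitment works by editing the payoff matrix before the game is solved. A short Python sketch (our own illustration; the 20-jolly lying penalty is the assumption from the story) shows Amanda gaining a dominant strategy once the penalty is applied:

```python
# Base jollies for (Jay's order, Amanda's order), as in the dinner game.
JOLLIES = {("Pizza", "Pizza"): (25, 25), ("Salad", "Pizza"): (30, 60),
           ("Pizza", "Salad"): (50, 40), ("Salad", "Salad"): (20, 20)}

# Amanda announced "pizza"; she loses 20 jollies if she goes back on it.
LIE_PENALTY = 20
with_honor = {
    (jay, amanda): (j, a - (LIE_PENALTY if amanda != "Pizza" else 0))
    for (jay, amanda), (j, a) in JOLLIES.items()
}

def dominant_for_amanda(payoffs):
    """Return Amanda's dominant choice, or None if she doesn't have one."""
    for mine in ("Pizza", "Salad"):
        alternative = "Salad" if mine == "Pizza" else "Pizza"
        if all(payoffs[(jay, mine)][1] >= payoffs[(jay, alternative)][1]
               for jay in ("Pizza", "Salad")):
            return mine
    return None

print(dominant_for_amanda(JOLLIES))     # prints: None (no dominant strategy)
print(dominant_for_amanda(with_honor))  # prints: Pizza
```

The announcement plus the penalty turns an unsolvable-looking game into one Jay can confidently play against.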
Fig. 2.5 The dinner game with Amanda's honor penalty (jollies):

                     Jay: Pizza             Jay: Salad
  Amanda: Pizza      Jay 25, Amanda 25      Jay 30, Amanda 60
  Amanda: Salad      Jay 50, Amanda 20      Jay 20, Amanda 0
Honor and reputation are not exactly the same thing. Reputation is usually
applied to situations in which you expect to interact with the same player
again at some point in the future. For Jay and Amanda, that fits nicely: they’re
married. Reputation is a factor in repeated games. Honor is more internal,
more endogenous, and immovable. Reputation asks, “what do people think
of me” and honor asks “what do I think of myself.”
Turns out, reputation and honor also affect how the game is played as the
other player changes. Amanda may be more likely to impose the jolly penalty
when she lies to her best friend (reputation is the driving force) than when she
lies to a perfect stranger she'll never see again (honor is the driving force).
Suppose you are 12 years old, and your sibling is 13 years old. You
share a bathroom in your family home. After you take a bath, you leave your
towel and clothes on the floor and the bathroom is a complete mess. Your
grandmother, upon seeing this, says “do unto others as you would have others
do unto you.” Being an inquisitive, polite person, you say “huh?”.
Granny explains: … you know your sibling is going to use the bathroom
next and will have to clean up your mess before they can take a bath. Your
messiness has made the bathroom experience a little better for you, but worse
for your sibling. That's all well and good today but what about tomorrow?
Your sibling might use the bathroom before you and they will follow your
lead, leaving the bathroom in equal disarray. Tomorrow, your bathroom expe-
rience will be the one that’s diminished (apparently your grandmother talked
like an economist…).
"But Granny," you point out, "when we were all staying in that hotel in
Orlando a while ago, everyone, including you, left the towels on the floor.
Why is that different?"
This would seem to be a “gotcha” moment, but Granny can explain the differ-
ence. In that case, there’s someone who is paid to clean the bathroom. Those
people would have to clean the bathroom thoroughly whether you left it a
mess or folded every towel. That’s not the same game and you don’t play it
the same way.
Let’s think about what else mattered in this messy bathroom game, specifi-
cally the age of the players. Age matters for two reasons: you and your sibling,
both young, are going to be around awhile. Logic tells you if you act in your
own best interest in the short run, and the other player does as well, you will
both be worse off in the long run. You’re going to be sharing a bathroom with
each other or somebody else—for a long time.
Age also matters on the other end of the spectrum. Suppose the players are
both in their nineties. While you might get the same cooperative result, you
might also get a couple of people who correctly say, “what long run?” The
point: cooperative behavior is most effective (results in more good payoffs) as
a strategy when it’s adopted early in a long-term, repeated, sequential game.
[box]
Put Me In, Coach
In basketball, there are five players on each team, all trying to score more
baskets than the five players on the other team. Team A has five players of
varying skill levels who each shoot the ball to score every time they get posses-
sion. Team B has “role players.” Some are good at passing the ball, others are
good shooters, etc.
It’s not hard to imagine that Team B will score more points and win more often
than Team A. If everyone on a team gets jollies only from winning the game,
then we can see how giving (passing the ball instead of shooting the ball) on
the part of some players can lead to winning and thus to more happiness.
This particular example might feel like a bit of a reach. Game theory isn’t really
about sports, and there are a lot of strategies for success in team sports. But,
the deep logic of the Prisoner’s Dilemma is valuable. Players of different skill
learn that success comes from cooperation rather than self-interest. When the
five play as a team, rather than individual shooters, they reach their optimal
outcome.
[end box]
don’t smile at you or wave to you because you’re the richest person there.
They have to like you. So, you pick up the poop.
Timing
We’ve been playing games with different expectations of the timing of players’
choices. There are two obvious ways that the timing of players’ choices could
be made. When players choose at the same time, we call this a simultaneous
game. When they choose in a pre-established order, we call it a sequential
game. As described, the Prisoner’s Dilemma is a simultaneous game. Both
players make their choices before they know what the other player has chosen.
Sequential games are ones like checkers and chess where someone goes first
then someone goes second and that repeats until the game is done. We’ll talk
about sequential games in detail in Chapter 6. If Amanda and Jay are going
to get food, that’s a sequential game. One person orders before the other.
Timing is usually part of the structure or rules established when the game is
first created. Interestingly, changing the timing from simultaneous to sequen-
tial (or vice versa) can alter outcomes from the same payoffs. In some cases,
it can help a game turn out better for all players.
What if the Prisoner’s Dilemma was played sequentially? Recall that we were
really specific that Butch and Sundance choose at the same time in separate
rooms. It’s the reason we get to the “both-will-confess” outcome.
Or is it? Let’s play the game sequentially, and with self-interested players.
We haven’t done any sequential games yet, but this one is easy.
Suppose Butch goes first; he will choose before Sundance and his choice
will be known to Sundance. Butch is thinking about choosing “Don’t
Confess,” figuring that Sundance will then do the same and they’ll both
get 3 years. But ultimately, a self-interested Sundance will only look at his
own outcomes. So, if he knows that Butch did not confess, Sundance will
choose to confess and get a sentence of 1 year instead of 3 years. Knowing
that Sundance is likely to confess, Butch would never choose “Don’t Confess”
when he makes the first choice because he’d end up with 10 years, not three.
No honor among thieves.
Same thing happens if Sundance chooses first. (Pro Tip: run the last
paragraph through Fig. 2.6 again to see the numbers).
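If you'd rather watch the sequential logic run, here's a minimal Python sketch (our own illustration) of Butch moving first with both players purely self-interested:

```python
# Years in prison for (Butch, Sundance); "C" = confess, "N" = not confess.
YEARS = {("C", "C"): (5, 5), ("C", "N"): (1, 10),
         ("N", "C"): (10, 1), ("N", "N"): (3, 3)}

def play_sequentially():
    """Butch moves first; Sundance best-responds; Butch anticipates that."""
    def sundance_reply(butch):
        # Sundance minimizes his own years, given Butch's visible choice.
        return min(("C", "N"), key=lambda s: YEARS[(butch, s)][1])
    butch = min(("C", "N"),
                key=lambda b: YEARS[(b, sundance_reply(b))][0])
    return butch, sundance_reply(butch)

print(play_sequentially())  # -> ('C', 'C'): both still confess
```

Even with the first move out in the open, self-interest walks both players straight back to the 5-year cell.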
Fig. 2.6 The Prisoner's Dilemma payoff matrix again (years in prison):

                             Butch: Confess         Butch: Do Not Confess
  Sundance: Confess          Butch 5, Sundance 5    Butch 10, Sundance 1
  Sundance: Do Not Confess   Butch 1, Sundance 10   Butch 3, Sundance 3
If the players are self-interested, the result will be the same whether the
game is simultaneous or sequential. This result is the embodiment of Granny’s
expression “no good deed goes unpunished.” In games where other players
are entirely self-interested, any attempt on your part to reach the cooperative
outcome unilaterally will fail and you will actually end up worse off.
Repetition
Another timing factor that has to be discussed in the setup of a game is
frequency. Is this game going to be played just one time or will
it be repeated many times (perhaps even forever)? As we’ll see in Chapter 8,
this issue matters a lot in many games, and it can even push games like the
Prisoner’s Dilemma toward the cooperative outcome.
Why a whole chapter for repetition? Well, repetition is tricky. Will we play
the game repeatedly with the exact same players or will it be the same game
with new players? If Butch and Sundance literally play this exact same game
repeatedly and they know they will be put in the same jail together, wouldn’t
that incline them to make different choices? To communicate? To collude?
(or, just to steal better and stop getting caught?) We have to amend each
game to show new factors each time it’s played.
Prisoner’s Dilemma is a great framework for understanding the who, how,
and what of a game, and we’ll come back to it a lot in future chapters.
Conclusion
As Granny said, a journey of a thousand miles begins with the first step.
Thus, we begin our journey into game theory. We worked through our first
game, the famous Prisoner’s Dilemma. We’ve talked about game structure—
the players, their choices, the payoff matrix, and timing. We learned how to
approach and solve a game and discussed dominant strategies, cooperation,
and honor.
We waded into deeper, harder issues, like: what's really in the payoff matrix?
Is it the explicitly stated payoff, jail time in the case of the Prisoner’s
Dilemma, or the real payoff, jollies? Turns out, this game is applicable to
basketball, pizza, and walking your dog. We looked at games with many
choices and, in doing so, decided what’s for dinner.
With this foundation, we can discuss other types of games that feature
heavily in our lives, games that don’t solve out as easily. Most games have a few
possible solutions and thus need additional information to find a solution.
This type of game will allow us to define and employ the Nash Equilibrium—
a way of knowing if a particular outcome is "final." Then come sequential
games, where players' choices are made in a prespecified order, and repeated
games. All of these factors increase the fun in future chapters.
Game theory, for most people most of the time, is informal. Every day, you
are put in the position of making choices knowing that others are making
their own choices, and together you determine the outcome. The point of
game theory is to expand the information we seek, see the genius in game
setups, and better evaluate the choices that determine the outcomes in your
life.
Questions
2.1. You had an opportunity to get someone fired at work, which would have
gotten you a promotion and a nice raise. This did not involve lying or
doing anything inappropriate. Still, you chose not to do it. What aspect
of your payoff matrix told you to, correctly, not make the choice that
would have given you more money for firing someone?
2.2. In searching for a dominant strategy in a game, you look at your best
outcomes under all possible choices the other player could make. You
say to yourself, “if Bill does this, my best choice is that; if Bill does
the other, my best choice is whatever….” Which word in your thought
process is, arguably, the cornerstone of game theory?
2.3. You’ve been neighbors with Janet for 12 years and you can now success-
fully navigate most game-theoretic situations with her. Janet has a lot
of dominant strategies and you’ve figured out her payoff matrix. You’ve
been butting heads a bit with the new guy who just moved into the
house on the other side of yours. Why can’t you work it out with him
as easily as you work with Janet? What’s the difference?
2.4. Coordination games exist everywhere—even on a preschool playground.
Three- and four-year-old kids need a lot of guidance and intervention
to enjoy many of the things on a playground. When they are older,
however, kids become able to play what are called “pickup games”—
spontaneous games often with total strangers. Why can you do this as
a teenager but not a toddler? (hint: think about what you need for a
coordination game).
2.5. Too many choices can be a problem in many situations (not all
game-theoretic). Think about how you approach your decision about
what to have for dinner at a restaurant that has thirty choices. Do you
do anything to narrow your options? Having done that, do you ever
proceed to make it a game-theoretic decision? How did the first step
help the second?
Sam is going to buy the vegetables and Dan is going to buy the protein.
Let’s suppose these people know each other well and they know their own
preferences as well as their partner’s preferences for all possible combinations
of protein and vegetables. Their potential payoff matrix is Fig. 2.7.
Ew, numbers, gross.
What’s your instinct when you face a complicated problem in life with
what appears to be too many choices? Well, you might try to eliminate some
of the choices you hate and make your menu smaller. Game theory takes the
same approach.
First, let’s take a look at a dominant strategy: is there a choice a player
would always make? Spoiler alert: nobody has a dominant strategy in this
game (you can run the matrix to double check, if you want).

[Fig. 2.7: the full payoff matrix in jollies. Sam chooses the vegetable
(asparagus, beans, beets, peas, or broccoli); Dan chooses the protein
(chicken, beef, fish, or liver).]

Next, let's look for dominated choices: options a player would never pick
no matter what the other player does. Liver turns out to be a dominated
choice for Dan, so he can remove that column. By the same logic, beets and
beans are dominated choices for Sam, so he can remove those rows as well.
The result of these dominated choices is shown
in Fig. 2.9.
So, we've been able to remove two rows (beets and beans) and one column
(liver) and make the game look and feel easier to work with. Note, this isn't
a sequential game; they are choosing—shopping—simultaneously, but they
can certainly do these eliminations in a step-by-step way on their own.
Dominated choices only get us so far, though. If we look at the resulting
subgame shown in Fig. 2.9, we see that Dan still doesn’t have a dominant
choice. If Sam chooses asparagus, Dan prefers fish; if Sam chooses peas, Dan
prefers beef; and if Sam chooses broccoli, Dan prefers chicken. So, let’s look
at the vegetables again and see if we can make headway there.
As it turns out, Sam now has a dominant strategy to choose asparagus (and
not a moment too soon). If Dan picks chicken, Sam is indifferent between
broccoli and asparagus (5 jollies in each case). If Dan picks beef, Sam prefers
asparagus outright. If Dan picks fish, Sam is indifferent between peas and
asparagus. Asparagus is either a win or in a tie through all three potential
combinations. Knowing Sam the way he does, Dan will figure out that Sam
is buying asparagus, Dan will buy fish, and dinner is as healthy as can be.
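For readers who like to see the mechanics, the elimination procedure Sam and Dan just walked through can be sketched in a few lines of Python. The payoff numbers below are invented for illustration (the actual Fig. 2.7 jollies are not reproduced here); only the procedure mirrors the text.

```python
# Iterated elimination of dominated choices, on a made-up shopping-style game.
# Each cell is (Sam's jollies, Dan's jollies); Sam picks the row (a vegetable),
# Dan picks the column (a protein).
sam_choices = ["asparagus", "beans", "broccoli"]
dan_choices = ["chicken", "fish", "liver"]
payoffs = [
    [(5, 2), (5, 6), (4, 0)],  # Sam: asparagus
    [(1, 3), (2, 4), (0, 1)],  # Sam: beans (always worse for Sam than asparagus)
    [(5, 7), (3, 1), (2, 2)],  # Sam: broccoli
]

def dominated_rows(p):
    """Rows Sam would never pick: strictly worse than another row in every column."""
    return {r for r in range(len(p)) for r2 in range(len(p))
            if r2 != r and all(p[r2][c][0] > p[r][c][0] for c in range(len(p[0])))}

def dominated_cols(p):
    """Columns Dan would never pick: strictly worse than another column in every row."""
    return {c for c in range(len(p[0])) for c2 in range(len(p[0]))
            if c2 != c and all(row[c2][1] > row[c][1] for row in p)}

# Keep eliminating until neither player has a dominated choice left.
while True:
    bad_r, bad_c = dominated_rows(payoffs), dominated_cols(payoffs)
    if not bad_r and not bad_c:
        break
    payoffs = [[cell for c, cell in enumerate(row) if c not in bad_c]
               for r, row in enumerate(payoffs) if r not in bad_r]
    sam_choices = [s for r, s in enumerate(sam_choices) if r not in bad_r]
    dan_choices = [d for c, d in enumerate(dan_choices) if c not in bad_c]

# The game shrinks: beans and liver drop out, leaving a 2x2 subgame.
```

The loop matters because eliminating one player's dominated choice can expose a newly dominated choice for the other player, so the process repeats until nothing changes.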
Notice, we were able to solve this game with only one player having a
dominant strategy (Sam) and what we called full information. Even though
Dan never had a dominant strategy of his own, knowing Sam's payoffs well
enough to predict the asparagus choice was all he needed.
[Fig. 2.9: the reduced game after eliminating dominated choices. Dan chooses
chicken, beef, or fish; Sam chooses asparagus, peas, or broccoli.]
[box]
Some retailers have turned "too many choices" into a marketing strategy. The
retailer (or restaurant) has made an active decision to give you so many choices
that occasionally you buy the item they want you to buy. Store brands that
look almost identical to name brands are placed alongside their counterparts
on crowded shelves in the hopes that you grab the “wrong” bottle. We don’t
normally want to consider games in which one player counts on the other
player to regularly make mistakes, but the too-many-choices game might be
an exception.
2 No Honor Among Thieves: The Prisoner’s Dilemma 43
Other retailers take the opposite approach: they remove choices for you. Large
chains like Costco and Trader Joes famously have only 1 or 2 options for each
product. From an economics perspective, this means there is limited variety, so
quality must be higher to ensure consumers continue to buy. For the consumer,
though, removing excess choices is beneficial; the choice set returns to
“buy” or “don’t buy,” rather than a more complicated and stressful matrix.
[end box]
Go Up A Hill…
Having more than two choices doesn’t always mean the problem is difficult.
Consider the game in Fig. 2.10:
Jack and Jill are choosing between three items each: A, B, or C for Jill, and
X, Y, or Z for Jack. When we use the process that we learned to search for a
dominant strategy, we see that Jack does not have a dominant strategy—but
Jill does. Jill will always choose C. She is indifferent between A, B, and C
if Jack picks X; she is indifferent between B and C if Jack picks Y; and she
prefers C if Jack picks Z. The logic is tricky, but she’s always getting the best
possible outcome when she picks C.
[Fig. 2.10: Jack and Jill's payoff matrix. Jill chooses A, B, or C; Jack
chooses X, Y, or Z. Jill gets 5 jollies from each of A, B, and C when Jack
picks X.]
Surely the fact that Jill has a dominant strategy makes this game easy to
solve. In the Shopping Game, we saw that all we need is one player to have
a dominant strategy and full information and we can find a solution. The
player that does not have a dominant strategy knows, thanks to full
information, what the other player is going to choose. But this game is a
little trickier.
Take a look at Fig. 2.10 again. In this case, knowing that Jill will choose C
leaves Jack indifferent between X and Y. That means the best we can do is
say this game will end up at either 4\5 or 4\10. (Jack\Jill).
We've actually made this game trickier still. While Jack does not have a
dominant strategy, he does have a dominated strategy: Y. Regardless of Jill's
choice, Jack would never pick Y. If Jack's first approach to playing this game
is to see if he has dominant then dominated strategies, he will find that he has
the latter. Then, when he discovers Jill's dominant strategy, he will choose X.
But, suppose his first approach is to see if Jill has a dominant strategy. Then
the game could have either of two solutions: 4\5 or 4\10.
On seeing this, we're tempted to say that Jack will steer the result to 4\10
because he's no worse off and Jill is better off. Two things work against that.
First, if we've specified the game correctly, everything Jack cares about is in
his number, and that means he is completely indifferent. Second, if he did
look for a dominated strategy before looking for Jill's dominant strategy,
he eliminated Y and thus made 4\10 impossible. There is no rule about what
you do first when you try to solve a simultaneous game. Everyone is different.
The point of this example is to show that these formal approaches to
game-theoretic situations—dominant strategies, dominated strategies, etc.—
still have potential potholes. You can learn to do games “right” and still find
games that are hard to solve.
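A quick way to see why ties make things slippery is to code the search. The sketch below uses a hypothetical payoff matrix built to match the structure just described (the actual Fig. 2.10 numbers are not shown in the text): Jill's C is never worse and sometimes better than A or B, while Jack's Y is weakly dominated by X.

```python
# Searching for weakly dominant and weakly dominated choices when payoffs tie.
# payoffs[jack][jill] = (Jack's jollies, Jill's jollies); numbers are invented.
payoffs = {
    "X": {"A": (3, 5), "B": (2, 5), "C": (4, 5)},
    "Y": {"A": (2, 3), "B": (1, 10), "C": (4, 10)},
    "Z": {"A": (5, 1), "B": (1, 2), "C": (1, 7)},
}
jack_choices = list(payoffs)
jill_choices = ["A", "B", "C"]

def jill_weakly_dominant():
    """A Jill column that is at least as good as, and somewhere strictly
    better than, every other column, no matter what Jack picks."""
    for c in jill_choices:
        if all(
            all(payoffs[r][c][1] >= payoffs[r][c2][1] for r in jack_choices)
            and any(payoffs[r][c][1] > payoffs[r][c2][1] for r in jack_choices)
            for c2 in jill_choices if c2 != c
        ):
            return c
    return None

def jack_weakly_dominated():
    """A Jack row that some other row matches or beats in every column."""
    for r in jack_choices:
        for r2 in jack_choices:
            if r2 != r and all(
                payoffs[r2][c][0] >= payoffs[r][c][0] for c in jill_choices
            ) and any(payoffs[r2][c][0] > payoffs[r][c][0] for c in jill_choices):
                return r
    return None

# Jill's weakly dominant choice is C; Jack's Y is weakly dominated by X.
```

Note that the ties are exactly what created the pothole in the text: even after both searches succeed, Jack's payoff can be identical across two surviving outcomes, so the search alone doesn't pin down one solution.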
3
Know Thyself: The Players
Leaders, therapists, and significant others use this phrase more than
“rationality,” because we don’t expect others to be perfect. It’s a little
less analytical, but it still implies a person has an “order,” or a level of
predictability. When all is said and done, that’s all rationality means: a consis-
tent choice. It means that a person presented with the same choices and the
same circumstances will consistently make the same decisions. It does not
mean that everyone uses the same approaches or “gets jollies” from the same
thing. It does not mean that these choices are consistently bad or good. Only
that there’s a little predictability to the player that allows you to assess the
game.
Are your employees motivated by money, benefits, or mobility, or some-
thing else (like respect)? Is your significant other happy with your choice of
wall art, or is she just choosing not to fight that particular battle? Does your
boss actually like your idea, or is he placating you so you’ll stay? Commu-
nication is the key to many game-theoretic situations. It may not be verbal
(actually, it almost never is), but communication happens between players
before, during, and after a game. This transmission of information is the
lifeblood of game theory. It can reveal, conceal, reinforce, invalidate, and
directly affect the setup of a game.
What makes people tick?
Sometimes, you meet people who are motivated by things you can’t
comprehend. Remember, rational just means predictable. It doesn’t mean you
would choose the same things as another person. Their choices do not need
to seem “good” to you. Even if a person’s preferences are incomprehensible to
you, you can still play the game because you can specify it.
But, what if the person’s choices are all over the place? It’s not just that
your preferences don’t align, you don’t even know what the person wants.
Unpredictability, irrationality, a person who has no discernable sense of order,
is hard to play with. You can’t win if you can’t even figure out what it looks
like when they win.
Along these lines, a popular relationship-based cliché is “opposites attract.”
For magnets, sure. In relationships? Eh, not so much. The cliché is probably
Granny’s mistaken observation. It’s not opposites who are attracted to each
other, but complements. Players who are better described as complements are
arguably trying to reach a cooperative outcome that’s better for both parties.
Each player in the relationship is looking for a partner to fill in their gaps,
but they’re still trying to achieve similar goals.
People who are true opposites might find that a relationship is pretty
challenging. Every game-theoretic decision—and there are many in a rela-
tionship—would find the couple at odds with each other because payoffs are,
well, inverse. Interestingly, they might be able to predict each other’s choices;
exact opposites would assume the other player made the opposite choice from
theirs every time. But, it’s hard to imagine either of them would be happy
with the outcomes, even if they could play the game.
Fig. 3.1 (payoffs in dollars). Player A chooses the column (X or Y); Player B
chooses the row (X or Y). Each cell lists A's payoff, then B's.

              A: X                A: Y
B: X     A $5, B $5        A −$10,000, B $6
B: Y     A $6, B $8        A $10, B $10
Two players, A&B, are choosing simultaneously and with full information,
meaning both players know the entire payoff matrix.
Player B has a dominant strategy to choose Y. Regardless of Player A’s
choice, Player B is better off choosing Y.
Player A doesn’t have a dominant strategy. If Player B chooses X, Player A
would choose X; if Player B chooses Y, Player A chooses Y.
With full information, Player A knows that Player B will (should) choose
Y, so Player A will also choose Y and the game should solve out with both
players getting $10, right?
Maybe.
This nice, neat, game-theoretic process puts Player A in a precarious posi-
tion. They are making the choice (Y) that allows the possibility of their
worst-case scenario. If Player B, for whatever reason, chooses X, then Player
A will end up with −$10,000.
Game theory academics say that won’t happen; players always will and
should pick their dominant strategy. Oh, but Player B is human. It could
happen. Player B might make a mistake or not understand the game or have
a different preference for money and think $10 k isn’t that big of a loss. Any
way you slice it, Player A could lose a lot of money, even though she shouldn’t.
Whatever the situation is, you can’t say Player A should always trust Player
B just because he has a dominant strategy. People go against game-theoretic
strategy all the time. Real-world probabilities tell us there’s always a chance
the worst thing can happen.
But, Granny already knew that—and she prepared you for it: “better safe
than sorry.”
She was specifically telling you to make choices that avoid the worst-case
scenario, even if it didn’t bring you or the other person the best return.
Note: it doesn’t have to be about the other player making mistakes playing
the game or wanting to see you suffer. We just can’t know with 100%
certainty what the other player’s payoffs really are. Any outcome—however
statistically improbable—is possible.
Maybe we can get a little closer to understanding, though. Money doesn’t
incorporate all of the potential emotions that could lead to an unfavorable
outcome (like spite). But, jollies do. Jollies allow us to incorporate things
like risk aversion, caution, and sensibility. Jollies also allow us to evaluate
the scale of preferences: how drastically different a particular outcome will
make somebody feel.
Let’s respecify the payoff matrix incorporating Player A’s fear of loss. This
will allow us to get the maximin outcome without messing up our game
theory approach.
Fig. 3.2 (payoffs in jollies). Player A chooses the column; Player B chooses
the row.

              A: X                      A: Y
B: X     A 5, B 5 jollies        A −10,000, B 6 jollies
B: Y     A 6, B 8 jollies        A 5, B 10 jollies
We've made two changes in the payoff matrix in Fig. 3.2. First, we've
converted dollars into jollies; second, we've taken half of the jollies away
from Player A in the bottom right corner. Why? If A chooses Y, they'll face
anxiety (because there's a possibility of getting −10,000 jollies). This feeling factors
into a payoff matrix with jollies, since it accounts for the relative happiness
in each outcome.
Player A now has a dominant strategy to choose X, and Player B still has a
dominant strategy to choose Y. The game solves out at the bottom left corner,
the previous maximin solution. Player A gets 6 and Player B gets 8, and this
is an equilibrium (nobody wants to change). Correctly specifying and using
jollies, the things that make people tick, is key to using game theory correctly.
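Since the payoffs of this game are spelled out above, the whole argument can be checked mechanically. Below is a short Python sketch; the dictionary encoding and the helper function are illustrative choices of ours, not the book's notation.

```python
# Fig. 3.1 and Fig. 3.2 payoffs as described in the text. A picks the first
# slot of each key (column X or Y), B picks the second (row X or Y); each
# cell is (A's payoff, B's payoff).
dollars = {
    ("X", "X"): (5, 5),   ("Y", "X"): (-10_000, 6),
    ("X", "Y"): (6, 8),   ("Y", "Y"): (10, 10),
}
# Respecified in jollies: anxiety halves A's payoff in the bottom right cell.
jollies = {
    ("X", "X"): (5, 5),   ("Y", "X"): (-10_000, 6),
    ("X", "Y"): (6, 8),   ("Y", "Y"): (5, 10),
}

def dominant(payoffs, player):
    """Return the player's strictly dominant choice, or None.
    player=0 is A (first slot of each key/cell), player=1 is B (second)."""
    def get(me, them):
        key = (me, them) if player == 0 else (them, me)
        return payoffs[key][player]
    for mine in ("X", "Y"):
        other = "Y" if mine == "X" else "X"
        # Dominant means strictly better no matter what the other player does.
        if all(get(mine, t) > get(other, t) for t in ("X", "Y")):
            return mine
    return None

# In dollars, only B has a dominant strategy (Y); A is exposed to -10,000.
# In jollies, A's dominant strategy becomes X, and the game settles at
# the maximin cell: A gets 6, B gets 8.
```

Running both matrices through the same function shows the point of the respecification: nothing about the procedure changed, only the payoffs did, and that alone moved the solution.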
play games with people who allow her to determine the outcome. Here's the
thing: if you’re the control freak, you don’t usually recognize you’re acting
with this motivation. Control tendencies are usually recognized from the
perspective of other players who cannot seem to affect their outcomes.
If you’re playing against a control freak, you should recognize that your
choices don’t affect the outcome. Knowing this, you might not play. A few
examples: if Mary likes driving the car once in a while, she probably won’t
end up in a relationship with Harry, a notorious control freak. If your friend
never lets you choose where to eat, you might stop going (or stop caring
where you eat). You always have the option to not play the game.
[box]
Many countries and cultures actively remove some of their citizens from the
game as a measure of control. Concepts like patriarchy, hegemony, and even
colonization are all situations where a small group of players usurps the
decision-making ability of a large swath of the population in order to maintain
control.
Laws and rules that don't allow people to vote, work, drive, etc. are
systemic versions of the control freak. While the players who are allowed to
make all of the choices can tell themselves that they are taking care of everyone
else, the fact is, they are not ever allowing people to choose. As we have seen
and will see in repeated games, this policy often leads to inferior outcomes. We
will revisit this issue in Chapter 4 and connect it to information and access to
education.
[end box]
An easy example of how rationality is relative can be seen when you think
of… well… your relatives. Your older relatives, to be precise.
We hope to all be there; we all get older and change. When a teenager
looks at mom trying to figure out TikTok, they might say "I'll never be so
incompetent.” Over time, though, there’s no way to know. Saying you “won’t
be like that” is assuming you even know what “that” is. You are who you are:
your age, your tastes, your concerns, your feelings. You probably won’t be
like “that” in the sense that you won’t be like your elders are right now, but
they—and Granny—probably said the same thing once upon a time.
Some would classify the actions of an older loved one as “irrational,”
but now we know that doesn't fit the definition in game theory. Priorities
at any age, based on age, can be understood, and preferences of any
predictable nature can be worked into a payoff matrix. You might try to
understand the perplexing approaches or thought processes of an older
relative, but remember: they aren't irrational. They just aren't…you.
Here’s a fun one: try to figure out a payoff matrix when you go out to eat
with an older relative. The answers might surprise you, but the process will
enlighten you.
If game theory is applicable to seniors, why not to babies? Well, as we saw
in the introduction, babies have not yet figured out how to make decisions.
They are purely reactive beings, responding to positive and negative stimuli
without an agenda. Babies won’t respond to game theory the way a learned
adult might.
Know Thyself
When we discuss sequential games in Chapter 6, we will include games that
you play with yourself. These are not the same as unilateral, controlled games.
Rather, we mean games where your decision today can change your outcome
of a future game. In essence, you’re playing against the needs of your future
self.
When we do this in game theory, we are not simply talking about growing
up, maturing, gaining experience, or other things that happen naturally with
age. We are talking about choices today that allow for different choices in
the future. Game theory for an individual is quite literally the fork in the
road. Every choice you make today can preclude or allow a future choice. As
Granny would say: “when one door closes, another one opens.”
Consider the curious situation that you are in when you play games with
your future self. You think that of all the people on earth, you most assuredly
know your preferences and payoffs. In fact, you do not. You know your-
self today; you do not know yourself 20 or 50 years from now. You don’t
know what will matter to you, your biases, your constraints. You don’t know
what will give you jollies. You don’t really know yourself in the future in any
meaningful way.
The proof of that can be seen if you look back over your life a decade or
two. Why is it easy to give your teenage son advice about education, drugs, or
sex? Because, as Granny might have said, you’ve “been there and done that.”
You can (hopefully) remember your feelings, what gave you jollies, and what
mattered to you when you were his age. You know how your choices at his age
changed outcomes for you, and you can give some sage advice. “Hindsight,”
Granny said, “is 20/20.”
It is easier to predict player preferences and accurately specify a payoff
matrix if you’re already past the time in life your players are experiencing.
Giving your kids advice is easier than dealing with your aging parents. You
simply haven’t gotten there yet.
When you’re “not thinking straight,” you tend to do things that are
mentally easier. Strategic decision-making depends on analysis and evalua-
tion. So, tiredness is strategically problematic because you might mis-specify
the players, choices, and payoffs. For example, you might think that everyone
you’re dealing with is just like you (has the same payoff matrix). You might
also fail to recognize there are more than two choices or that there’s another
motivation for the other player. A reduced ability to accurately evaluate and
set up a game often leads to bad outcomes. It’s why Granny told you to “sleep
on it.” Don’t try to make difficult decisions when you’re too tired to think!
Physiological factors can affect decision-making in a couple of ways. Pain,
for example, can make people less able to process things—since it reduces
concentration—and thus less able to make correct strategic decisions. Inebri-
ation reduces the ability to process potential consequences in the short- and
long-term, also leading to suboptimal outcomes. When we focus on the
human aspects of game theory, we can’t ignore the effects of physical health.
But, going down this path needs to be done with great care. Hormones are
often associated with things like overactive sex drive and rage. Just because
the science is valid doesn’t mean we can use biological excuses for bad deci-
sions. When the correct outcomes for all players are included in a payoff
matrix, game theory specifically never allows for a “hormonal justification” for
making bad decisions. On the contrary, every punishment in a game exists to
take away jollies from bad choices and reduce the incentive to harm the other
player. Correctly specified jollies de-incentivize bad behavior. Game theory
does not allow for biology as an excuse.
[box]
Revenge
There's a well-known, rather cryptic expression (usually described as an Italian
proverb): "revenge is a dish that's best served cold." What does this mean?
All of this sounds pretty ominous, but it doesn’t have to be. Revenge can be
silly. If you’re building a snowman with your sibling and out of the blue, they
hit you with a snowball and snow goes down your back (usually the worst
outcome of a snowball sneak attack), you aren’t likely to get your best revenge
if you respond immediately; their guard is up. So, you laugh it off and wait
until they relax, maybe after they turn their back on you. Revenge, literally, is
best served cold.
Granny's often-misinterpreted expression with the same meaning is "he
who laughs last, laughs best," or simply "getting the last laugh." This has
nothing to do with humor. This one is about repeated games. If you’re making
choices with the same player over and over, the expression tells you to exact
your revenge in the last game. Laughing is the jollies you get by taking revenge
for something, knowing the other player can never get back at you. There’s no
chance for them to take revenge on the revenge. We’ll talk more about this
motivation when we examine repeated games.
[end box]
Suppose there are two restaurants close to each other with equally good
food. One focuses its resources on talent, ingredients, and ambiance that
earn it a Michelin star. The other spends the same amount of resources on
marketing. Is it obvious which one will survive? Do the restaurants have the
same goal? (hint: do they have the same audience?)
Or, how about this one: success, to academics like college professors,
includes getting tenure. Roughly speaking, that means you have a job for life
at a college or university as long as the school and your department survive.
At many schools, getting tenure requires publishing several research papers
in academic journals that are peer-reviewed and highly ranked. The rules
for getting tenure were created many years ago when creative and research
freedoms needed formal protection.
Who’s the customer for a school? Students. Do students care about any of
the metrics that lead to tenure? Do they even know what they are?
What happens when the priorities of two players (in this case, a customer
and a producer) aren’t aligned?
Rules, rationality, and the payoff matrix are all interrelated. You will find
yourself in games where you are told the rules and you have to decide if you
even want to play the game. The choice can be difficult and problematic. For
better or worse, the rules were written by the people who got there first. You
can see something that looks like a lot of fun—looks like it will give you a
lot of jollies (like teaching), but the rules involve educational attainment (a
PhD in many cases) and other seemingly arcane metrics (published research)
that make the goal less fun and reduce your jollies. Knowing this, you might
become a Marketing Director and write books instead.
Historically, rules are set to establish and maintain some level of quality.
Teaching college means teaching concepts and approaches that are more
than your opinion. Aha! Light Bulb Moment: the relevance of your success
depends on its impact. If you teach people well and they live a better life, or
if your research solves a problem and it changes the world for the better, or
if the work team wins the Softball Championship, it may be worth it. If you
can trace the outcomes of your accomplishments to something beyond you,
then you did not spend your life in hot pursuit of an irrelevant success.
If mortality is the finish line for accomplishments, then, to paraphrase
Da Vinci: “a life well-spent brings happy death.” When all is said and done
(thanks, Granny), getting the last laugh might be the best outcome.
[box]
The hot pursuit of an irrelevant success was actually the point of a popular
song in the 1970s. Dr. Hook and the Medicine Show was a band that achieved
moderate success during the seventies when it released the song “The Cover of
the Rolling Stone.” Written by poet Shel Silverstein, the song laments that the
group won’t feel like they are a success until they have appeared on the cover
of the then-iconic music magazine Rolling Stone.
At the time of the song’s release, that cover was considered the height of success.
But, the long-term relevance of that achievement more or less died with the
demise of print magazines. Rolling Stone is still around, but it’s mostly an online
publication. “Cover of the Rolling Stone” is a great song with diminished
cultural impact.
[end box]
Sincerity
There are humanist things that affect game outcomes that can’t be easily
taught, but they are well-understood. When you’re trying to communicate
your motivations, what makes you tick, to the other player, you’re often put
in the position of saying things like “money doesn’t matter to me” or “I’m
not in this for myself.” Roughly speaking, you’re trying to get the other
player to believe that you want to make cooperative choices—as opposed to
self-interested choices.
When the other player believes you, they likely feel that you are sincere
or genuine. While these words have definitions, their application and real-
world use are usually informal. People "pick up on things" when they talk to
you. Executive training consultants say it’s eye contact or facial expressions
or body language—all of which can be associated with sincerity. (But, those
things can be taught, learned, and therefore mimicked. They may not be the
most reliable indicators of sincerity.)
“How you come across” is a critical part of the outcome of all games that
you play. As we have more interactions, play more games, and see “what works
and what doesn’t,” we are ultimately testing signals and learning. Hopefully,
many subtle signals of things like sincerity “work for us.” And that’s the key:
what works for one person doesn’t always work for others. Communicating
sincerity is often a matter of reinforcing some aspects of your personality and
softening or removing other things that make you seem self-interested.
A Sense of Fairness
Using jollies can help you put a lot of things into a payoff matrix and incor-
porate more of the human element into a game. There are concepts out there
that are more like feelings; they are things that are easy to say but hard
to define. One such concept is fairness. Being fair might be as generic as
“everyone gets to play by the same rules,” or it might be as personal as “things
turned out a lot better for me than they did for you and that makes me feel
bad.” The latter approach can be incorporated in the payoff matrix of a game.
Suppose the payoffs for a simultaneous (everyone chooses at the same
time), noncooperative (players can’t collaborate) game with full information
(everyone can see the payoff matrix) looks like Fig. 3.3:
To make this game easier, let’s assume that these players each get one jolly
for every dollar that they end up with. This game will end up with Person A
getting 3 and Person B getting 10. Person B has a dominant strategy to pick
Up; Person A does not have a dominant strategy, but they can see that Person
B will pick Up, so they pick Left.
Now suppose person B feels really bad—as in, loses jollies—whenever he
ends up a lot better off than person A. Specifically, B loses 6 jollies if his
winnings are more than twice A's winnings. B feels guilty.
That sense of fairness on the part of B will change where the game ends
up. To see that, let’s look at the payoff matrix, now only with jollies. We can
Fig. 3.3 (payoffs in jollies). Person A chooses the column (Left or Right);
Person B chooses the row (Up or Down).

                  A: Left                A: Right
B: Up       A 3, B 10 jollies      A 2, B 5 jollies
B: Down     A 3, B 5 jollies       A 4, B 4 jollies
Fig. 3.4 (B's guilt included). Person A chooses the column; Person B chooses
the row.

                  A: Left               A: Right
B: Up       A 3, B 4 jollies       A 2, B −1 jollies
B: Down     A 3, B 5 jollies       A 4, B 4 jollies
include this self-imposed punishment in both cells of the Up row to see how
guilt affects B's jollies, as in Fig. 3.4:
This change doesn’t affect Person A; he still doesn’t have a dominant
strategy (we didn’t change his payoffs). However, it does affect B. He now
has a dominant strategy to choose down, and the game will now end up with
both players getting 4. B’s guilt changed the payoff matrix and therefore the
outcome of the game.
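B's sense of fairness is just a mechanical transformation of the payoff matrix, which makes it easy to sketch in code. The numbers below are Fig. 3.3's (one jolly per dollar), and the guilt rule is the one stated above: B loses 6 jollies whenever his winnings are more than twice A's.

```python
# Fig. 3.3's payoffs: B picks the row (Up/Down), A picks the column
# (Left/Right); each cell is (A's jollies, B's jollies).
base = {
    ("Up", "Left"): (3, 10),  ("Up", "Right"): (2, 5),
    ("Down", "Left"): (3, 5), ("Down", "Right"): (4, 4),
}

def with_guilt(payoffs, penalty=6):
    """B loses `penalty` jollies in any cell where he wins more than twice A."""
    return {cell: ((a, b - penalty) if b > 2 * a else (a, b))
            for cell, (a, b) in payoffs.items()}

guilty = with_guilt(base)
# (Up, Left): 10 > 2*3, so B drops to 4; (Up, Right): 5 > 2*2, so B drops
# to -1. The Down row is untouched, which is what makes Down B's new
# dominant strategy and moves the game to both players getting 4.
```

The transformation only rewrites B's numbers, which is why A's (lack of a) dominant strategy is unchanged while B's flips from Up to Down.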
There are people who would say B is illogical or irrational for not acting
in his own self-interest, but there’s nothing particularly wrong with basing
your jollies on things like how the other player might feel. We do this all the
time in healthy relationships. The choice was still self-interested: B chose the
option that didn’t make him feel guilty. When we use happiness, we have to
remember that it isn’t measured the same way as money, revenue, house size,
or other metrics. More/bigger isn’t necessarily “better” in everyone’s mind.
Conclusion
We run the risk of “losing” every game, making incorrect or inappropriate
choices, when we don’t understand what matters to the other players. The
easiest thing to do is assume that everyone is just like you. But game theory
isn’t easy (sorry). The very nature of doing game theory correctly involves
open-mindedness, a worldly perspective, and a lack of prejudice.
(We’ll see in Chapter 7 that prejudice can dramatically alter the outcome
of a game.)
In classroom game theory, there are usually straightforward games (the
traditional version of the Prisoner’s Dilemma), and difficult, complicated,
heavily mathematical games. Straightforward games are perceived as easier.
In the classroom, that’s correct. In the real world, where you have to figure
out “what makes the other player tick,” specify their payoffs, and balance
your own preferences, you realize all game-theoretic situations are tricky.
Players in important games are more than just other ages or other genders.
They are other countries, other cultures, other belief structures. It’s very
easy to be judgmental when you do game theory out in the real world.
Feeling superior because you always do what you perceive as the coopera-
tive thing might give you jollies, but it doesn’t always give you enough (extra)
jollies to compensate for playing a game naively or mis-specifying a game.
Understanding the player is step one toward strategic success.
As it turns out, what you don’t know can hurt you. Ignorance, in game
theory, is rarely bliss.
Questions
3.1. Two people meet while partying in Las Vegas. Over the course of an
alcohol-soaked weekend, they get married at the Chapel of Love (yes,
that’s a real place). One year, 2 months, 3 weeks, and 4 days later, they
filed for divorce. Another couple is set up by their families in an arranged
marriage. Despite never dating before their wedding, they are married
for the rest of their lives. What game-theoretic information is different
between the two couples?
3.2. The point of learning game theory is to better understand and navi-
gate situations where you and another person are making choices that
together determine the outcome for each of you. Given that, it’s point-
less to study game theory if you assume everyone is just like you.
Consider this situation: a 65-year-old grandfather is taking care of his
14-year-old grandson. They decide to watch TV together. Grandpa is
only interested in a fishing show. Grandson only wants to watch a newly
released science fiction show. Neither of them has any interest in the
other’s show. How will they be able to decide which of these shows to
watch?
Granny used to say: “a little knowledge is a dangerous thing.” No, she wasn’t
telling you to drop out of school. The key to this cautionary expression is the
word “little.” Granny was saying that too little knowledge about something
might cause you to misstep in the future. You think you know enough to
make an informed decision, but you don’t.
There are probably a dozen expressions extolling the value of knowledge
and information. In game theory, “who knows what” and how information is
used almost always determines the outcome of games. Information removes
uncertainty. Full information brings you closer to a full understanding of
game setup: the other players, your choices, and your outcomes.
Uncertainty happens in the absence of reliable information, and it creates
a problem in any decision-making process. As we saw in the last chapter,
the underlying challenge in game theory is figuring out what someone else’s
payoff matrix really includes—what they really value. In the real world, it
is nearly impossible to understand other people’s jollies, what makes them
tick, fully. Information (in many forms) becomes an important tool in getting
closer to clarity.
Let’s take a look at the types of information that can be helpful, how and
when we use them, and where you can leverage information across multiple
types of games.
Information is a broad term, and as we stated before, we have to be specific
about what we mean. In a game theory context, information is anything that
helps one player better specify the potential outcomes of a game. It could
be related to the preferences or motivations of the other player, the type
and frequency of the game at hand, and the choices available to all players.
at all. On the other hand, we regularly play games that are based on bluffing,
like poker. Great poker players assume bluffing is happening, and their skill
lies not only in betting on a great hand but also in leveraging bluffing players
against each other. It takes a keen eye to see a pattern (and therefore infor-
mation) in a bluffing signal. Some would say it’s the whole point of the
game.
Signals are given to change the behavior of a player. If a person sending
a signal is not bluffing and she wants the signal to change the behavior of
the recipient, she will have to decrease or eliminate the noise to increase the
validity (trustworthiness) of the signal. This can be done by including some
form of proof to back up the claim. In day-to-day interactions, proof is more
than “I swear” or “I promise.” As Granny said, “talk is cheap.” When you
apply for a loan, you can swear that you’ll repay it, but the lender will prob-
ably look at your credit score or another metric to know with reasonable
certainty whether the loan will be repaid. You provide a signal, and your
credit report verifies it’s not a bluff.
Can bluffs happen on screens? Absolutely, we do it all the time. A bluff can
inject noise into a screen easily, since the less-informed player doesn’t know
whether the screened information is correct. It could be that he doesn’t know
the other player well enough to evaluate the answers for truth. It could also
be a processing problem: did you ask the right question? The person trying
to get information might also face a cognitive bias in his interpretation of the
screened information, based on many factors (Do I like how the response is
worded? Is it cold in here? How long have I been screening candidates?).
“You can’t judge a book by its cover” is one of Granny’s cautions about
what your screened information is actually worth. She means you can’t tell
how good a book is by simply looking at the front; the cover doesn’t give
you enough information to accurately assess how good the book is. Granny
is cautioning you against an unreliable screen, lest you buy the book and hate
the content.
If the game is on a board in a classroom: yes, you have all the information
you need to solve a game. In reality, information gathering takes time, skill,
patience, and the ability to see through many human factors.
“Don’t judge a book…” is talking about the misuse of spurious visual
screens—making judgments about people based on their appearance. Because
of this, screening has become rightfully regulated or outlawed in many cases.
You aren’t allowed to use screens like ethnicity or gender to make decisions
about most things. Historically, screening on those sorts of things has been
used inappropriately to make decisions on hiring, housing, compensation
and promotion, mental toughness, and college admissions. Regulations have
4 Fool Me Once: The Value of Information 67
Consider tennis. Suppose your opponent yells, “I’m going to hit the ball to
the left side next time.” Why would they tell you that if it was true? They
wouldn’t. Why would they bother lying to you at all? Because they want you
to deviate from your current strategy that has you ahead 5-0 in the final set.
First and foremost, information sharing only makes sense if the game allows
the information to actually make a difference. In poker, it takes a trained player
to filter bluffs and signals that come their way. In a zero-sum game, it may take one
or two bad outcomes before you stop trusting all signals altogether. Even in
games that might be win-win, information can be used to move the outcome
in your favor or against you.
[end box]
to get you to think twice about your dear friends. The expression is great for
game theory too.
Suppose a young, highly talented basketball prospect is trying to choose
between two colleges that are equal in most ways. Each college's Athletic
Director (AD) will likely tell her that she'll get a lot of playing time, the
team will promote her in the press, and she’ll start each game. Assume both
teams kept most of their starters from last year, and the colleges finished their
conference in first and second place, respectively.
How can the prospect choose which school to attend? Both coaches are
signaling what they think is useful information, and it’s virtually the same
for each school. Both ADs said the same things about opportunities to get
playing time and the likelihood of wins. The team makeups are almost iden-
tical, records are close, and the facilities available are state-of-the-art. The
schools are equidistant from the player and each program is offering her the
same scholarship.
What other information could the candidate use to break this tie?
What about the personality of the coach? Suppose Coach X has more
charisma, is more energetic, and seems more excited about the prospect of
getting the student on board. He calls all the time and makes a big show of
trying to recruit her.
The other coach, Coach Y, is a close friend and frequent confidant of one
of the most successful professional coaches in the country. This is not some-
thing he talks about openly, but everyone, including the prospect, knows
about it.
If the player believes that friends are a signal, and if she believes Coach Y’s
friendship is indicative of his commitment to basketball, she might choose
Coach Y’s college. The friendship is a signal, a new piece of information, and
it influences her decision and the outcomes of the game. The player is judging
the choice before her by the company each coach keeps.
Signaling and interpreting signals is complicated and personal.
We said that both coaches offered the same amount of playing time, but
an experienced player knows that outcomes—both individual and team—
depend on the amount of effort that all players put forth. Being a good
friend of someone successful is a positive signal, but so is being a charismatic,
inspirational coach. As it turns out, many of us “follow our gut” when it
comes to choices like this. We could have just as easily said the player values
enthusiasm as a force in leadership and chooses Coach X. Interpretation is
extremely personal, dependent on the player’s priorities and preferences.
There are infamous examples of charismatic, inspirational leaders who led
followers down a path to very bad outcomes. Pick your favorite repressive
dictator, for example. This is not a reason to ignore charisma, but it’s a reason
to put these two signals (charisma and friends) together. Most charismatic
leaders have friends—previous followers. Before you follow a charismatic
leader, you might want to talk to, or do a little research on, other people
who have made the choice that you are about to make. Granny’s advice is
even better now: you’ll be known by the company you keep.
Diluted Signals
The most straightforward signal is to communicate directly: say it, write it,
create a PowerPoint for it, or otherwise reveal it completely. This ensures the
information is received. However, direct communication doesn't guarantee
the information is understood. When dealing with signals between humans,
we must be cautious that any signal communicates what we are hoping to
communicate.
Once upon a time, telling someone that you loved them was a very
powerful signal. These days, when “love ya” is something people toss around
mindlessly, the word doesn’t necessarily convey a clear message from person to
person. Along the same lines, when you teach your child manners you teach
them to say please and thank you. These are good habits to be sure, but saying
please and thank you isn’t supposed to be a substitute for good behavior.
Polite words don't entitle you to ask too much of people. Well-mannered
words are not the same as good manners.
Granny cautioned us well to be wary of verbal signals. She said, “actions
speak louder than words,” and “talk is cheap.” Game-theoretically, these
expressions are absolutely true. Notice: she isn’t saying words don’t mean
anything, just that direct communication of some things can be interpreted
in many ways. People absorb and interpret information through their own,
individual lens, with their own biases, processing capabilities, and previous
experiences. Even direct communication—the best form of signaling and
information sharing—can become diluted and misinterpreted.
[box]
Curiosity: Educational Attainment
Formal education is important to build hard and soft skills for future career and
achievement. However, there are those who argue that educational attainment
is actually a signal of something more than formal skills. Being able to say that
you have an associate’s, certificate, or bachelor’s degree doesn’t necessarily mean
you’re smarter that those who do not. It really means you were able to get into,
stay in, and graduate from a college program. The degree provides information
about your privilege, your work ethic, your commitment, or your ability to
learn. A potential employer or client takes the degree as additional screened
information, assuming the values of the college align with what they’re looking
for.
Like other signals, a formal degree has lost much of its informational power
over recent decades. Policies that make it easier to get into college and make it
harder to flunk out have skewed the information a degree provides. Advo-
cates for these policies are (rightfully) trying to improve life outcomes for
people based on data that show college graduates have higher incomes and
better achievement throughout life. There’s nothing wrong with wanting to
give everyone an equal chance. Unfortunately, these policies can also take away
part of the value of the signal, depending on how they are implemented. If a
potential employer no longer believes a degree is a major indicator of potential
and performance, she may not prioritize an applicant with the degree. Perhaps
she moves on to other indicators, like work experience or longevity with a
prior company. In this case, the policy intended to help actually removed some
opportunities, since it diluted what the degree represents. It has the opposite
effect of what policymakers may have wanted.
[end box]
Mixed Signals
“You’re sending mixed signals!” said Granny. In game theory, a mixed signal
is both a big deal and a real problem.
K.I.S.S. Strategy
Granny didn’t mince words a lot. Sometimes, she said “come on, keep it
simple." Some of us had a Granny who embellished it: "keep it simple, stupid."
She was a peach.
Look at it from the other side. If a player tells you something in a game-
theoretic setting, why do you believe them? To use information correctly, you
have to know the game and its payoffs. If you’re at a four-way stop perpendic-
ular to a stranger and they wave for you to go first, you don’t usually question
that information. You don’t even think about this very long because, well, why
would they lie? Mind you, there are possibilities; they might work for an auto
body shop, or they might get jollies from the sound of crashing cars, but far
more likely, they just want you to go first.
Now, go back to playing tennis. You’ve been hitting the ball back and forth
with the other player a few times. It’s a competitive game; you both want to
win, and it’s zero-sum. Just before she hits the ball back to you, your opponent
yells “I’m going to hit this one to your left.” Of course you don’t listen! It
might be true, sure, but this time, the other player has every reason to lie.
Your opponent is trying to “get inside your head”; change your strategy by
giving you completely useless information. That’s the nature of that game.
[end box]
100 Euros, and he happily accepts your offer and hands you the painting.
Did you buy a Van Gogh? Nope…you probably just bought a “Frèd.”
When a transaction involves asymmetric information, specifically when a
good’s value is known to a seller but not a buyer, a buyer will often overpay
for the good. In the case of the painting, any offer that the gallery owner
accepts will be greater than or equal to the true value of the artwork. The
gallery owner—the seller—is the only one who knows the true value. It’s
usually illegal for the seller to outright lie about the quality of a good, but he
can certainly hold back on key information, forcing the buyer to guess (aka,
lie by omission).
Some states or municipalities have tried to reduce information asymme-
tries by forcing sellers and buyers to disclose information. In California,
for example, all job listings must include the salary range for the position.
Revealing such an important factor helps ensure that the "seller," the potential
candidate, does not unknowingly accept a lowball salary. Incidentally, the policy also
helps the “buyer,” the recruiting firm, screen candidates who want more than
the intended range.
[box]
There are other situations where a seller knows what the asset is worth to
him, but not to everyone else. Corporate acquisitions, where one company
buys another, are particularly susceptible to information asymmetries. Even
with mandated financial disclosures and legal discovery periods, the true assets
purchased are unknown until management assumes control.
Let’s say you develop and patent a new form of artificial intelligence (AI). It’s
new, it’s efficient, it’s cutting edge. You nurtured the project from start-up to
innovation to a successful first year, and now Microsoft wants to buy it. You
value your firm at $1 billion. Microsoft offers you $2 billion. You think you’ve
hit the jackpot, you sell, and Microsoft overpays for your company.
Not necessarily.
[end box]
Let’s take one more example of mixed signal information that might hit
closer to home. You’re in the market for a car and you decide to peruse
Craigslist for some options. You see a great option, contact the seller, and
offer twice what the seller is asking. The seller is the only person who knows
how the car really runs, and it’s not that great. The seller laughs all the way
to the bank—what a sucker you were!
But wait, there’s more. You’re an expert in auto restoration, the daughter
of a mechanic, and a lifelong lover of cars. You transform that beat-up car
and sell it for five times what you paid. The information asymmetry, in this
case, put the seller at a disadvantage. The buyer knew more about the “true
value” after all.
The winner’s curse is a cautionary tale about the value of information in
a game. Information asymmetries can be used by either player in a game,
and they can result in a variety of outcomes. Remember, payoffs are uniquely
personal and to specify them correctly takes a lot of understanding on the
part of both players. Granny knew this. It’s why she said, even though you
prepare and think you know everything, “be careful what you wish for, you
might get it.”
Negotiations
One unique area of strategic decision-making is negotiations. Every negoti-
ation is a well-choreographed dance of signals and screens. Like many other
things we’ve discussed, the situation and setup of the game is rather specific,
and classic. You and another player are in a discussion, trying to divvy up
the end result of a cooperative solution. Your choices affect
the other player, and theirs affect you.
You and your potential employer are negotiating your salary. Salary repre-
sents two things: for the firm, it’s the value that you will be able to create
if you work for this company; for you, it’s the value of your time spent
away from doing other things. Both your resources (skills) and those of the
company (assets) are required to create this value (some strategists call this
“synergy”). Given that, how much of the value do you get, and how much
does the company get?
A negotiation is a series of choices that help set up a game. The actual
game is your final choice: to accept or decline an offer. Through negotiation,
you reach a choice set where each player sees a specified outcome. If a mutu-
ally acceptable agreement can’t be reached, you won’t take the job and all
of the value disappears. Both parties move on and try to find an acceptable
agreement somewhere else.
From here, things get tricky. If—and this is a big if—both parties agree
on the amount of value that’s being created, deciding who gets what often
comes down to negotiating from what’s called a “position of strength.” That
usually means you both have a Plan B, something you can or will do if you
don’t get an acceptable outcome from the negotiation. Negotiations coaches
would call this a fallback, the best minimum, a BATNA (Best Alternative to
a Negotiated Agreement, for our negotiations friends).
It’s usually easier to negotiate a good starting salary with a new employer
if you’re already employed somewhere else. Your Plan B, your backup plan,
is to stay where you are. This minimizes negative jollies, especially if income
is your major motivator. Similarly, your potential new employer will usually
have more than one candidate for a job opening. They’re minimizing their
potential downside as well. There’s lots of gamesmanship in negotiating, and
some of it needs to be used carefully.
Suppose you know the other candidate and you know something bad
about her (we’ll let you make the bad thing up). Do you tell the potential
employer? Do you say something in passing like, “I know you’re also talking
to Gladys, and I think it’s wonderful that you’re not holding what happened
at her last firm against her.” That might damage the company’s backup, but
it might damage you as well. You look like a gossip. The employer might not
want to hire either of you.
When a labor union goes on strike because its members want higher
wages, it sometimes forms a picket line around the company entrance saying
“this company is unfair” or “this company abuses its workers.” That might
convince the company to give in to the union’s demands, but it will also
hurt the company’s business if customers don’t want to shop there anymore.
Even if the union wins higher wages eventually, it might be a Pyrrhic
victory: a win that costs the winner so much that it amounts to a loss.
Information sharing in negotiations is particularly tricky, because one form
of mixed signal or bluff can derail the choreography. The important take-
away is this: a negotiation gets two parties to the point of making a decision.
All information shared during negotiation brings players one step closer to
specifying payoffs for everyone involved.
[box]
Our Modern Society: Fake News
One of the more problematic aspects of AI is the ambiguity it creates about
the accuracy of information. Maximizing your outcome in a game requires
accurate information, something not skewed by lies, personal interpretation,
or other biases. Easily created—and easily disseminated—false information,
through the use of AI, chat, media manipulation, etc., makes verification diffi-
cult. Even with direct communication, doubt can cause misinterpretation of
available information.
Strictly speaking, there’s nothing wrong with making a choice based on person-
ality; we do it all the time. But when personality trumps all other sources of
information, a game may be mis-specified and choices might turn out poorly.
[end box]
things to the wrong people at the wrong time. How many people look at an
entire email or text stream before adding to it and sending it to the entire
family or the entire company? In this day and age, do we realize that every
email and text is permanent?
In the next chapter, we’ll contextualize the expression “what you don’t
know can’t hurt you” when we talk about threats. As we know from the
Reply All Conundrum, what someone does know, in particular about your
feelings and opinions, can hurt you. It’s inadvertent because you didn’t mean
to share the information, but you did. It’s out there. And, as we know with
information, you can’t take it back.
Once a sibling or your boss knows what you really think about them, you
can’t play any future game the same way. Keeping some things private is crit-
ical to achieving certain outcomes. This isn’t a lie by omission. It’s keeping
information that would not lead to optimal outcomes out of the game.
Conclusion
Every game-theoretic situation depends on information. You can’t begin to
play unless you know who you’re playing against, what your choices are, and
what the payoffs could be. Furthermore, and usually trickier: you need to
have some sense of the other players’ choices and payoffs so you can leverage
information to your advantage.
The list of expressions that relate to information is long. “You can’t
kid a kidder," means you'll have a harder time successfully manipulating an
information set if the other player is more experienced at doing that very
thing. “Backstabbing” conjures images of people saying one thing but doing
another. Doing things “behind your back” means the same thing; you can’t or
don’t see it coming. Maybe it’s because of the player, maybe it’s their payoffs,
but these strategies are the result of manipulating an information asymmetry
to your advantage.
The discussion of information in game theory also includes many of the
problems that we associate with prejudice and intolerance. When we make
bad assumptions about people based on appearance, culture, age, gender, or
demographic, we find that outcomes are often worse for everyone involved.
This decision-making problem is one that often has people aligning attributes
with different motivations: if someone doesn’t look like me they probably
don’t get jollies the same way as me. This can cause you to mis-specify a
game in a bad way. Maybe you and your much-older aunt like donuts the
same way.
If you play the Prisoner’s Dilemma game with a stranger who looks
different from you, your choice might be the self-interested choice because
you’ve assumed that’s what they will choose. You don’t know them, and they
don’t know you. Human nature says to defend yourself first. When it turns
out they weren’t as prejudice as you were and they made the cooperative
choice, they got the worst possible outcome. Going forward, how do you
think they will play the game the next time they encounter someone like you?
Questions
Learning and understanding game theory is all about figuring out what’s
possible in the realm of human interaction. Arguably, we are born mostly self-
interested. We’ve never met a baby who decided to stop crying just because
mama was a little tired. In practice, we have learned that cooperative behavior
usually leads to better outcomes for everybody. In a game-theoretic sense,
cooperative behavior means players end up better than they would have
playing the game in a self-interested way.
the game. If she’s looking to be recruited, but her teammates are not, she is
really playing a different subgame.
Cooperation is a trickier concept in game theory than it is when we
invoke the word “cooperate” every day. Making a game-theoretic coopera-
tive choice is making a choice that usually goes against any single player’s
best interest unless other players also make cooperative choices. Put another
way, cooperation only leads to good outcomes if everybody does it. To that
end, cooperative games need a fully observable set of payoffs, so everyone is in
the know. It also helps to have a way to enforce cooperation or de-incentivize
noncooperative behavior in some way.
• If one country disarms and the other does not, the remaining country
becomes a world superpower, militarily. The disarmed country’s jollies are
lower, since the dominant country can basically control the rest of the
world.
• If neither country disarms, they are equal superpowers. However, each has
to spend resources to maintain vast nuclear arsenals, and those costs are
reflected in the payoffs.
                                    US
                        DISARM                DON'T DISARM

           DISARM       US: 8 TRILLION        US: 10 TRILLION
                        THEM: 8 TRILLION      THEM: 1 TRILLION
THEM
           DON'T        US: 1 TRILLION        US: 5 TRILLION
           DISARM       THEM: 10 TRILLION     THEM: 5 TRILLION
• If both countries disarm, they again end up with equal jollies. PLUS,
they’re no longer spending on armaments and can put those resources to
greater economic effect. As the payoffs reflect, both nations are better off
than they were when they had a lot of nukes.
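Because the matrix above fully specifies the payoffs, the bullet-point logic can be checked mechanically. Below is a minimal Python sketch (our illustration, not part of the original text; the payoff numbers come from the matrix, everything else is assumed) that computes best responses and finds the pure-strategy Nash equilibrium:

```python
# Disarmament game from the matrix above. Payoffs are in trillions
# (jollies from economic value); higher is better.
# Key: (us_choice, them_choice) -> (us_payoff, them_payoff)
PAYOFFS = {
    ("disarm", "disarm"): (8, 8),
    ("disarm", "don't"):  (1, 10),
    ("don't",  "disarm"): (10, 1),
    ("don't",  "don't"):  (5, 5),
}
CHOICES = ["disarm", "don't"]

def best_response_us(them_choice):
    """US's payoff-maximizing reply to a fixed choice by THEM."""
    return max(CHOICES, key=lambda us: PAYOFFS[(us, them_choice)][0])

def pure_nash():
    """Outcomes where neither country gains by deviating alone."""
    equilibria = []
    for us, them in PAYOFFS:
        us_best = PAYOFFS[(us, them)][0] == max(PAYOFFS[(alt, them)][0] for alt in CHOICES)
        them_best = PAYOFFS[(us, them)][1] == max(PAYOFFS[(us, alt)][1] for alt in CHOICES)
        if us_best and them_best:
            equilibria.append((us, them))
    return equilibria

# "Don't disarm" is the best reply to BOTH of THEM's choices: a dominant strategy.
assert best_response_us("disarm") == "don't"
assert best_response_us("don't") == "don't"

# The only equilibrium is mutual armament at (5, 5), even though mutual
# disarmament at (8, 8) would leave both countries better off.
print(pure_nash())
```

The check confirms the Prisoner's Dilemma structure: "don't disarm" dominates for each side individually, so the game settles at (5, 5) while the better (8, 8) outcome sits out of reach without cooperation.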
or high-altitude enough and the USSR shot one down. Not exactly coopera-
tive behavior on both sides, but we digress). Perfect observation mechanisms
weren’t technologically possible in the 1960s, and neither was a cooperative
solution to the Cold War.
Until…
The US and USSR needed something that couldn’t be shot down; some-
thing that was not in a country’s protected air space. Something like… a spy
satellite: an orbiting camera with the ability to see anything and everything
that looked like a nuclear weapon within a country, but not be shot down.
More importantly, observation could be done by both sides, but at unknown
times. With satellites, each side would assume it was being watched. By the
1980s, both countries had this capability, treaties were signed, and the Cold
War was done. Observation reinforced the trust necessary for the cooperative
solution—the treaty—to work.
The structure of most games is not carved in stone. Rules and other aspects
of the structure can be changed to allow for things like cooperation if and
when that leads to better outcomes. In the case of the Cold War, the game
was restructured so that bluffing wasn’t possible. Once each side could observe
the other, trust was in the cards.
Recall that bluffing is one way to take advantage of an information asym-
metry, for the player who knows more. For cooperation to be possible,
information needs to be shared, transparent, consistent, and available to all
parties.
The Prisoner’s Dilemma, as normally structured, isn’t a cooperative game.
Either player can say that they’ll choose Don’t Confess, but their self-interest
is better served by choosing to confess and they always choose that way. But,
we said this Cold War game was a Prisoner’s Dilemma style game. What’s the
difference? Why doesn’t the ability to observe get Butch and Sundance to the
cooperative solution?
Both Prisoner’s Dilemma and the Cold War games are structured as simul-
taneous games—both players make their choices at the same time. The nature
of the Cold War game in the real world is different because it is a series of
subgames over time. This creates a timing difference that makes long-term
cooperation possible. There’s nothing permanent about the choices Us and
Them make.
If a cooperative treaty were signed that said both countries would stop
building new weapons, it would require years of cheating before the cheater
could take advantage of their additional (secret) weapons. With modern
surveillance technology, that’s (hopefully) impossible to achieve. Even if they
did, there would be a next iteration of the game, a next decision to make and
another after that. Because the game is repeated, it allows players to iterate
to a solution that’s better for everyone. This is consistent with the real world.
We might not trust somebody enough when we first play a game, but by the
tenth round, we are willing to hear them out.
In its usual form, the Prisoner’s Dilemma is a bit more immediate. Imagine
the players are able to observe each other but they still have to choose to
confess or not simultaneously. Not knowing what the other will choose still
leads both of them to confess. The immediacy and finality of the decision
make cooperation difficult.
                               CALLER 1
                    CALLS BACK            WAITS

          CALLS     BOTH GET              TALK
          BACK      VOICEMAIL             MORE
CALLER 2
          WAITS     TALK                  NO ONE
                    MORE                  CALLS BACK
The game has two Nash Equilibria: one player calls while the other player
waits. It doesn’t matter who waits and who calls as long as they don’t do the
same thing. So, the game has two possible solutions. How do we get to one
of them? How do we get to finish our conversation??
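With hypothetical payoff numbers attached (1 jolly if the conversation resumes, 0 for voicemail or silence; the numbers are our assumption, not the book's), a brute-force check confirms the two equilibria:

```python
# Callback coordination game. Assumed payoffs: 1 = the conversation resumes,
# 0 = both get voicemail (both call) or silence (both wait).
CHOICES = ["call back", "wait"]

def payoff(mine, theirs):
    # You only get to talk if exactly one of you calls back.
    return 1 if mine != theirs else 0

def pure_nash():
    """Outcomes where neither caller gains by switching alone."""
    equilibria = []
    for c1 in CHOICES:
        for c2 in CHOICES:
            c1_best = payoff(c1, c2) == max(payoff(alt, c2) for alt in CHOICES)
            c2_best = payoff(c2, c1) == max(payoff(alt, c1) for alt in CHOICES)
            if c1_best and c2_best:
                equilibria.append((c1, c2))
    return equilibria

# Two mirror-image equilibria and no dominant strategy: the players still
# need some rule to pick between them.
print(pure_nash())  # [('call back', 'wait'), ('wait', 'call back')]
```

The enumeration finds exactly the two asymmetric outcomes; nothing inside the game itself tells the players which one to aim for.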
When games have multiple Nash Equilibria but no dominant strategies,
you need to do more to solve the game—to get to one of the equilibria.
Games like these are often called coordination games, and the hardest
thing about them is to find a way to get to the mutually beneficial outcome.
They need coordination. In many cases, people will revert to an under-
standing or a rule for solving the game. Something like “the person who
originally made the call should always call back.” That certainly works as
long as everyone knows the rule and everyone follows it.
These rules emerge when people have what game theorists call a
focal point. This sounds fancy and theoretical, and at some level it is, but
the idea is straightforward. Focal, like focus, means we've found a way of
cutting through the noise and randomness of the game. We have found an
informal rule that works for the players most of the time, the rule is easy to
remember, and the rule often springs from what we call common sense. Fancy
strategists will formalize rules into heuristics or other modes of operation. The
bottom line is this: humans draw on their own preferences and ways of doing
things to settle on one of the equilibria. It is usually something intangible,
some unstated rule, that allows the conversation to continue.
That’s a phrase we say a lot and don’t really think about, but “common
sense” is two really powerful words. Common, meaning shared by most
Punishment
One way to restructure the single-game Prisoner’s Dilemma to make it a
cooperative game is to incorporate a form of punishment. Doing that means
that we have to make explicit changes to the payoff matrix to punish a player
if they confess. Recall the original Prisoner’s Dilemma game in Fig. 5.3:
Suppose both players agree that if one of them confesses and the other
doesn't, the no-snitch will do something to the snitch in jail that would effec-
tively ruin his day (break something that takes a long time to heal). The
punishment would cost the snitch the equivalent of 5 extra years, because…you
know, violence is bad. The payoff matrix would then become Fig. 5.4.
This changes the game. Now, neither player has a dominant strategy. The
game could end up at either “both confess” or “neither confess.” Both of those
outcomes are Nash Equilibria; neither player can unilaterally make himself
better off.
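Both claims (no dominant strategy, and two Nash Equilibria) can be verified directly from the jail terms in Figs. 5.3 and 5.4. Here is a minimal sketch (ours, not the book's); since the payoffs are years in jail, each player minimizes rather than maximizes:

```python
# Prisoner's Dilemma, payoffs in years of jail (lower is better).
# Key: (butch_choice, sundance_choice) -> (butch_years, sundance_years)
ORIGINAL = {  # Fig. 5.3
    ("confess", "confess"): (5, 5),
    ("confess", "don't"):   (1, 10),
    ("don't",   "confess"): (10, 1),
    ("don't",   "don't"):   (3, 3),
}

# Fig. 5.4: the lone snitch is punished, so his 1 year becomes 6.
PUNISHED = dict(ORIGINAL)
PUNISHED[("confess", "don't")] = (6, 10)
PUNISHED[("don't", "confess")] = (10, 6)

CHOICES = ["confess", "don't"]

def pure_nash(game):
    """Outcomes where neither player can shorten his own sentence alone."""
    equilibria = []
    for b, s in game:
        b_best = game[(b, s)][0] == min(game[(alt, s)][0] for alt in CHOICES)
        s_best = game[(b, s)][1] == min(game[(b, alt)][1] for alt in CHOICES)
        if b_best and s_best:
            equilibria.append((b, s))
    return equilibria

# Original game: confessing dominates, so there is a single equilibrium.
print(pure_nash(ORIGINAL))   # [('confess', 'confess')]

# With punishment: no dominant strategy, and a second equilibrium appears.
print(pure_nash(PUNISHED))
```

Changing one number in each off-diagonal cell is enough to break the dominance of "confess" and open up the cooperative "neither confesses" equilibrium.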
Is the game now cooperative? Yes… with one caveat.
Suppose Butch says he won’t confess. If he does what he says and Sundance
also doesn’t confess, they both get 3 years. If Sundance confesses despite what
5 Do Unto Others: Cooperation in Games 89
[Fig. 5.3  The original Prisoner’s Dilemma (years in prison, Butch / Sundance):

                              Butch confesses    Butch doesn’t confess
  Sundance confesses              5 / 5               10 / 1
  Sundance doesn’t confess        1 / 10               3 / 3 ]
[Fig. 5.4  The game with punishment: 5 extra years for a lone snitch (years in prison, Butch / Sundance):

                              Butch confesses    Butch doesn’t confess
  Sundance confesses              5 / 5               10 / 6
  Sundance doesn’t confess        6 / 10               3 / 3 ]
Both players can now get to the cooperative solution by agreeing, upfront
and explicitly, to choose “don’t confess.” The game doesn’t have the observ-
ability that we normally require for a cooperative game, but we’ve restructured
the payoff matrix to make it a coordination game in which direct communica-
tion (just say what you’re going to do) will get both players to the cooperative
outcome.
[Figure: the game with a larger punishment for confessing (years in prison, Butch / Sundance):

                              Butch confesses    Butch doesn’t confess
  Sundance confesses             13 / 13              10 / 9
  Sundance doesn’t confess        9 / 10               3 / 3 ]
[box]
Automatic Punishment
Punishment is important in reaching and maintaining the cooperative
outcome, but dealing out said punishment can be hard to do. Causing pain,
for most people, is itself painful and thus unpleasant. Didn’t Granny say it?
“This is going to hurt me more than it hurts you” implies that punishment
negatively affects a person’s jollies even when they’re the one giving it, not
just the one on the receiving end.
Given this reality, we sometimes try to make the punishment automatic.
We remove the hesitation from the player who needs to punish because
the so-called “human element” might interfere with follow-through. Making
punishment automatic means it is no longer a choice being made by a person
who might be unwilling to “do what has to be done.” It also removes punish-
ment from the punisher’s jollies, since they are absolved from having to
personally make that decision.
In the classic movie Dr. Strangelove, the Soviet Union builds a computer-
controlled, unstoppable “doomsday machine” that would destroy the entire
world if anyone (at the time, effectively meaning the US) ever launched a nuclear attack
against it. The idea: this device would end the arms race because a preemptive
nuclear attack would kill both sides with certainty.
SPOILER: unfortunately for the world, Soviet leaders didn’t tell anyone
about the doomsday machine. Nobody knew until a deranged US general
launched an attack and the machine automatically retaliated, destroying
the world (apologies to anyone who hasn’t seen this truly great
comedy—definitely go see it if you can).
In effect, automatic punishment is what some dog owners might think
of as a shock collar; a device that gives a dog an electric shock every time
the critter barks. The idea is that the dog will learn not to bark as a way
of avoiding the automatic shock. Veterinarians and animal trainers generally
agree that these devices don’t work and might be counterproductive; instead
of controlling a dog, they might make the dog more aggressive. That and Dr.
Strangelove should be cautionary tales: automatic punishments are difficult
elements of a game because they directly affect the outcome but aren’t in the
control of either player.
Removing the human decision-maker from the punishment phase of a
game can backfire. Use of a gun in the commission of a crime (robbing people
at gunpoint) adds mandatory extra prison time in many places. That makes
sense if you’re trying to deter the use of guns, but some people believe that
if the extra time is excessive, it creates a perverse incentive to use the gun:
the criminal might as well eliminate a lone witness and avoid being
caught.
While we won’t wade into this gun control controversy, it’s easy to see how
automatic enforcement is imperfect. Here’s another thought: some countries
have automatic penalties for crimes like murder to avoid even the appearance
of selectively applying harsher punishments to different groups of people.
That is another aspect of humanity that makes punishment harder than it
appears. For punishment to work, to change the decisions of all players and
reach the cooperative outcome, punishment cannot be applied selectively.
If the point of severe punishment is to stop people from making bad choices, the result could
be more cover-ups and inefficiency rather than higher performance.
Bad behavior sometimes has implications that the decision-maker doesn’t
understand. The tricky thing about jollies is that they come from a lot of
different parts of our psyche. “An eye for an eye,” which dates back thousands
of years (Hammurabi, c. 1750 BC), is an early attempt to align the punishment
with the crime. But, as subsequent scholars have said, you end up with a
lot of people without eyes. Explicit and pre-stated punishment is interest-
ingly missing from many of the most famous rules of societal games. The Ten
Commandments and the US Constitution have lots of do’s and don’ts—but
no specific punishments. That’s left to the players to decide.
Realistically, punishment and forgiveness are sometimes not possible, and
making some bad choices even once changes all future aspects of a relation-
ship. Let’s say a professional basketball player snaps at practice one day and
attacks the head coach. The League may punish him with a suspension or a
fine, but that doesn’t mend the relationship. He will probably be traded the
next season.
[box]
Agreements are in place to control output and impose other restrictions that
cost short-run money to improve long-run outcomes. No problem there,
except countries are short-sighted. Large economies will take a larger hit, so
the agreement is there to make sure everyone is incentivized to participate.
Do we also need punishment? This is where nuclear war and greenhouse emis-
sions are a little different. In the long run, sure, they’re both pretty bad,
don’t get us wrong. But, cheating on building nuclear weapons is different
than maintaining fossil-fuel consumption. The punishment for going against
a nuclear treaty might be nuclear war. This is a big deterrent. Punishments
for going against emission standards should be equally powerful: significant
economic sanctions, fines, or other costs. They also need to be larger than the
possible gains from deviating from the treaty (lest the payoff matrix incentivize
noncooperative behavior).
A viable greenhouse gas limitation treaty needs both observation and punish-
ment to succeed.
[end box]
While cooperation is a useful area of discussion in game theory and one of
the most real-world forces we see, one could argue that cooperation defeats
the purpose of game theory. Most games, most of the time, are noncoop-
erative. Having to make a choice without knowing what the other player is
doing is the whole point of game theory, right?
It turns out, some of the biggest, most impactful games need bilateral
cooperation to ensure a good outcome.
Noncooperative games are not malicious. As we have seen, they are not
the same as self-interested, selfish behavior, nor are they unilateral. You’re still
playing with somebody else. So, we think of cooperation as a restructuring
of a game, something with commitments, promises, threats, or other things
that help us approach a cooperative solution in the real world. We’ll examine
these three things more in Chapter 7.
a noncooperator is large, and you will abandon that strategy the first time
somebody acts in a self-interested way.
Mind you, you wouldn’t necessarily become completely self-interested. You
would try to sort the “others” in an attempt to discern who you could form an
alliance with. This might explain the human tendency to join clubs, churches,
and other groups with shared values. Familiarity becomes your method of
sorting and determining who you should trust to act cooperatively.
[box]
The more rigid the cooperative structure, the more this is true. The Amish
lifestyle, for example, is based on cooperative tenets, where community super-
sedes individualism. To choose this lifestyle, members must believe that this
approach will make them at least as happy as other lifestyle choices. It’s one
thing to be told the lifestyle is beneficial; it’s another to make the choice
voluntarily given available information.
Many Amish sects encourage individuals to remove themselves from the local
community and experience the non-Amish world prior to committing to the
way of life. Called the Rumspringa, the experience allows younger members
of the community to take residence elsewhere and live under different rules. If
the individual returns, they are actively choosing to be a part of the institution,
with full information about other available choices.
That the Amish religion is still around speaks to the validity of this approach.
Very few religions take this approach. Most have a rite of passage, formal accep-
tance, or ceremony of indoctrination—sometimes taken at a young age before
informed choice is even possible. They don’t encourage young members to test
drive other religions.
Cooperation, sacrifice, and other words that mean selflessness, trust, and even
faith are associated with group outcomes. Indeed, a lot of our upbringing—
across almost all modern religions and societies—is intended to help us avoid
the noncooperative outcomes we observe in the Prisoner’s Dilemma.
[end box]
Conclusion
Learning to share, to wait your turn, and to play nice with others are all
preschool lessons that train you to act in ways that make society better
off. These skills are meant to improve the collective; they are for the benefit
of the group, not only the individual. Sometimes, the downside of noncoop-
erative behavior doesn’t occur to people until they are on the wrong side of
that behavior.
When someone butts in front of you in line, or hogs all of the toys, or
hits and pushes during playtime, you (hopefully) realize that when everyone
does these things, no one is better off. Good parenting tries to get that
lesson ingrained without the (probably repeated) punishments that follow bad behavior.
Bad behavior in the real world does sometimes win; bullies and crooks don’t
always get punished. But, are those individuals supported by society?
In legal parlance, the goal is to maintain individuals as “functioning
members of society.” Lessons are small, constant, and repeated by many
people. They are also reinforced by many social structures (corporate codes
of conduct, school bylaws, organized religion, legal codes). Reinforcement is
both the presence of a rule and the punishment for not following it. How
well they work, however, depends on the individual.
Granny always had something to say, especially when people evaded
capture or punishment: “you’ll get what’s coming to you.” When we see
noncooperative behavior, sometimes we wish authority were there to mete
out punishment (where’s a cop when you need one?). Granny was convinced
judgment would happen eventually, and this makes sense in a game-theoretic
sense. In our heads, the bad guys can’t win in the long run. Without controls,
we iterate to smaller and smaller payoffs. Society becomes unpleasant and
unstable, and we are guided to suboptimal solutions.
Questions
5.2. Many behaviors that we would think of as cooperative also fall into the
category of politeness. There can be, unfortunately, a bit of a culture
clash when we think about polite behavior. Culture clash is another
word for personal interpretation. Take opening a door, for example.
A man holds a door open for a woman. This was taught as polite
behavior once upon a time, but it might now be considered a bit
dated—especially if he only holds doors open for individuals that he
perceives are female. What if someone who identifies as male wants
to maintain the custom for politeness, without judgment? What could
he do? Think about the information problem(s) with this action,
depending on the perspective of the person he’s holding the door for.
5.3. The ability to observe the actions and decisions of all players is an impor-
tant factor in getting cooperative outcomes. Today, observability is at
an all-time high. Cars, houses (doorbells), stoplights, and retailers are
among the many places where cameras see everything. This new ability
to observe all behavior, cooperative and noncooperative, hasn’t been
sufficient to decrease self-interested or illegal behavior, though. What
else—besides the ability to observe players’ choices—is required to stop
noncooperative social behavior?
5.4. Waiting in line is a classic, yet frustrating cooperative behavior. At
places where wait times are notoriously long (e.g., Disneyland, a ballpark
restroom, the school pickup line), some organizations have devised ways
to have customers pay to skip the line. How is that extra fee similar to a
fine for noncooperative behavior?
5.5. Cooperative behavior is taught from an early age. Sharing and patience
are not innate; they are learned. Are there limits to cooperative behavior?
Are there games in which cooperation does not help the game reach the
best outcome?
6
A Stitch in Time…: Sequential Games
Granny may have grown up on a farm, so she had the best expressions ever.
Take the short, simple saying “you reap what you sow.” Granny was saying
that your future crop is connected to the seeds you plant today.
Well… duh, of course. It turns out this classic bit of cautionary advice
defines sequential games. Good outcome or bad, what you get in the future
will depend on your choices and actions today.
Sequential games have the same features as simultaneous games: two
people making choices that, together, determine their outcomes.
Sequential games simply have a different structure: rather than choose at the
same time, one player chooses before the other. Checkers, chess, Tic-Tac-
Toe, poker, and many other games are sequential. Unlike rock-paper-scissors
or the Prisoner’s Dilemma, sequential games are observable: each player
can see the previous player’s decision before making his own. As we’ll see, this
temporal nature can give some players a unique advantage
or disadvantage.
Sequential games occur as part of the natural order. You spend your entire
life playing games with your future self. The “present you” makes choices that
the “future you” has to reckon with. While we don’t want to make it seem like
every choice you make is in need of a game theory deep-dive, having the tools
and approaches of game setup could help you sort out complex decisions.
[Fig. 6.1  A sequential game in extensive form: Player 1 chooses X or Y; having seen that choice, Player 2 chooses A or B; each of the four paths leads to an outcome.]
(a possibly shorter life). If you never try cigarettes, you really can’t become
addicted to them. The choices of the two players, the young you and the
grown-up you, are inextricably connected. You could start talking about probabilities—you’ll probably be able to quit, don’t have an addictive personality,
etc. Yet, even from a probabilistic standpoint, one thing is true: if you don’t
start smoking, you can’t become addicted to cigarettes.
With that in mind, we can look at a different extensive-form game where
both players are you. As simplistic as it sounds, by looking into your own future
and seeing outcomes that you really don’t want to happen, you can give yourself
a straightforward approach to the decisions you have to make today. Let’s see
smoking in extensive form (Fig. 6.2):
Former First Lady Nancy Reagan’s famous, oft-mocked advice about using
drugs, “just say no,” applies very well to this game. The idea is simple: you
really can’t become addicted to anything if you never try it in the first place.
Note, the outcomes for both versions of “try smoking” are worse than the
only outcome possible from not trying. This is similar to other game setups
where we show one or more very bad outcomes that we simply want to try
to avoid.
In the case of the game above, we’re not including any positive aspects of
smoking. Smoking might make you feel good (chemically), or sophisticated,
[Fig. 6.2  Smoking in extensive form: Today You either tries smoking or doesn’t. If you try, Future You either quits (sunk cost + health problems) or doesn’t quit (ongoing cost + health problems). If you don’t try, you stay cigarette-free.]
or less stressed, and these feelings increase your jollies. This isn’t a simul-
taneous game, so we have to remember there’s a temporal element. These
feelings increase your jollies in the short run. In sequence, we are able to
specify a portion of the payoffs (in this case health outcomes and money)
and still navigate the game.
Sequential games are both useful and a bit misleading. Suppose we’re doing
the game that Nancy Reagan was talking about, but instead of smoking legal
substances, we’re choosing whether to start illegal drugs like heroin or crack
cocaine. We can specify the entire sequential game that you play with your
future self when you consider trying drugs, in terms of another outcome:
careers (Fig. 6.3).
There are better and worse futures in either case. However, there is still
one, unambiguous bad outcome—being a drug addict—that only happens if
you try drugs today. At this point, we aren’t interested in the likelihood or
probability of these outcomes. The point of game theory is to look at payoffs
and, in this case, walk back a path to avoid the worst possible outcome.
From this maximin perspective, it makes sense to avoid the worst possible
outcome by never starting drugs. This logic gets blurred by those who
consider the first step mere “experimenting.” Whatever the rationale behind the
choice to try the drug, it is a choice that changes all possible future outcomes.
An outcome that you almost certainly do not want is now possible. If you
never “experiment” with cocaine, heroin, or crack, you can’t possibly die of
an overdose from the addictive use of those drugs. From a sequential game
[Fig. 6.3  Trying drugs in extensive form: if Today You tries drugs, Future You either quits (poetry professor) or doesn’t quit (drug addict). If you don’t try drugs, you either graduate college (economics professor) or go to trade school (electrician, the highest paid).]
perspective, “just say no” is good advice, simplistic though it is. It removes
the possibility of the worst possible outcome, showing a maximin strategy in
an extensive form.
[box]
As a parent, you want to help your offspring navigate these tricky waters. That’s
easier said than done.
Your teenager wants his own credit card, and you agree to co-sign—become
the default payer of the card—which allows your son to get his first credit
card. This thing is magic. He can swipe or tap to his heart’s content and
quickly spend more than his allowance or after-school job is capable of paying.
What do you, the parent and co-signer, do next? If you let your kid start
missing payments, his credit history will be damaged, and he might be living
in your basement a lot longer than you had planned. If you pay off the card,
he might not learn the difference between a loan and a gift, and he’ll overspend
repeatedly.
So, his choice and yours can affect his future outcomes. The fun part of this
game—and all games in the parenting space—is that you’ll both have different
outcomes, and you have to navigate the difference between money and jollies.
Spoiler Alert: we’re not solving this game for you! Everyone plays this game
differently, and there isn’t a “right” answer. Indeed, later in life, your kid will be
playing games like this with you. Adult children trying to deal with adult
parents have just as many complex conflicts. Maybe Granny didn’t say it, but
“it’s hard to raise your parents.”
[end box]
but he lays out a clear choice. There are times we have to walk away from a
no-win situation.
Sequential games do get trickier when you’re playing with another person,
in part because the final outcome is not as simple as you win or you lose. In
sequential games, potential outcomes are dependent on both players’ choices.
Depending on the circumstances, it may depend more on who goes first.
Playing Tic-Tac-Toe? We prefer to go first. The source of the first-mover
advantage in Tic-Tac-Toe is less strategic than it is numerical: there are
nine squares, and the first mover gets to put their mark in five of them.
We’ve heard it said that you have to be
really bad at sequential games to go first in Tic-Tac-Toe and lose. Try it, let’s
see.
If there’s a distinct advantage to go first in a sequential game, it has a first-
mover advantage. If there’s benefit in waiting and seeing what happens first,
there’s a second-mover advantage. Let’s take a look at a few examples.
[Fig. 6.4  The soda-innovation game (payoffs Coke / Pepsi):

                         Pepsi: diet soda    Pepsi: flavored soda
  Coke: diet soda          -$1M / -$3M          $15M / $5M
  Coke: flavored soda      $10M / $6M           -$2M / -$4M ]
Company 1 chooses one innovation, then Company 2 chooses the other, and
voila, they get to one of the solutions.
Here’s the problem with that simple approach: Coke and Pepsi both want
to go first. This game has what’s called a first-mover advantage—and you
can see it in the payoff matrix. Both companies prefer diet soda over a new
flavored soda, as long as the other company chooses the opposite. Likely, it’s
because the market for diet sodas is much larger.
Let’s show the first-move advantage with two sequential decision trees. If
Coke goes first, it looks like Fig. 6.5.
If Pepsi goes first, it looks like Fig. 6.6.
But… the outcomes are the same, right? Not really. Remember, in a sequential
game each player looks at the final outcomes and then at the possible paths
to get there.
If Coke goes first, it can see its best outcome is $15 M, and this is achiev-
able if Coke chooses Diet first. Pepsi sees that its only possible outcomes
given Coke’s choice are $5 M or −$3 M. It will choose flavored, and that
gets Coke its $15 M.
If Pepsi goes first, its best possible outcome is $6 M, which would be
possible only if it chooses diet. After that first move, Coke’s only possible
outcomes would be $10 M or − $1 M. Coke would choose Flavored, and
Pepsi gets its $6 M.
If Coke goes first, it ends up with $15 M and Pepsi ends up with $5 M.
If Pepsi goes first, it ends up with $6 M and Coke gets $10 M. In this
[Figs. 6.5 and 6.6  Decision trees for the soda game: one with The Coca-Cola Company choosing first and Pepsi Co. responding, the other with Pepsi Co. choosing first; payoffs as in the matrix above.]
game, with these payoffs, the player that goes first can steer the outcome
to its advantage. Incidentally, this lowers the payoff of the other company,
which is consistent with competition in the soft drink market.
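That reasoning, each leader looking ahead to the follower’s best reply, is backward induction, and it can be sketched in Python (the payoffs, in $M, are the ones from the soda game; the function is our own illustration, not the book’s):

```python
# Backward induction in the soda game. Payoffs in $M stored as (coke, pepsi).
payoffs = {
    ("diet", "diet"):         (-1, -3),
    ("diet", "flavored"):     (15, 5),
    ("flavored", "diet"):     (10, 6),
    ("flavored", "flavored"): (-2, -4),
}
CHOICES = ["diet", "flavored"]

def play(leader):
    """Leader picks the move whose anticipated follower reply pays the leader best."""
    best = None
    for first in CHOICES:
        if leader == "coke":
            reply = max(CHOICES, key=lambda p: payoffs[(first, p)][1])
            outcome = payoffs[(first, reply)]
            leader_pay = outcome[0]
        else:  # Pepsi leads
            reply = max(CHOICES, key=lambda c: payoffs[(c, first)][0])
            outcome = payoffs[(reply, first)]
            leader_pay = outcome[1]
        if best is None or leader_pay > best[0]:
            best = (leader_pay, first, reply, outcome)
    return best

print(play("coke"))   # leads with diet, Pepsi answers flavored: outcome (15, 5)
print(play("pepsi"))  # leads with diet, Coke answers flavored: outcome (10, 6)
```

Whoever moves first grabs diet soda and the bigger payoff, which is the first-mover advantage in one screenful.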
When the rules of a simultaneous game are not pre-established or very
clear, we often experiment with timing. “What do you want for dinner?”…
“You go first!” We try to play the game sequentially instead. In some cases, it
takes another rule or requirement, like who goes first, to make this a useful
way to get to a solution.
Second-Mover Advantages
It doesn’t surprise people, based on experience with Tic-Tac-Toe or many real-
world situations, that there are first-mover advantages in sequential games.
Are there times when it’s better not to go first? We could simply make up a
sequential game that has a second-mover advantage and prove it theoretically,
but the point of this book is to find real-world examples of game theory.
It’s 1989. You live in the Midwest. It’s a SNOW DAY, and you’re oh so
happy. Reading Rainbow is done, and now you’re in front of the TV for
some midday programming. A familiar face appears and says to the audience:
“come on down!” Life is good.
A familiar second-mover advantage (in this case, last-mover advantage)
from US television is in the game show “The Price is Right.” Contestants
will win a prize (a new jet ski! Because that’s so useful!) if they best guess
the price of the showcase prize: whoever is closest without going over wins.
There are three contestants. You can hardly contain your excitement as you
pour another bowl of Fruity Pebbles.
The first person guesses $500, which you’re sure is too low. The second
person guesses $900, and you’re sure that’s also too low, but getting closer.
Here comes the third person’s guess, and it’s… yep, $901. The last contestant
has an advantage because they can simply guess $1 higher than the highest
previous guess. They get all the benefit of the second person’s guess, but now
contestant 2 can’t possibly win.
A few things about this game. First, it’s zero-sum. There is one winner
and two losers. When there is uncertainty in a game, going last allows you
to decide with a little more information than the first person had available
to them. The first mover, roughly speaking, has no idea what range to set,
how high to guess, or even the relative value of a Jet Ski. The last mover now
has a clear range, and he knows exactly how to beat at least one of the other
players.
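The last mover’s tactic fits in a few lines. A minimal sketch (simplified: it ignores the chance that the $1 top-off itself goes over the true price):

```python
# "Closest without going over": the last mover tops the field by $1.
def last_mover_bid(previous_guesses):
    """Bid $1 above the highest earlier guess, knocking that guesser out."""
    return max(previous_guesses) + 1

def winner(guesses, true_price):
    """Index of the guess closest to the price without going over (None if all bust)."""
    valid = [(true_price - g, i) for i, g in enumerate(guesses) if g <= true_price]
    return min(valid)[1] if valid else None

guesses = [500, 900]
guesses.append(last_mover_bid(guesses))  # the third contestant bids 901
print(guesses)                 # [500, 900, 901]
print(winner(guesses, 1200))   # 2: the last mover wins
print(winner(guesses, 900))    # 1: if the price is exactly $900, the tactic busts
```

Going last converts the other players’ guesses into free information, which is the whole second-mover advantage.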
[end box]
We know that our choices and decisions require some thought process; an
ability to work through all of the choices, payoffs, and iterations that lead to
your best possible outcome. If you’re making choices in your home, generally
relaxed, you probably make better choices. It’s not just that you make more
cooperative or better-informed choices with more information, it’s that you
process things better in a comfortable situation.
When you’re in public, at a gathering of professionals who are mostly
strangers, or in an art museum, or at a concert, you are, quite literally, a
different person. You are subject to peer pressure, noise, discomfort, and other
external forces. You make choices that often, in retrospect (or “when you get
home”) can be seen as incorrect or inconsistent. These choices might not
have been optimal, or they didn’t fit the model of your typical choice. Peer
pressure, in particular, might be thought of as a jolly killer. You find your-
self doing things because the people around you are doing them, not because
you actually want to do them. This is best thought of as a “jollies conflict.”
The desire to be, or feel, accepted is real, but it’s also often in conflict with
other sources of happiness. One could imagine that a person makes different,
perhaps even better choices when they are able to make their choices on
“home turf.”
A good example of this difficult situation can be seen when a person goes
away—as in, away from their hometown—to college. The short-run desire
to feel accepted frequently blurs the long-run consequences of a student’s
choices. Sports fans think of home field advantage as the impact of home
comfort on players. Parents think of home field advantage as a reason to keep
kids close to home: things are more familiar and easier to control, with more
information and a shorter processing time.
[box]
Order Matters
The sequential reality of life, what we might think of as historical reality,
makes us realize that order matters. Here’s a colorful example: The Beatles’ first
number one hit in the US was “I Want to Hold Your Hand,” which topped the charts in 1964.
It’s a bouncy, catchy, non-threatening song in every way and arguably, it helped
to cement rock and roll as a musical form. Jump ahead nearly thirty years to Sir Mix-a-Lot’s 1992 hip hop hit “Baby Got Back,” and you’re tempted to say popular
music has come a long way. Granny might have some strong opinions as well.
The bands that came before and pushed the limits of censorship paved the way
for today’s artists. Question: would it be possible to reverse the order? Imagine
the moral outrage, the parental protests, radio stations being shut down, etc.
if “Baby Got Back” had come first. As it is, there were protests against some
songs and some performers, but “I Want to Hold Your Hand” being sung on
the Ed Sullivan Show allowed us to, three decades later, hear “Baby Got Back.”
In sequential games, even esoteric ones, order matters.
[end box]
them could get hurt, our strategy will fail, and we will have lost the season in
the short and the long run.” Another could say “bring up our best prospects
today, train them up, and have them ready for the World Series.” A third
coach could say “bring the prospects in today, but don’t use them in the
championship.” Whatever the approach, there is a connection between the
choices today and the long-term success of the team.
Economist John Maynard Keynes once said, “In the long run we are all
dead.” He was saying that economic policies whose payback was expected in
the very long run wouldn’t be beneficial, because it was just too far in the
future to matter to everyday people. He made the remark in a 1923 discussion
of monetary policy, but the idea is important
for all sequential games.
There is a distinction between repeated games, which could go on forever,
and sequential games, which we think of as having a last period. The
approach that we used for solving a sequential game assumed an endpoint.
Chess, checkers, Tic-Tac-Toe, and other sequential games end. You can say
that the end is far in the future, or even uncertain. Still, there is an end—and
its presence changes how you play the game.
• The only reference to the future is the number “nine.” The rest we have to
infer.
• The only way to measure the magnitude of the downside is to compare the
number of stitches—one versus nine.
[Figure: “A stitch in time” in extensive form: Today You either fixes the small tear today (no tear to repair) or leaves it alone; left alone, the tear later needs nine stitches, or fewer than nine.]
• How long is “long?” The expression doesn’t give us a way to know how
long this process takes. Will the larger tear happen in hours, days, months,
or longer?
should add up to big changes in the long run. The big ship will course-correct
slowly, but effectively.
Because the future is so long run, and because the changes are unbelievably
small, it is hard to observe whether the decisions are right or wrong. Some of
them may actually be perceived as “the wrong thing to do” in the short run.
Because decisions are happening frequently and their results compound, the
process of “righting a big ship” is inherently uncertain and risky. In the case
of corporate development, it’s the reason stakeholders want to see a strategic
plan. The plan helps everyone see the final direction of the ship, removing
some uncertainty and anxiety around incremental decisions.
In the US, legal codes include regulations that limit a decision-maker’s
ability to cause harm under uncertainty. The obvious reason is to save lives
and prevent injuries. The less obvious one is that suing a company, or even
a person, for causing harm that they should have been able to avoid assumes
that the victims (or their survivors) will get monetary compensation for the
harm. Companies and people have an escape clause from paying damages:
they can declare bankruptcy. Even without that, the decision-maker that
caused the harm might not have enough money to pay for the damages they
caused. As with other rules, societies can attempt to control decision-making.
Without adequate enforcement, these rules aren’t always as effective as people
choosing to be responsible.
the road, she would say, “just cross that bridge when you come to it.” She’s
telling you to focus more on the immediate choices and their outcomes, the
short-run decisions. The hope is that payoffs of later games—and therefore
their correct choices—become clearer after each step.
In a complicated, chess-like game, you are seeing a path to your desired
outcome which has twenty steps or choices over time. One of those, step
seventeen, requires you to cross a rickety old bridge that might collapse when
you’re halfway across it. That fear of Choice 17 might stop you from making
Choice 1.
Here’s where Granny will really bake your noodle: focusing on a problem-
atic choice seventeen steps down the road is a mistake because you might
never have to make it. Somewhere in choices 1–16, the possibility of crossing
the rickety bridge could go away. You should start playing the game, make
your best choices, and deal with the bridge problem when you’re more certain
that you have to cross it, IF you have to cross it at all.
Suppose when you start seeing things more clearly, your desired outcome
takes you down a path that means you will have to cross that bridge and you
are certain the bridge will collapse. Given all of that certainty, you know you
won’t make it to the end if you start down that path. This is when you follow
Plan B: look for the next best outcome and find your new path. Reaching the
end of the game alive beats marching down a path that ends in certain death.
Granny had a term for this too. She said, “that’s a fool’s errand,” or maybe
something about snowflakes in hell. The decision-making problem involving
the rickety bridge comes down to your level of certainty about your ability to
ever make it down that path and come out on the other side.
Conclusion
Life is a sequential game; there…we’ve said it.
While there are many sequential games involving more than one player,
games like poker, chess, and Tic-Tac-Toe, you will play sequential games with
your future self more than you play with anyone else. The ability to think
ahead is a lot of what Granny was trying to get you to do for your own sake.
Somewhat ominous proverbs like “as the twig is bent, so grows the tree”
are warning us that actions today will change our future in ways that we
might not like. Sequential games are not simply the outcome of experience,
learning, or other actions. They are forks in the road; situations where one
choice changes future choices permanently. In a sequential game, each choice
eliminates some possible futures and adds others.
6 A Stitch in Time…: Sequential Games 119
Questions
6.1. Sequential games depend on time and order. In that sense, they are
different from repeated plays of simultaneous games. We’ve said that
rules (like who goes first) are imposed by players and can usually be
changed if it leads to better outcomes. Discuss how making a simulta-
neous game sequential is like making a commitment in a simultaneous
game.
6.2. Societal and cultural “norms” are often just rules or approaches to games
that were determined once upon a time and have become ingrained in
daily life. They are expectations, modus operandi, “accepted” ways of
doing things. Life is a sequential game. Discuss the “rule-writing” part
of a society as a sequential game. Can you show how some injustices
(unfavorable outcomes) or progress (favorable outcomes) are the result
of who got to have a say first?
6.3. Going first in Tic-Tac-Toe is considered an advantage because there are
nine possible places to put your X, and the first player will have a chance
to occupy five of the nine spaces. This would lead one to believe that
anytime the first player has more total choices than the second, there is
a first-mover advantage. If you always go first, are you always going to
win? Not necessarily… When is having the opportunity to make more
choices not an advantage?
6.4. When a sequential game is thought to have a first-mover advantage, we
sometimes randomize the choice of who gets to go first. For example,
in the game of (American) football, one team will start the game by
kicking the ball to the other team and the “receiving” team will therefore
have the first opportunity to score points. This switches in the second
half. The decision about which team will kick and which will receive is
determined by a coin flip where the winner of that random event gets
to decide if they will kick or receive.
Frequently, the winner of the coin flip decides to kick the ball and
defer the opportunity to receive the ball until the second half. Why
would that team give up the first-mover advantage of being able to score
first? Might there be a different first-mover advantage in the second half?
6.5. The rollback approach to decision-making in sequential games is an
important differentiator from simultaneous games. You still can’t guar-
antee your best possible outcome; the other player’s choice might
preclude it. In most cases, you can make choices that avoid the worst
possible outcomes when the sequential game is with your future self.
Sequential games allow someone to make a Maximin-type decision
without having to convert the game into jollies. Given that, could a
person make a Maximin-type decision in a simultaneous game that’s
played with her future self?
                        A
                 X                Y
           A: 2             A: 1
     X     B: 1000          B: 1
B
           A: 5             A: 2000
     Y     B: 5             B: 3
At a quick glance, the game has two Nash equilibria. If the game is played
simultaneously without cooperation and with each player only interested in
maximizing their own outcome: B gets 1000 and A gets 2, OR B gets 3 and
A gets 2000.
What can we infer from the payoff matrix? For starters, the country is
Capitalist. Outcomes in any quadrant are lacking a lot in terms of “fairness,”
or what some people call a balanced income distribution. There are very rich
people and very poor people. In addition, there is no way to solve this game
beyond identifying the two possible Nash equilibria.
Can we impose a rule to guide us to a single outcome? Well, we could let
somebody choose first, make the game sequential. Turns out, the game has a
first-mover advantage. The group in society that chooses first always ends up
better off.
Suppose the prevailing culture says whenever a game with multiple Nash
equilibria can’t be solved straightforwardly, both players revert to a Maximin
strategy. A and B should each avoid the choice that could give them their worst
outcome: B would avoid X and A would avoid Y. The Maximin solution
would lead to the outcome where B gets 5 and A gets 5. Let’s call this the Marxist
outcome. This is the second-best outcome for both players: it is fair and
balanced, and also worse than each player’s best outcome.
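Both claims, the two Nash equilibria and the Maximin fallback, can be checked mechanically. Here is a quick Python sketch (the code and helper names are ours, purely illustrative; the payoffs are taken from the matrix above):

```python
# Payoffs for the game above, indexed by (B's choice, A's choice).
# Each cell holds (payoff to A, payoff to B).
payoffs = {
    ("X", "X"): (2, 1000),
    ("X", "Y"): (1, 1),
    ("Y", "X"): (5, 5),
    ("Y", "Y"): (2000, 3),
}

def is_nash(b_choice, a_choice):
    """A cell is a Nash equilibrium if neither player gains by deviating alone."""
    a_pay, b_pay = payoffs[(b_choice, a_choice)]
    other_a = "Y" if a_choice == "X" else "X"
    other_b = "Y" if b_choice == "X" else "X"
    return (a_pay >= payoffs[(b_choice, other_a)][0] and
            b_pay >= payoffs[(other_b, a_choice)][1])

nash = [(b, a) for b in "XY" for a in "XY" if is_nash(b, a)]
print(nash)  # [('X', 'X'), ('Y', 'Y')] -- the two equilibria in the text

# Maximin: each player picks the choice whose worst case is least bad.
a_maximin = max("XY", key=lambda a: min(payoffs[(b, a)][0] for b in "XY"))
b_maximin = max("XY", key=lambda b: min(payoffs[(b, a)][1] for a in "XY"))
print(a_maximin, b_maximin)       # A avoids Y, B avoids X: X Y
print(payoffs[(b_maximin, a_maximin)])  # (5, 5): the fair, balanced outcome
```

Running this confirms that self-interested play lands on one of two wildly unbalanced equilibria, while the Maximin convention picks out the balanced cell.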
How could we structure the game, make rules, that would allow for
something that feels fair, but doesn’t give up so much of our potential payoff?
What if the game is sequential, but repeated, and the first move alternates
between the two players? This would allow each player to get their best
outcome every other play. Over the course of time, several iterations, that
would increase the average payoff considerably over the maximin result.
This should lead every society to take this approach, right? Maybe. The
win–win result depends on two critical things. First, the alternating first move
needs to be immutable and enforceable. Second, the amount of time between
plays can’t be too long. If the time between each round is, say, a hundred
years, the first-mover advantage will return. Human beings aren’t incentivized
to make decisions that make them worse off in the short run for a long run
they might never see.
We leave it to those interested in politics and economics to use this result
to explain why long-term income imbalance tends to be the norm in many
capitalist countries. The shorthand is, being “first” usually means controlling
the means of production (once upon a time, land), and no society has figured
out a way to “alternate” that advantage given the human lifespan.
7
I Solemnly Swear…: Promises,
Commitments, and Threats
Commitments
We usually associate commitments with simultaneous games and promises
with sequential games. This often means a lot more to academics than it
does to normal people, so let’s break it down logically. In the context of
decision-making, a commitment is information we share before players
make choices. A commitment happens in what we call the “pre-game” of
a simultaneous game. It’s information, known to both parties, that changes
how players choose at the same time.
A commitment is made to push the other player to change their perceived
choice. If all goes well, the person making the commitment ends up better off
than they would have been without the commitment. Why does this matter?
A commitment needs to be credible to effect change. Think of it this way: if a
person makes a commitment to something that clearly makes them worse off,
you’d second guess them. Why would they do it? A commitment is credible
if and only if it changes the outcome of the game so the person making the
commitment gets more jollies. It should convince the other player to make a
choice that they would not otherwise have made.
Let’s look at a game involving two stores: Jane’s store, the larger of the two,
and John’s store. Both stores are trying to determine if they should cut their
prices to capture a larger share of the market. While getting more customers
is nice, they both would be better off if they both kept their prices (and thus
their profit margins) high. Here’s the payoff matrix for this game:
If you play the game in Fig. 7.1 simultaneously, without cooperation, and
with full information (everyone knows everything), Jane’s and John’s both cut
their prices.
The folks at John’s Store will see that if the owner of Jane’s Store cuts prices,
John’s Store will get either $3 M or $1 M. $3 M is better, so John’s owner
will cut prices. If Jane’s owner doesn’t cut prices, John’s owner will get either
$4 M or $5 M. $5 M is better, so the owner of John’s Store won’t cut prices.
The owner of John’s Store does not have a dominant strategy.
Jane’s owner sees that if John’s owner cuts prices, Jane’s Store will get either
$30 M or $25 M. $30 M is better so Jane’s owner cuts prices. If John’s owner
doesn’t cut prices, Jane’s owner will get either $54 M or $50 M. $54 M is
better so again, the folks at Jane’s store will cut prices. Jane’s owner has a
dominant strategy to cut prices.
Jane’s has a dominant strategy to cut prices. John’s does not. However,
with full information, the folks at John’s know that the owner of Jane’s will
cut their store’s prices, so John’s owner simply has to make the choice that’s
best in that case. John’s too will cut prices.
                           JOHN’S STORE
                     CUT PRICES            DON’T CUT PRICES
                  John’s: $3 million       John’s: $1 million
   CUT PRICES     Jane’s: $30 million      Jane’s: $54 million
JANE’S STORE
                  John’s: $4 million       John’s: $5 million
   DON’T CUT      Jane’s: $25 million      Jane’s: $50 million

Fig. 7.1 Jane’s and John’s pricing game
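The reasoning above (Jane’s has a dominant strategy, John’s best-responds to it) can be sketched in a few lines of Python. This is our own illustration, not the book’s; the dictionary layout and function names are ours:

```python
# Fig. 7.1 payoffs, indexed by (Jane's choice, John's choice), in $ millions.
# Each cell holds (Jane's profit, John's profit).
payoffs = {
    ("cut", "dont"): (54, 1),
    ("cut", "cut"): (30, 3),
    ("dont", "cut"): (25, 4),
    ("dont", "dont"): (50, 5),
}
CHOICES = ("cut", "dont")

def dominant(player):
    """Return a player's dominant choice, or None if they don't have one."""
    for mine in CHOICES:
        other_mine = [c for c in CHOICES if c != mine][0]
        if player == "jane":
            better = all(payoffs[(mine, theirs)][0] > payoffs[(other_mine, theirs)][0]
                         for theirs in CHOICES)
        else:
            better = all(payoffs[(theirs, mine)][1] > payoffs[(theirs, other_mine)][1]
                         for theirs in CHOICES)
        if better:
            return mine
    return None

print(dominant("jane"))  # cut  -- Jane's dominant strategy
print(dominant("john"))  # None -- John has no dominant strategy
# With full information, John best-responds to Jane's certain "cut":
print(max(CHOICES, key=lambda c: payoffs[("cut", c)][1]))  # cut
```

The output matches the walk-through: Jane’s cuts no matter what, and knowing that, John’s cuts too.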
As with other games, we see that this one ends up somewhere we could
call suboptimal: both players could be better off, but the equilibrium didn’t
land there. In previous games, we’ve fixed this by allowing somebody to go
first (making a game sequential). We could also employ a commitment.
Suppose, before the game is played, the owner of Jane’s makes a commit-
ment to not cut their prices. Maybe they call the owner of John’s and state it
outright, or maybe they imply it at a community meeting somehow. Given
the setup of this game, the owner of John’s store is effectively playing “follow
the leader,” so Jane’s owner’s commitment would make John’s owner switch
choices; they would not cut prices at John’s store. The game ends with neither
store cutting prices, and both owners are better off.
Our first criterion for a commitment is met: the commitment in this case
is credible. So is the second: the player making the commitment is better
off because that strategic move convinced the other player to change their
choice. As we mentioned earlier, there are some important details that need
to be worked out. For example, there needs to be a way for the owner of
Jane’s Store to communicate this commitment in an effective way.
Students of business law are ready to jump up at this point. Legally, Jane’s
store can’t announce publicly that they won’t cut their prices. In the US, a
formal announcement of price setting is considered collusive or anti-trust
behavior. So, stores get around it by being more subtle: “we are holding the
line on prices.” This is an important part of the story, and it leads us to a third
criterion: a commitment must also be observable. The point of a commitment
is to change the behavior of the other player in a simultaneous game.
Threats
On its own, a threat is also a tactic employed before the first round of a game.
Where commitments are made before a simultaneous game, threats work best
in a sequential game. Before a game begins, threats are made by the second
mover to change the first player’s choice: “If you choose x, I’ll choose y.”
Let’s look at a bigger example:
While this isn’t the most fun game in the book, let’s see how a threat can
help solve what we call the Mugger Game. Suppose you’re walking through
the park and a stranger walks up to you and demands your iPod Shuffle. Also,
it’s 2009. Let’s call you, owner of said Shuffle, the “muggee.” Perhaps to mini-
mize his chance of escaping, the mugger says, “if you scream, I’ll hurt you.”
This is a threat intended to change your behavior. The mugger’s idea is simple:
you don’t want to get hurt, so he threatens to hurt you and takes the Shuffle.
The formal version of this game is shown in Fig. 7.2.
This game, by its nature, is sequential. The “muggee” will choose whether
to scream first, and then the mugger responds. And, yes, you can analyze
the game with a traditional payoff matrix instead of in tree form, the way
we usually show sequential games. The matrix actually has advantages in this
case.
In this game, neither player has a dominant strategy, and the game has two
Nash equilibria: scream and get hurt (upper left quadrant), or don’t scream
and don’t get hurt (bottom right quadrant). When we discussed games like
                        MUGGER
                  HURT            DON’T HURT
              Mugger: 30        Mugger: 10
   SCREAM     Muggee: -6        Muggee: -4
MUGGEE
              Mugger: 20        Mugger: 50
   DON’T      Muggee: -8        Muggee: -3
   SCREAM

Fig. 7.2 The Mugger Game
this earlier, we said there usually needs to be a way to get to one of the
equilibria—and the mugger’s threat might be just the ticket. If the threat
works, the mugger—the threat issuer—and the muggee are better off! Like
the commitments that we talked about earlier, a threat is said to be credible
if it leads to an outcome that makes the player issuing the threat better off.
In this case, it’s a win–win.
Well, not really. Because… you know… felony theft.
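The two equilibria, and the fact that the threat steers the game to the one both players prefer, can be verified directly. A minimal Python sketch (ours, with illustrative names; payoffs from Fig. 7.2):

```python
# Fig. 7.2 payoffs, indexed by (muggee's move, mugger's move).
# Each cell holds (muggee's jollies, mugger's jollies).
payoffs = {
    ("scream", "hurt"): (-6, 30),
    ("scream", "dont_hurt"): (-4, 10),
    ("dont_scream", "hurt"): (-8, 20),
    ("dont_scream", "dont_hurt"): (-3, 50),
}
MUGGEE = ("scream", "dont_scream")
MUGGER = ("hurt", "dont_hurt")

def is_nash(me, mu):
    """Neither player can gain by deviating alone."""
    e_pay, u_pay = payoffs[(me, mu)]
    alt_me = [m for m in MUGGEE if m != me][0]
    alt_mu = [m for m in MUGGER if m != mu][0]
    return e_pay >= payoffs[(alt_me, mu)][0] and u_pay >= payoffs[(me, alt_mu)][1]

nash = [(me, mu) for me in MUGGEE for mu in MUGGER if is_nash(me, mu)]
print(nash)  # [('scream', 'hurt'), ('dont_scream', 'dont_hurt')]
```

Of the two equilibria, (don’t scream, don’t hurt) pays both players more (-3 beats -6 for the muggee; 50 beats 30 for the mugger), which is exactly where the threat pushes the game.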
We still have things to establish, like how can you know the payoffs of
the other player (described in the setup as a stranger) in a game like this.
How can you know that the threat is, in fact, credible? If all goes well, the
threat gets the game to the best of the Nash equilibria for both players. Not
coincidentally, this threat can be issued as a promise, “if you don’t scream, I
won’t hurt you,” with the same result. In most cases, a threat has an implicit
promise associated with it and a promise, as we’ll see later in this chapter, has
an implicit threat tied to it.
enforcer (a human judge with a family and a career putting criminals in jail)
were not aligned.
Every parent knows about this problem. An effective threat requires an
understanding of how costly exacting the punishment will feel to the punisher.
In coparent households, for example, you never know who will mete out the
punishment in each case. As soon as a kid learns she can get ice cream from
Daddy even when Mommy says no, all bets are off.
[box]
You tell junior to turn off the computer and do his homework or else you’ll
take away his computer time completely. He might well test the credibility of
that threat with a small pushback: “Can I just finish this one game?” or “Just
five more minutes, I’m almost done.” What Granny warned you was, “If you
give an inch, they’ll take a mile.” Not enforcing a threat even a little bit shows
the other player that you aren’t (entirely) credible.
Threats, like rules, have exceptions. The cautionary expression about inches
and miles is really about threats and credibility. Credibility isn’t always lost
all at once. Sometimes, it’s chipped away over time. Little Junior is surprised
when he gets five extra minutes the first time; but after a while, he won’t even
take off his headphones to hear your threat.
[end box]
credible; this mugger is going to hurt you either way. So, if you encounter a
Type B mugger, you should scream.
There is no way to know which type of mugger you’ll encounter. There
is no accurate screen. Statistically speaking, let’s say 99% of muggers are the
friendly, nice Type A.
The Type A mugger (hurts you only if you scream):

                        MUGGER
                  HURT            DON’T HURT
              Mugger: 30        Mugger: 10
   SCREAM     Muggee: -6        Muggee: -4
MUGGEE
              Mugger: 20        Mugger: 50
   DON’T      Muggee: -8        Muggee: -3
   SCREAM

The Type B mugger (hurts you either way):

                        MUGGER
                  HURT            DON’T HURT
              Mugger: 30        Mugger: 10
   SCREAM     Muggee: -6        Muggee: -4
MUGGEE
              Mugger: 60        Mugger: 50
   DON’T      Muggee: -8        Muggee: -3
   SCREAM
Unfortunately for you, you believe that any mugger that wears blue jeans
is a Type B. To you, jeans signal somebody mean, cruel, and an overall worse
human being. Half of all muggers wear blue jeans. So, your prejudice will
make you scream half the time and lower your jollies, for no good reason.
In reality, there is no good screen for a Type B mugger. Since Type Bs
only comprise 1% of the mugger population, the numbers say you should
never scream, and you will almost never get hurt. Your prejudice led you to
more negative jollies.
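The cost of the jeans prejudice can be made concrete with an expected-jollies calculation. This short Python sketch is our own illustration; the per-type payoffs follow from the matrices above when each mugger type plays his best response:

```python
# Muggee's jollies, assuming each mugger type plays his best response:
# Type A (99%): hurts only if you scream  -> scream: -6, don't scream: -3
# Type B (1%):  hurts either way          -> scream: -6, don't scream: -8
P_TYPE_B = 0.01

def expected_jollies(p_scream):
    """Expected payoff when you scream with probability p_scream,
    independent of the mugger's (unobservable) type."""
    type_a = p_scream * -6 + (1 - p_scream) * -3
    type_b = p_scream * -6 + (1 - p_scream) * -8
    return (1 - P_TYPE_B) * type_a + P_TYPE_B * type_b

print(round(expected_jollies(0.0), 3))  # never scream: -3.05
print(round(expected_jollies(0.5), 3))  # jeans prejudice, scream half the time: -4.525
print(round(expected_jollies(1.0), 3))  # always scream: -6.0
```

Never screaming gives the best expected payoff; screaming on an irrelevant cue like blue jeans costs you almost 1.5 jollies per mugging, for no good reason.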
All this said, if you find yourself repeating this game, we should talk.
Funnily enough, the night before he came to see you, your (now former)
employee’s Granny listened to his plan and said, “be careful what you ask for,
you might get it.” She knew that he should delve a little deeper before issuing
a threat he couldn’t take back.
Promises, Promises
Most people think that the difference between a promise and a threat is just
the tone of your voice. It’s more like they’re two sides of the same coin.
Game theorists try to differentiate promises and threats more explicitly by
focusing on the person receiving the strategic move. Threats are trying to deter
a person from making a choice. Promises are trying to compel the person to
do something. The difference can be subtle.
You threaten your teenager by saying “if you don’t clean up your room, you
can’t go out with your friends today.” The promise would be “if you clean up
your room, you can go out with your friends today.” Tone of voice notwith-
standing, we see the mirror image aspect here. When you say “if you don’t
clean up your room you can’t go out with your friends,” you are implicitly
promising that you can go out with your friends if you do clean up your
room.
Similarly, if you promise your 6-year-old, “if you eat your vegetables, I’ll
take you to get ice cream,” you are implicitly threatening “if you don’t eat
your vegetables, I won’t take you to get ice cream.” Aha, here’s a fun
difference: a promise allows the player to suggest future outcomes that the
player receiving the promise didn’t know about or didn’t think about. A
promise can give more information. A promise expands the potential jollies
of a game.
[box]
In game theory, threats and promises exert influence on decision makers. The
level of influence depends on the timing. Whether you’re using commitments,
promises, or threats, you have to use them before the other player has made
a choice. That doesn’t mean that the game is already over, it means that the
other player has decided what they’re going to do. Even if we loosen that a
bit—the other player is “pretty sure” what they’re going to do, we can glean
some wisdom from the local blacksmith. It’ll be easier to influence the other
player’s choice when they haven’t thought much about it. Threats, promises, and
commitments are most effective when the other player is malleable. Strike while the
iron is hot.
[end box]
Reputation Redux
Reputation becomes a big part of promises and threats in repeated games.
Like the parenting example earlier, if you show an ability to do something
that makes you worse off in the short run, you gain credibility and can get
better results in the long run.
Putting a sign on the door of your restaurant that says “Restroom is
for Customers Only—No Exceptions” might seem harsh. It deters non-
customers from asking. If the restaurant owner is, by nature, kind and
considerate, she doesn’t like saying “no” to anyone. The problem is, her
customers are paying for that restroom, and there’s no way of limiting non-
customer use to just a few select people. Signals and strategic moves combine
here in an interesting way. The commitment—the harsh sign—isn’t because
the restaurant owner is mean. It’s because she is too nice and needs the sign
to reinforce the policy.
[box]
Dr. Strangelove, Teddy Roosevelt, and the Biggest “To-Do” for a Viable Threat
Similarly, President Teddy Roosevelt’s famous line “speak softly and carry a big
stick” might work in a jungle, or a zoo. He was trying to deter international
aggression by saying diplomacy should be backed up by military capabilities.
That said, you don’t want to speak too quietly lest your threat (aka, the stick)
not be recognized.
Successfully communicating your desire for the other player to change actions
is key. What’s less important is the use of specific, verbal communications.
Parents and economics professors alike will tell you that a stern glare will work
just as well as an actual threat.
[end box]
Bribes
Consider the game in Fig. 7.5.
If both players make their decision simultaneously, without cooperation,
acting in their own best interest, and knowing the entire payoff matrix, they
will end up with Player 1 getting 11 and Player 2 getting 15. That’s because
                        PLAYER 1
                   A                B
              Player 1: 11      Player 1: 9
     A        Player 2: 15      Player 2: 21
PLAYER 2
              Player 1: 9       Player 1: 10
     B        Player 2: 10      Player 2: 12
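Why the game lands on Player 1 getting 11 and Player 2 getting 15 can be checked directly: Player 2 has a dominant strategy to choose A, and Player 1 best-responds to it. A minimal Python sketch (our illustration; the dictionary layout is ours):

```python
# Fig. 7.5 payoffs, indexed by (Player 2's move, Player 1's move).
# Each cell holds (Player 1's jollies, Player 2's jollies).
payoffs = {
    ("A", "A"): (11, 15),
    ("A", "B"): (9, 21),
    ("B", "A"): (9, 10),
    ("B", "B"): (10, 12),
}

# Player 2 compares rows for each fixed column: A beats B whatever
# Player 1 does (15 > 10 and 21 > 12), so A is dominant for Player 2.
p2_dominant = all(payoffs[("A", p1)][1] > payoffs[("B", p1)][1] for p1 in "AB")
print(p2_dominant)  # True

# Player 1 best-responds to the certain A: 11 (from A) beats 9 (from B).
p1_choice = max("AB", key=lambda p1: payoffs[("A", p1)][0])
print(p1_choice, payoffs[("A", p1_choice)])  # A (11, 15)
```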
Conclusion
We make promises, threats, and commitments all the time. Little did we
know that these things are always game-theoretic in nature. Strategic moves
are attempts to influence another player in a situation where two or more
decision-makers will determine the outcome.
Commitments depend heavily on a lot of information being easily avail-
able all of the time. Commitments always operate by moving one player away
from a dominant strategy toward a choice that is better for both players.
Dominant strategies don’t change. The player making the commitment is
always better making that choice. That’s because commitments work when
they convince the other player to change his choice simultaneously.
Among the interesting things that we discovered is that strategic moves,
especially threats, sometimes change all future games between those players.
A threat reveals information. It tells the other player “where your head is.”
So, in addition to the direct outcome of the threat, we usually have a change
in the information set for all games between these people—including future
interactions. Unlike in classroom game theory, commitments, promises, and
threats have real, long-term implications for those of us who go through life
interacting with others. Strategic moves can alter your reputation dramatically
in all walks of life.
Questions
time comes, you have to break your promise because of a family emer-
gency. She’s devastated. When you try this again for her fifth birthday,
what should you do differently?
7.3. Death threats are interesting. Ignoring everything related to the after-
life, and assuming the threatener is making you choose between your
money or your life, how would a credible death threat dramatically
change future payoffs? What is the implicit promise behind this threat
that might change your choices?
7.4. Suppose a bully says, “give me all of your money or I’ll beat you up” and
you give him your money. On seeing this, someone says, “you have to
stand up for yourself.” Aside from the fact that they weren’t the one who
was going to be beaten up, how is that advice-giver seeing your situation
differently? (hint: think about sequential games.)
7.5. Based on what we’ve discussed about game structure, why can’t you
make promises and threats in simultaneous games under normal circum-
stances? What about the fulfillment of a promise makes it impossible?
8
Life Imitating Art: Repetition and Reputation
                              BUTCH
                    CONFESS              DO NOT CONFESS
               Butch: 5 years         Butch: 10 years
   CONFESS     Sundance: 5 years      Sundance: 1 year
SUNDANCE
               Butch: 1 year          Butch: 3 years
   DO NOT      Sundance: 10 years     Sundance: 3 years
   CONFESS

Fig. 8.1 Butch and Sundance’s Prisoner’s Dilemma
will wait for the other one before they play (rob a bank) again. Butch and
Sundance are besties.
If both players are simply trying to get the shortest combined prison
sentence, each would confess every time the game is played. Confirm that
choice by working backwards through Fig. 8.2. Look at what each player
would do in the second choice and work backwards.
Once you have payoffs at the second and first choice, you can combine
them and put everything back in a new payoff matrix. Here’s what we mean:
Look at the endpoints of the tree. There are four cases in which Butch
confesses in the second game. When he does, Sundance has the following
potential payoffs:
Sundance:   C → 10    C → 15    C → 6     C → 8
            N → 15    N → 20    N → 11    N → 13
Once again, Sundance has the incentive to confess in all cases. So,
Sundance has a dominant strategy to confess in the second round of the game.
Fig. 8.2 Two rounds of the game. Each line starts from a first-round outcome
(Butch’s years/Sundance’s years); C = confess, N = do not confess, with Butch’s
second-round choice listed first. Endpoints show combined years as
Butch/Sundance.

First round 5/5:    C/C → 10/10   C/N → 6/15    N/C → 15/6    N/N → 8/8
First round 1/10:   C/C → 6/15    C/N → 2/20    N/C → 11/11   N/N → 4/13
First round 10/1:   C/C → 15/6    C/N → 11/11   N/C → 20/2    N/N → 13/4
First round 3/3:    C/C → 8/8     C/N → 4/13    N/C → 13/4    N/N → 6/6
Similarly, there are four cases in which Sundance confesses in the second
game. When he does, Butch’s possible payoffs are:
Butch:   C → 10    C → 6     C → 15    C → 8
         N → 15    N → 11    N → 20    N → 13
In each case, Butch chooses to confess in the second round. He too has a
dominant strategy to confess.
Now, work backward. Since Butch and Sundance both have a dominant
strategy to confess in the second round, that limits the possible outcomes for
the first round. Of the sixteen possible outcomes in round two, only four of
them could happen.
After the second game, the only possible outcomes are:
Butch:      10     6      15     8
Sundance:   10     15     6      8
Since they are added together, these four outcomes can become a new
payoff matrix for the two games (Fig. 8.3):
When we combine the outcomes from both periods, we see once again,
that both Sundance and Butch have a dominant strategy to confess. Knowing
                              BUTCH
                    CONFESS              DO NOT CONFESS
               Butch: 10 years        Butch: 15 years
   CONFESS     Sundance: 10 years     Sundance: 6 years
SUNDANCE
               Butch: 6 years         Butch: 8 years
   DO NOT      Sundance: 15 years     Sundance: 8 years
   CONFESS
they will confess in the second round means they will also both confess in
the first round. Playing the game twice with self-interested players did not
change anything for Butch and Sundance.
We’re sorry to have put you through that, but here’s the thing: if the setup
doesn’t change in any way, playing the game five times or ten or fifteen times
wouldn’t change the choices or the outcomes. Why? The Last Game Matters:
each player always has an incentive to confess in the last period. Working
backward, that means they would confess in every period that comes before,
no matter how many games they play.
In game theory, we call this a last-period problem: self-interested players
are always self-interested in the last period, and that strategy feeds back to
earlier games.
No End in Sight
We’re all comfortable with the Prisoner’s Dilemma at this point. However, if
the payoffs are based on years in jail, the game doesn’t lend itself to a sensible
discussion about repetition.
Let’s go back to our two stores, Jane’s Store and John’s Store. For this
new discussion, assume the stores are the same size. Recall, each store is
trying to decide if it should cut prices and capture more market share. Both
store owners know the payoff matrix (aka, complete information). Figure 8.4
illustrates each store’s profits in all possible outcomes.
As with the Prisoner’s Dilemma, if you play the game once, the game will
end up with both stores getting profits of $3 M when they both could have
gotten $5 M. Not ideal. Let’s confirm that and also add some details about
the game.
Setup: Jane’s and John’s choose simultaneously, without cooperation. Each
store wants to maximize profits.
The folks at John’s store see the game like this:
• If the people at Jane’s store cut their prices, profits at our store will be either
$3 M or $1 M. $3 M is better and that happens if we also cut prices.
• If Jane’s owner doesn’t cut prices, our profits will be either $10 M or $5 M.
$10 M is better, so again, we’ll cut prices.
Regardless of what Jane’s owner does, John’s owner has a dominant strategy
to cut prices.
                           JOHN’S STORE
                     CUT PRICES            DON’T CUT PRICES
                  John’s: $3 million       John’s: $1 million
   CUT PRICES     Jane’s: $3 million       Jane’s: $10 million
JANE’S STORE
                  John’s: $10 million      John’s: $5 million
   DON’T CUT      Jane’s: $1 million       Jane’s: $5 million
Spoiler alert: the owner of Jane’s store has the exact same numbers in the
payoff matrix (just remember to look up and down in the columns), so the
folks at Jane’s also have a dominant strategy to cut their prices.
If this game is played once, both stores cut prices and they both get profits
of $3 M. If neither of them cut prices, they could have $5 M each. How can
we get there?
Maybe they should arrange a meeting and agree not to cut prices? Unfor-
tunately, in many places, that’s illegal. It’s called price collusion, and you
can see why it’s banned: collusion usually keeps prices higher, and that hurts
consumers.
Given that reality (same as the Prisoner’s Dilemma), the result doesn’t
change if you play the game two, five, or ten times with this setup. Each
store should/would always cut prices in the last period; they have nothing to
lose. That choice “feeds back” to every previous round.
What if you never had a last period? That makes a huge difference.
Knowing there’s no last period means you don’t have a last-period problem.
The same mechanism that perpetuates the self-interested choice (cut prices),
also works for a cooperative outcome. If the incentive to cut prices is never
there (in a last period), the cooperative choice prevails—forever.
If John’s and Jane’s play forever, there are only two possible scenarios that
could occur every time they review prices: both stores cut forever or both
stores don’t cut forever. In repeated play, one player may deviate occasionally,
but look at the payoffs. With that kind of decline in profits after a single
game, neither player would make the same choice more than once. There
may be momentary lapses, but in aggregate, each player would make the
cooperative choice every time.
True, we can’t play the game to infinity, but we can approximate it. We
can play the game a thousand times, and our outcomes would be in billions
of dollars. The payoff matrix in Fig. 8.5 is what we call an infinity payoff
matrix. It combines “forever” numbers and “one-time” numbers to illustrate
a repeated game. Because the only options that could occur every period are
both store owners cut prices (forever) or both store owners don’t cut (forever),
each store can approach the game with those numbers.
The stores can cut prices forever and get $3B or not cut prices forever and
get $5B. The game now has two Nash equilibria, and in the don’t-cut (forever)
equilibrium each store is better off by $2B. Each store’s owner sees that if they’re
ever tempted to be self-interested and cut their prices just to get one period of
higher profits, they risk losing $2B forever!
This aggregated, “infinity” payoff matrix shows how the short-run gains
from being self-interested are (far) less than the long-run loss. The key to
this result is imagining the game will be played forever and seeing any one-
time noncooperative behavior as a switch, of sorts. Multiplying the outcomes
by a thousand is a way of magnifying the results for either of the only two
outcomes that could happen forever. This is a useful abstraction.
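For readers who want to check the arithmetic, here is a minimal sketch of that aggregation. The per-period figures are inferred from the chapter's totals ($5B and $3B over a thousand plays, with a one-time $10M windfall for cutting prices against a cooperating rival):

```python
# Aggregated ("infinity") payoffs for the repeated pricing game.
# Per-period payoffs in millions of dollars (inferred from the chapter's
# thousand-game totals; treat them as illustrative).
PERIODS = 1_000
COOPERATE_PER_PERIOD = 5      # both stores hold prices
DEFECT_PER_PERIOD = 3         # both stores cut prices
ONE_TIME_WINDFALL = 10        # cut once while the rival holds prices

cooperate_forever = COOPERATE_PER_PERIOD * PERIODS   # 5,000 (i.e., $5B)
defect_forever = DEFECT_PER_PERIOD * PERIODS         # 3,000 (i.e., $3B)

# Defect once, then face mutual price-cutting every remaining period.
defect_once = ONE_TIME_WINDFALL + DEFECT_PER_PERIOD * (PERIODS - 1)

print(cooperate_forever, defect_forever, defect_once)  # 5000 3000 3007
```

The one-period windfall barely moves the total; the $2B gap between cooperating forever and defecting is what disciplines both stores.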
As we said, “forever” doesn’t have to be forever. Dominant competitors
in the industry play this game quarterly, sometimes weekly—and any large
number of games can achieve this aggregated outcome. Here’s an interesting example:
Fig. 8.5  The “infinity” payoff matrix for Jane’s and John’s stores
(columns: John’s choice; rows: Jane’s choice)

                        JOHN CUTS PRICES      JOHN DOESN’T CUT
JANE CUTS PRICES        John’s: $3 BILLION    John’s: $1 MILLION
                        Jane’s: $3 BILLION    Jane’s: $10 MILLION
JANE DOESN’T CUT        John’s: $10 MILLION   John’s: $5 BILLION
                        Jane’s: $1 MILLION    Jane’s: $5 BILLION
[box]
Price Cooperation
Legally, stores can’t cooperate to fix prices. The setup of Jane’s and John’s is
realistic in that sense: companies need to choose independently and simultane-
ously. Still, companies the world over collude all the time, since there are other
ways to signal understanding of a game and its outcomes. Jane’s and John’s
can’t directly say “here’s where we’re setting prices,” but they can put signs in
store windows that read “Price Match Guarantee.”
This works really well unless, and until, there is a last period. If one of the stores
is in distress, perhaps facing bankruptcy or closure, they give deep discounts,
knowing they won’t be around to play another day.
[end box]
While this view might be more aspirational, the expression implies that
cheaters will be noticed and singled out in a world where most people do
not cheat. Being rude, being a bully, being a jerk, anything that might be
described as an obvious noncooperator will make other players change their
behavior when they play games with you over time. Interactions with (games
played with) a cheater will have different choices and payoffs as time passes.
“Cheaters never prosper” is short and sweet and as these things go, a very good
expression. If an academic game theorist had come up with it, the expression
would have been, “noncooperative choices will evoke noncooperative choices
by other players in all future games and you will get a suboptimal outcome
over your lifetime.”
Many long-term perspectives help fix the problem that mortality creates in
decision-making.
One of the most direct tactics for solving the last-period problem is to
promise people an afterlife.
Most religions have tenets that help society achieve the cooperative
outcome every day. With some exceptions, religions typically have three
things in common: an omniscient, supreme being; a set of do’s and don’ts;
and an afterlife. All three can be game-theoretic.
First: do’s and don’ts specify cooperative and noncooperative behavior (aka,
sins). This helps societies and communities set boundaries for certain types
of choices and better specify daily games.
Second: an omniscient observer means you don’t need a sign in the window
or worry about direct communication; God is always watching.
Third, the afterlife: well… it completely eliminates the last-period
problem. The afterlife removes the incentive to make a “bad” choice simply
because your time on earth is ending. Bonus points for creating an afterlife
that depends on you making good choices before you get there.
It’s worth noting that the afterlife isn’t always heaven and hell. Reincarna-
tion is another form of perpetuity that allows the spirit or soul to live forever.
In religions that believe in reincarnation, how you live one life (sometimes
called your Karma) will determine your reincarnated form in the next. It’s the
presence of an afterlife, and its parameters, that rein in the human instinct to
act in self-interest, preserving the cooperative solution indefinitely.
Have you noticed that most “do’s and don’ts” in organized religions are
rather rigid? These laws aren’t presented as ambiguous, nor optional, and they
don’t allow for a lot of exceptions. Why? Allowing people to find exceptions
would defeat the purpose. Take what Granny called a “little white lie.” A
white lie is a lie perceived as having few—if any—consequences. They’re
used when lying is perceived as better than telling the truth. Understanding
that there is a case to be made for white lies, they are still lies and they can
change the outcome of future games. So, rules were written to forbid lying
outright, regardless of the circumstances.
The social structure of religion—sins, a supreme being, an afterlife—is
meant to move people away from being entirely self-interested and toward
more cooperation. The hope is that society’s outcomes are better in the long
run, and society prospers as it grows generation after generation. Religions
don’t exist only to solve games, but they do impact the quality and longevity
of the human experience.
8 Life Imitating Art: Repetition and Reputation 153
[box]
Cleanliness is next to Godliness
A lot of what Granny was doing was modeling good behavior.
When the parents are around, hand-washing and other forms of hygiene are
usually imposed. How do you get your kid to wash their hands consistently
when they are in kindergarten? Voila, “cleanliness is next to Godliness.”
[end box]
On Mortality
As we’ve seen, game theory gives us a new way to understand structures and
relationships that exist in the world around us. At the same time, it helps
to explain how the complicated interaction between biological and economic
realities has developed over time. One such case is the “last-period” human
problem: mortality.
Even with an afterlife, there are tangible things that happen as a person
approaches death and ultimately dies. For close friends and relatives, few
things are as challenging as watching a loved one going through this process.
Those two games—the stage of the game being played by the person who
is dying, and the ongoing game being played by the survivors—are very
different. The person who is dying is approaching the end of their
participation in all games (afterlife or not), but the others around them are
not.
New, complex games arise for the survivors that again, invoke things like
an intergenerational Golden Rule: do unto the dying as I want done to
me when I’m on my way out. It’s another heuristic that tells us to provide
comfort, support, and other personal things to the dying, so that we can be
treated the same when it is our time. Empathy is the emotional, biological
version of tit-for-tat.
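Tit-for-tat itself is simple enough to state in code. Here is a minimal sketch of the strategy made famous by Axelrod's repeated-game tournaments: cooperate first, then mirror the other player's previous move.

```python
def tit_for_tat(history):
    """Cooperate on the first move, then copy the opponent's last move.

    history: list of the opponent's previous moves ('C' or 'D').
    """
    return "C" if not history else history[-1]

# A brief run against a player who defects once and then returns to
# cooperating: tit-for-tat punishes exactly once, then forgives.
opponent_moves = ["C", "D", "C", "C"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves))]
print(my_moves)  # ['C', 'C', 'D', 'C']
```

The strategy never defects first, retaliates immediately, and forgives immediately, which is roughly how empathy and reciprocity operate in everyday repeated interactions.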
Let’s switch the example to something much more palatable: the death of a
Fortune 500, multi-billion-dollar, international conglomerate. The company
is floundering, people are jumping ship, investors are panicking. What should
you, as an employee, do?
For a person, there’s a direct way to close up shop when things get too bad:
a DNR (do not resuscitate) order. It is a commitment a person makes well
before a life-or-death decision needs to be made. If the person is too far
gone to save, a DNR says let them go; the choice is removed from all parties
involved.
Unfortunately, there is no DNR for major corporations. There is no
commitment to let the company die with dignity. And because of that, the
game is often harder to solve.
You know that if you and everyone who believes the company is dying
decides to leave, the company will die faster. Before you play “stay or leave,”
ask yourself this: does staying matter? Put differently, does your staying or
leaving have an effect on whether the company survives?
This brings up an important part of the setup that we haven’t yet discussed.
Game theory is used when you and others make choices that together deter-
mine the outcome. Just the existence of other decision-makers does not
mean you are in a game-theoretic situation. If there are a lot of others
making choices, but no one of you makes an appreciable difference in the
outcome, game theory has little to tell you.
Conclusion
Repetition does a lot more than teach you how to play games throughout
life. It can fundamentally change the outcome of games. Pushed to the limit,
playing forever, payoffs are now the collection of all plays—the “sum total” of
how you’ve lived your life. We’ve shown that some combinations simply can’t
happen every time in repeated play. Because of that, repeating some games
can change the outcome for the better over time.
“Forever” isn’t real for any mortal decision-maker. So, we find a way to
make it real. We are taught about an afterlife, family honor, and other ways
that the benefits of short run, self-interested behavior are dwarfed by the long-
run costs. The smaller the difference between short-run gains and long-run
losses, the more likely people are to be self-interested.
For all the expressions, social structures, and other ways that we try to get
people to ignore death, there are plenty of reminders to enjoy life. “Eat drink
and be merry, for tomorrow we die,” “Gather ye rosebuds while ye may,” and
“There’s no time like the present” are all reminders that life is short and finite.
Balancing those two perspectives is left to the individual and the situation.
The occasional indulgence is sometimes considered life-affirming, an
acceptable choice of short-run jollies. Constant indulgence, on the other
hand, is one of the seven deadly sins—gluttony—which together with its
six colleagues (lust, envy, wrath, pride, greed, and sloth) are all self-interested
game-theoretic choices that we are taught to avoid.
Questions
Assume they knew about the worst-case scenario before they started
drinking and ultimately driving before they caused the accident.
8.3. Why do we often observe lawless behavior in war-torn countries? Or,
why do they always show a lot of looting in movies when an asteroid is
about to hit earth?
8.4. An interesting tweak on the theme of this chapter is repeated games
but with different players. Arguably, a company is chartered with the
intention of being in business forever. Theoretically, its officers should
make cooperative choices that allow it to thrive most of the time into
perpetuity. What’s wrong with this assessment?
8.5. Theologians have argued that hell, the place of eternal damnation, was
not originally part of the biblical story (the foundation of many reli-
gions). So, there was heaven—eternal life after death in a good place,
for the good actors. Otherwise, you just got “death,” game over, end
scene. In what way does the absence of hell make the game of life more
problematic?
9
If Everyone Jumped Off a Cliff:
Brinkmanship
US leaders had to find a way to get the sites in Cuba dismantled (to them,
the only acceptable way to neutralize the major threat) without provoking
the USSR to start World War III. Some decision-makers, mostly in the mili-
tary, advocated for an invasion of Cuba or bombing the sites. Political leaders,
including President John F. Kennedy, worried that any such provocative move
could start the very nuclear war that they were trying to stop. Armchair
academics wrote paper after paper.
The US chose to impose a naval blockade off the coast of Cuba in inter-
national waters to block further Soviet military equipment from reaching the
nuclear sites. While still risky, a blockade on neutral territory could limit the
potential for damage and loss of life. It is what’s characterized in the military
as a passive strategy. One player puts up barriers to control the other’s reac-
tion in a subgame. The other player (in this case, the USSR) would have to
make the active choice, putting all responsibility for nuclear war on them.
In decisions large and small, we often choose to abdicate to another player.
In some games, like poker, we check the bet to the next player as a way to get
information about their potential hand. If you’re taking your new girlfriend’s
parents out for dinner, you let her pick the restaurant. Delegated authority in
decision-making absolves you of any fault that comes out of the decision. If
her parents hate the food, you can always say you didn’t choose it.
This is a game employees and managers at firms across the world know
well. You and your manager know you’re not happy with your job. Both
of you have a choice: to leave/stay (for you), and to fire/keep you (for the
manager). The manager might keep you even if they want you out. Why?
If you choose to leave, it’s your decision. The manager gets what he wants
without the guilt (or cost) of firing you.
Ultimately, the blockade worked. The USSR agreed to remove the missiles
and dismantle sites in Cuba. In exchange, the US removed missile sites in
Turkey.
Whew!
Diplomacy helped secure a mutually beneficial outcome—but smooth-
talking politicians were not the reason for success. The passive blockade
subgame allowed diplomatic negotiations to commence in the first place.
When the game was simply “who launches first,” diplomacy wasn’t even an
option.
Brinkmanship is not an ideal situation; we wish we never had to discuss
a “nuclear” option in our daily interactions. Yet, it highlights an important
aspect of the decisions we make. Recall, we said we always have another
option in any game setup: we can choose not to play. Brinkmanship shows
us that we also have another option, in certain circumstances: we can change
the game. As Granny may have said, if she listened to conventional polit-
ical wisdom: “if you don’t like what’s being said about you, change the
conversation.”
The Cuban Missile Crisis also revealed a fancy logical conundrum. Having
built up massive arsenals of nuclear weapons, would the leaders of either
country ever really “push the button” and annihilate millions of people? If
they weren’t willing to do it, why build the weapons in the first place? Why
spend on an arsenal if it wouldn’t constitute a credible threat?
The Crisis has been studied in game theory for half a century, and most
academics agree: the arsenal itself wasn’t the threat. The building of the
arsenal was the game-theoretic move. It was the first action in a subgame that
could have taken the world down a really bad path. Brinkmanship doesn’t
start with the nuclear option (in this case, literally). It is a series of strategic
baby steps that iterate to a nuclear last period. The arsenal moves each player
toward the brink slowly. The last-period result is clear from the start, with
the hope a better outcome may be found by players along the way.
Brinkmanship stems from a seemingly… well… weird part of human
nature: apparently, we’re willing to entertain the possibility of annihilation
just to see if we get something better along the way. We don’t always do this,
though. Sometimes, we draw a line and agree to not cross it. When you were
being particularly whiny about dinner, Granny would say “don’t start with
me.” When we sign a contract for services, we’re limiting how much each
party can escalate, thereby protecting everyone. Humans have created ways
to manage subgames to ensure we don’t go over the cliff because of every
little thing. Maybe we don’t have a death wish as a species after all.
Layers of Players
At the heart of brinkmanship situations are “too-big threats” that have conse-
quences so bad that no responsible person would ever directly choose them.
Starting a nuclear war clearly applies. There’s another unique aspect of
brinkmanship games: each player can have a different perspective of the game.
Players can come from the highest level (world leaders, corporate execu-
tives, heads of labor unions, etc.) and the front lines (factory workers, military
personnel in the field, perhaps even everyday citizens). In the Cuban Missile
Crisis, decisions were made by political leaders with input from military
generals. The situation didn’t just impact those groups, though. History is
fraught with imagery of bare store shelves, families huddled together, and
school children required to do nuclear war drills (hiding under desks, as
many will remember).
Situations that involve brinkmanship are legitimate uses of game theory, but
there isn’t a useful purpose in simplifying these complex situations enough
to make them look like our earlier games. By now, the idea of game theory
should be well rooted; the lightbulb should be on. We can see that there
are players, choices, and outcomes, and we can analyze a brinkmanship game
without a formal structure.
[box]
Following through on the “too big” threat is something that is made possible in
a multi-layered (different players), multi-period game if things go too far. Let’s
recap the Cuban Missile Crisis and try to see the game without any formal
structure.
A tense situation between military superpowers could have escalated into the
use of nuclear weapons. One decision, and players are, all of a sudden, too
far down that slippery slope to stop the nuclear outcome. Both sides will
let the public situation get closer to the bad outcome while more reasonable
negotiations proceed.
Negotiations are where information (not all, but some) is exchanged. It’s where
players start to understand possible outcomes and the other side’s preferences.
What Granny called a “no win situation” is one where negotiations aren’t
beneficial, maybe not even possible.
World War III did not happen, but union strikes happen all the time.
Brinkmanship is always tricky, occasionally catastrophic, and not driven by
any one payoff matrix.
[end box]
Preserving human life maximized jollies for one side, but not for the other.
Protesters standing in front of tanks—even with cameras watching—were
killed.
The WGA strike in July 2023 effectively shut down all California-based
production for movies, television, and online content. One side wanted
financial strength, the other wanted longevity. In this case, the financial
strength of many production companies, networks, and streaming services
was weakened. The pie that these unions want a bigger piece of was made
smaller. Although the conflict was resolved, and work resumed, the damage
to longevity was done and the entire industry, on which the WGA depends,
has shifted.
At the heart of most brinkmanship situations is a layer of decision-makers
who have different incentives, desires, or approaches to the ongoing problem.
In the Cuban missile crisis, the leaders of the US and the USSR wanted to
avoid nuclear war while getting a strategic advantage. The risk that both sides
saw when the Kennedy administration imposed a blockade on Cuba was that
a ship captain at the blockade would feel like his life was in danger, fire a shot,
and start a conflict. The presence of different players in a single game injects
a type of uncertainty that information may not be able to resolve. Kennedy removed
this uncertainty by bringing the interests of both sides—the human interest
in staying alive—closer together.
you’ve done it!”). Toys are taken away, voices are louder, the toddler is crying,
and Mom is… well, she’s not happy.
Did the payoff actually accomplish anything?
On one hand, your toddler, who is just now developing the skills of
strategic decision-making, knows that games can turn out very badly. Threats
of “I’ll take that toy away if…” might be more effective the next time. On
the other hand, you may have emotionally scarred your child. The “nuclear
option,” as parents put it, is almost never a risk we want to take; the downside
is just too severe.
We say “almost” because there are exceptions to every rule. Yelling damages
a child, yes. It’s probably not a great way to get her to clean up her room.
If she’s about to run into traffic, though, maybe yelling (and running and
scooping her up) is appropriate. In that case, emotional damage isn’t the worst
outcome possible in the game.
Like the straw that breaks the camel’s back, the tipping point is a point
beyond which a game changes dramatically and often for the worse. The situ-
ation and the imagery are different from the ones discussed in brinkmanship,
but the results are similar. Rather than a slippery slope where a situation slides
irrevocably toward a bad outcome, a tipping point is more like a switch or
a shift in weight that moves the situation from one side to another like a
playground see-saw.
Tipping points are described in heavy-duty mathematics and physics. A
process is stable for many possible data points, but if observations exceed a
certain level, the rules change (rules like gravity, inflation, what medication
will be effective, etc.…). If we were to combine tipping points and brinkman-
ship, we’d be trying to figure out the point beyond which sliding down the
slippery slope cannot be stopped. During the Cuban Missile crisis, both sides
recognized that any large-scale military strike would be a tipping point, and
they sought to avoid it.
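The threshold idea can be illustrated with a toy model. Everything here is made up for illustration; the point is only that the same process is self-correcting on one side of a threshold and runs away on the other:

```python
def step(tension, escalate, threshold=10, decay=1):
    """One round of an escalation process with a tipping point.

    Below the threshold, tension decays back toward calm on its own.
    At or past the threshold, each round feeds back and adds tension,
    so the slide can no longer be stopped by small corrections.
    """
    tension += escalate
    if tension < threshold:
        return max(0, tension - decay)   # stable side: self-correcting
    return tension + 2                   # tipped: runaway feedback

# Repeated small provocations stay manageable...
t = 0
for _ in range(5):
    t = step(t, escalate=2)
print(t)  # still well below the threshold

# ...but one large move past the threshold changes the rules.
print(step(t, escalate=8))
```

Notice that once the process tips, "taking out the last straw" (a small negative escalation) no longer brings the state back below the threshold.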
“Tipping point” is one of several game-theoretic expressions that have become
overused and often misused. Not everything that ultimately goes to hell—
has a bad outcome—did so after passing a tipping point. One player making
the cooperative choice and the other making the self-interested choice in a
Prisoner’s Dilemma style game gives the cooperator a bad outcome. That’s just
the result. There’s no slow, iterative approach in play. It’s just what happened.
Furthermore, the mere presence of a temporal element—sequential games
or repetition of simultaneous games—doesn’t mean a tipping point exists. In
the Nancy Reagan game “Just Say No,” trying drugs allows for the possibility
of becoming a drug addict, but there’s no single point beyond which you are
officially addicted. Where that point lies depends on a range of human
physiological factors. It’s why the strategy is recommended: we can’t say for
sure where the
point-of-no-return is, so just don’t start.
The key aspect of a tipping point is the dynamics of the decision-making
process. Like all of our games and all of the results that we’ve described, this
tipping point outcome depends on the structure of the game.
Avoiding this problem, staying on the correct side of the tipping point, and
fixing this problem (getting back on the right side of the tipping point after
you’ve passed it) requires more than just an understanding of tipping point
dynamics. It requires an ability to monitor the situation as it’s happening.
There needs to be a way for decision-makers to see the point coming and
understand that reversing the outcome will require more than just taking out
the last straw. Once a camel’s back is broken, the fix is much harder.
A Fine Arts graduate student is heading home with a $200,000 painting in her
folio. The curator of the museum said she could take it home for the weekend
to continue research she was doing on the piece. “Just be very careful with it
and bring it back on Monday,” she was told.
The student was exiting a subway car when the folio got caught in the auto-
matic sliding doors. Before she knew what to do, the train started moving. Still
trying to extricate the folio, she was flung into a support column in the station
which caused her grave bodily damage.
She sued the transit authority for millions of dollars. City lawyers simply said,
“You should have let go of the folio.” Her lawyers said, “Why was your subway
system designed in such a way that she had to make that difficult, split-second
decision?”
Much like a brinkmanship game, people are sometimes put into a situation
that quickly begins to spiral out of control. Correct choices are hard to make.
Lawsuits like this one are decided by juries who focus on whether the plaintiff
is reasonable or unreasonable (not right or wrong); they gauge the rationality
of the decision at hand. It’s easy for jurors to put themselves in the student’s
predicament and to say to themselves, “she shouldn’t have had to (almost) die
on that small of a hill.” The student won the lawsuit.
[end box]
Whistleblowing
You see something illegal—and possibly dangerous!—happening at your
company. When you report it to your boss, he tells you to mind your
own business. When you insist that this problem could hurt customers or
employees, he makes up a reason to fire you. The system, the structure of
many companies, allows that to happen.
But you were doing the right thing. Why did that have to be your hill?
In cases like this, we don’t want people to think that doing the right thing
is going to cost them their job. So, the US Federal Government (and many
states) decided it shouldn’t have to be. In 1989, the US government enacted
laws that protected whistleblowers from retaliatory punishment. Rules were
written to protect good actors from facing what was, to them, a too-big
threat. Society decided that doing the right thing shouldn’t be a punishable
offense.
Conclusion
“Taking something to the brink” is a phrase that people use all the time.
Brinkmanship in a game-theoretic setting is most often associated with
complex situations featuring many people, or even groups of people (like
governments and executive boards). There are layers of decisions and a poten-
tially enormous, often catastrophic outcome. The point of brinkmanship is
leaders getting their “people” geared up for some titanic move.
While the discipline originated in studies of nuclear war, Brinkmanship is
applicable to many less dramatic situations. Taking a situation to the brink,
what some people might remember as “playing chicken,” can have good
outcomes as well.
Brinkmanship is an area of game theory that explains the presence of a
too-big threat. As humans, we often wish humanity wouldn’t have to get to
a certain point. We don’t want nuclear war “on the table” in any way, shape,
or form. When a brinkmanship game fails, when the threatened action takes
place and we experience the enormous loss, we get the sense that there has
to be a better way. Your optimistic authors hope that some improvement
in humanity—better approaches to negotiations, more perspective, more
information, or a way to remember we are all in this together—could help.
Questions
9.1. Here’s a phrase you’ve probably heard: “cooler heads prevailed.” Short,
simple, and to the point, the phrase is saying that all of the decision-
makers were invested, but the outcome was determined by unemotional
players. While game theorists certainly appreciate that mentality, it
might not be entirely correct.
Return to our discussion of high-level leaders vs. frontline followers.
Why might somebody on the front lines not have a cool head? When
are high-level leaders in a better position to decide? When are frontline
followers?
9.2. Here’s another relevant expression: “defuse the situation.” Describe a
stick of dynamite or a firecracker from the lighting of the fuse to the
end. Does that sound like a brinkmanship situation? If so, why is the
only good solution to the situation defusing it?
9.3. You could flip the brinkmanship situation upside down. Suppose the
player with the long-term horizon is not the high-level leader, but the
frontline player. What does this mean for the decision-making capabili-
ties or the rationality of the high-level individual? (hint: think about the
horizon of an elected official.)
9.4. Egomania, megalomania, and narcissism are ways of describing a love
of oneself beyond all reason. Self-glorification is not irrational, but it
doesn’t make for a very effective leader. Why?
9.5. In studying brinkmanship, you will hear phrases like “chickening out,”
“blinking,” and “flinching.” In all cases, the player that backed down is
portrayed as weaker. Consider the game known as Chicken in which the
players drive cars toward each other, usually at great speed, and see who
swerves first. Aside from “strong,” what else would you call a player who
never, under any circumstances, swerved? (Answer: eventually, dead!)
Given that, what’s the point of creating an image of weakness for players
who occasionally “blink?”
10
All for One, and One for All:
Collective-Action Games
What if the “other player” isn’t one player, but a whole group? Game theory
principles still apply to situations where you are making choices in a game
you’re playing with an amorphous “everyone else.” Granny knew about this
problem. You said to her, “you have to let me go to the party, everyone will
be there!” She might have responded, “If everyone was jumping off a cliff,
would you jump too?”
Early in life, you are warned against mindlessly following the herd,
drinking the Kool-Aid, being the lemming that swan dives off the cliff with
the others (bringing your species to an end). Until now, we’ve focused on
more individual games, where self-interest and cooperation strategies yield
the best outcome for you. Group-based games are a bit different, because the
goal is to align interests between and within each group. This area of game
theory is known as collective-action games.
Game theory deals with situations where one player makes choices, another
player makes choices, and the two combine to determine an outcome.
Usually, that means there aren’t very many decision-makers. It may be easy
to understand the motivations of one or two co-decision-makers. It’s a little
harder with three or four. When there are many, many simultaneous decision-
makers, you really have no clue about each person’s preferences. In these
games, you could make choices without regard to everyone else, since there
isn’t a way to specify the game correctly anyway.
Or is there?
Collective-action games are ones where you explicitly consider the choices
of everyone else when making your decision. These games often show up
when the outcomes, the payoffs, are societal. It could be something as simple
dollar bill on the sidewalk.” She knew someone else already found it and
picked it up!
Economics, Mathematics, and Finance use tightly controlled models to
predict human behavior. These disciplines are allowed to use words like “per-
fect” and “efficient,” because they control the parameters in mathematical
models. Game theory, as an exploration of human behavior, does not. We
have many examples of situations where an individual does, in fact, take the
choices of the group into account when they make their own decisions. Firms
do make profits, people make money in the market, and there is such a thing
as a premium brand.
Representative Government
In some cases, collective-action games are ones where people have found ways
to remove the irrelevance of single players. Put another way, they have figured
out how to row in the same direction to maximize effect. A representative
republic is one such system. In the US, we’ve been doing this for so long that
we don’t really think about what it means. In theory, one person represents
the interest of thousands (or millions), and decision-making gets a whole lot
simpler.
Creating this super-entity, the government, by choosing representatives to
make decisions for the group gives a small number of people a lot of power.
They are supposed to find a way (usually a very un-analytical way) to maxi-
mize the jollies of the people they represent. We give these people titles to
remind them they work on behalf of thousands of people, not just themselves.
[box]
The democratic ideals that Abraham Lincoln invoked in his brilliant speech,
“The Gettysburg Address” are deeper and more subtle than we imagine. In
using the word “people” three times in short order, he was certainly high-
lighting the notion that we elect one of our own when we decide who will
govern us.
[end box]
besides cheer at your TV. The athletes did all the work, but you’re part of
their “group,” you’re after the same goal, and you find jollies in their success.
The same mechanism is at work in political elections—for better or worse.
You can be politically apathetic (which, statistically, most people in the US
are), completely abdicating decision-making jollies to elected officials. Or,
you can feel empowered as part of a political party. Sometimes, as trends
change and information becomes more available, individuals find themselves
misaligned with the choices of the groups they consistently voted for. In
recent presidential elections, for example, many voters have said they feel
disenchanted with all available candidates and want something “new.”
Cartels
Let’s step away from the political grandstanding (get it?) for a moment. There
are plenty of business or economic examples of collective-action games too.
In economics, we learn that a monopolist—a single dominant firm in an
industry—has the ability to change market output and thus market price.
Monopolists can alter supply by producing fewer units, which leads to higher
market prices. In most circumstances, monopolies see higher profits than
firms in more competitive industries.
A cartel is a group of companies (or countries, in the case of OPEC)
that collude with each other to set prices for the market. They communi-
cate directly, so they don’t have to suffer lower prices on the open market.
For this reason, economists sometimes call them a “group monopoly.”
Forget for a moment that price collusion is illegal in most places. Interna-
tional law is a little fuzzy on this topic.
Suppose there are five equal-sized firms, each producing one thousand
pencils a day. Current market demand says, together, the firms can sell five
thousand pencils at $5 each. In a competitive market, we say $5 is the
break-even price: it covers the cost of producing each pencil, and no firm
makes excess money above the cost of production.
To make excess profits, the industry needs to reduce output to (say) 4500
pencils in total, across all 5 firms. This would increase the market price to $12
per pencil, which represents higher revenues and lower costs (because they
are decreasing production). This approach would certainly maximize industry
profits. So, each firm agrees to produce 1/5 of 4500—or 900 units—and
evenly split the new industry profits.
Does the game end there? Do all firms make profits forever? Nope!
It turns out, while this level of production might represent the maximum
profits for the industry and the firms split profits evenly, it does not repre-
sent the maximum profits for each firm. Consider Firm #3. It would say:
“if I alone increase my output just a little—from 900 to 910—that wouldn’t
decrease the market price because I’m such a teeny part of the whole industry,
BUT, I can sell ten more units and get slightly higher profits.” Firm 3
increases production this “negligible” amount.
Then firms 4, 5, 1, and 2 follow. They all see the same thing. This is an
individual versus group problem. When all five firms produce “a little” more,
prices and profits fall for the whole group, and the entire cartel is worse off.
Every individual firm has incentives to do things that are good for them and
bad for the group.
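The arithmetic behind this chain reaction can be sketched in a few lines of Python. The linear demand curve below is an assumption fitted to the two points in the text (5,000 pencils sell at $5; 4,500 sell at $12):

```python
# A sketch of the cartel's incentive problem using the pencil numbers
# from the text. The demand curve is an assumed straight line through
# the two stated points: (5,000 pencils, $5) and (4,500 pencils, $12).
UNIT_COST = 5.0  # $5 covers the cost of producing each pencil

def price(total_output):
    """Linear demand through (5000, $5) and (4500, $12)."""
    return 75.0 - 7.0 * total_output / 500.0

def firm_profit(own_output, total_output):
    return own_output * (price(total_output) - UNIT_COST)

# The cartel agreement: five firms at 900 units each.
print(round(firm_profit(900, 5 * 900), 2))   # 6300.0 per firm

# Firm 3 reasons as if its extra 10 units leave the price at $12...
print(910 * (12.0 - UNIT_COST))              # 6370.0 -- looks like a win

# ...but when all five firms sneak up to 910, the price really falls.
print(round(firm_profit(910, 5 * 910), 2))   # 5733.0 per firm -- worse off
```

Under the perceived "price won't move" reasoning, each firm sees a gain; once everyone acts on it, every firm earns less than under the agreement.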
Most collective-action situations have this problem. Individual incentives
are not well-aligned with group outcomes. Notice: when these individual
incentives have negative group outcomes, even the individual is worse off
in the long run. The individual can make a choice that impacts the entire
group by starting a chain reaction. Cartels operate in repeated games.
With a completely different setup, we have effectively recreated the Pris-
oner’s Dilemma result. When individuals think too much about themselves
and choose in a self-interested way, the group—society—is worse off.
Labor Unions
Once upon a time, a remote mountain town’s largest employer was a coal
mine operator. Because it was large and the only shop in town, it could take
advantage of dominance in the local labor market to keep wages low. “If you
don’t take this job at what we’re offering, there’s a dozen other people who
will.” With no other choice, this approach kept many laborers poorer than
they had to be (and working in terrible conditions). The coal mine operator
was a monopoly, and it could set prices (for labor, meaning wages) wherever
it wanted.
Then one day, someone came along and said: “hey, join a labor union and
get better wages for all of you.” Basically, if all qualified mine workers band
together, they reclaim power to set wages. They can tell the mine operator
what to pay them, or else. They now have what’s called collective bargaining
power.
Forming and acting like a coordinated group changes power balances in a
game. Coordination games have different distributions of power and informa-
tion which can change the outcomes. Power shifts are not calculated, though;
10 All for One, and One for All: Collective-Action Games 183
Fair Representation
We are not going to talk about the headline-grabbing election issues that
have plagued the US and other countries. Instead, we want to note some
interesting, game-theoretic aspects of election procedures. To do this, we have
to go way back… to the origins of modern governments.
People owned land and farmed/grew everything they needed. Then, they
started to interact. I’ll trade you a chicken for a few ears of corn. Humans
interacting can cause problems sometimes, and they would deal with those
problems like human beings: killing each other, imprisoning people, acting
animalistically. Still, societies grew. Skip forward a millennium or so, and we
have fiat money: currency, dollar bills, and pieces of paper we use instead of
exchanging a chicken for wrapping paper. Things cost money. We also deal
with things a little better… call the police if somebody breaks into your store.
At this point in evolution, people start demanding things that are good
for society: better roads, safe places to gather, ways to make sure food is safe.
People elect representatives to make decisions for an entire group because,
well, the group is pretty large at this point. They elect a single official (or a
team of officials known as public servants) to do the research and come to a
beneficial outcome. People do this so they can focus on their own lives and
livelihoods, delegating authority to others to make informed decisions for the
greater good.
People expect these things—rules, parks, standards, whatever—to be avail-
able to everyone. So, you ask the people: “how much are you willing to pay
for these things?” Everyone says: “well, nothing.” Maybe taxes, but even those
have an acceptable limit. How much do we want to pay? Nothing.
In a large society, we observe what’s called a free-rider problem: individuals
want the benefits of government choices without paying for them. Problem is, if
no individual pays for it, there’s no money and there aren’t any government
programs.
We solve this problem by electing representatives who know that they
speak for many. We give them money, and they—having agreed to abide by
the choices and decisions of their electorate—decide where/when to spend
that money. Perfect! What could go wrong?
You’re running the election. You assign points by rank—4 points for the
candidate a group lists first, 3 for the one listed second, etc.—and you add
up the points. The final results are:
Ed = 4 + 1 + 1 + 4 = 10
Fred = 3 + 3 + 3 + 3 = 12
Pat = 2 + 2 + 4 + 1 = 9
Wimbley = 1 + 4 + 2 + 2 = 9
Fred wins! The fact that no voting group actually liked Fred the most is irrel-
evant! Put differently, Fred doesn’t actually represent anyone the best. He’s
everyone’s second choice.
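The tally can be reproduced in a few lines. The four group ballots below are reconstructed from the point totals above (notice that every group ranks Fred second):

```python
# Reproducing the ranked-vote tally. Each group's full ballot is
# inferred from the point totals in the text; 1st place = 4 points.
ballots = [
    ["Ed", "Fred", "Pat", "Wimbley"],      # group 1: 4, 3, 2, 1 points
    ["Wimbley", "Fred", "Pat", "Ed"],      # group 2
    ["Pat", "Fred", "Wimbley", "Ed"],      # group 3
    ["Ed", "Fred", "Wimbley", "Pat"],      # group 4
]

scores = {name: 0 for name in ballots[0]}
for ballot in ballots:
    for place, candidate in enumerate(ballot):
        scores[candidate] += 4 - place  # 1st = 4, 2nd = 3, 3rd = 2, 4th = 1

print(scores)                       # {'Ed': 10, 'Fred': 12, 'Pat': 9, 'Wimbley': 9}
print(max(scores, key=scores.get))  # Fred
```

Fred never tops a single ballot, yet his steady second-place finishes carry the total.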
What does this system give a society? Well, in ranked systems like this,
candidates learn that they should not reveal things about themselves that
would be problematic for any one voter. It encourages more middle-of-
the-road, less-polarizing policies that are digestible by more groups in the
population. The arithmetic is simple: consistently high rankings sum to more
than a mix of extremes, because every very low score offsets a very high one.
Ranking systems have not made their way into mainstream elections. While
they seem to be more “fair,” they assume that the difference in magnitude
between the candidates is uniform—one point for each
difference in the rankings. Voters might not want to give any points to some
candidates, or they want to give all of their points to another. That’s been
tried as well, and it has the same potential flaws. Suppose each voting group
had a strong preference for a different candidate. They would each get four
points, and no one would win!
There’s also a problem with primary elections. Suppose the system says
Ed and Fred will face off in one primary and Pat and Wimbley in another,
with the winners of each going head-to-head to determine the overall
winner. To make this interesting, suppose these are the voters’ preferences for
all possible head-to-head combinations (recalling that these preferences could
be based on anything):
Ed will win his primary. So will Wimbley. Then, Wimbley will win the final
election. Fred could have beaten either Pat or Wimbley, but he couldn’t
(didn’t) win his primary. Elections with primaries are, in effect, two-step,
sequential games. Voters who are being strategic need to think about winning
the final election (the last period), not the primary. But, history has shown
us that’s easier said than done.
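A small sketch makes the two-stage structure concrete. The head-to-head results below encode what the text tells us; the Ed-versus-Pat pairing is never stated, so it is an assumption here:

```python
# The two-stage primary game. Head-to-head results follow the text,
# except Ed vs. Pat, which is assumed for illustration.
beats = {
    ("Ed", "Fred"): "Ed",            # Ed wins his primary
    ("Pat", "Wimbley"): "Wimbley",   # Wimbley wins his
    ("Ed", "Wimbley"): "Wimbley",    # Wimbley wins the final
    ("Fred", "Pat"): "Fred",         # Fred could have beaten Pat...
    ("Fred", "Wimbley"): "Fred",     # ...or Wimbley
    ("Ed", "Pat"): "Ed",             # assumed
}

def winner(a, b):
    # Look up the pairing in either order.
    return beats.get((a, b)) or beats.get((b, a))

def run_primaries(bracket):
    # Two primaries, then the winners go head-to-head.
    (a, b), (c, d) = bracket
    return winner(winner(a, b), winner(c, d))

# The bracket described in the text:
print(run_primaries([("Ed", "Fred"), ("Pat", "Wimbley")]))  # Wimbley
# Swap the pairings and a different candidate wins:
print(run_primaries([("Fred", "Pat"), ("Ed", "Wimbley")]))  # Fred
```

Notice that simply reordering the bracket changes the winner—the timing of choices matters.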
Agenda Setting
If someone is in charge of the order of head-to-head choices, that person can
set the agenda—the timing of choices—to get the final result that they prefer.
If Fred faced Pat in the primary, we may have a different outcome.
Let’s do a homey example of this kind of gamesmanship.
Suppose the family—Mom, Dad, Timmy, and Betty—are going to watch
a movie together. They’ve narrowed the choices down to four possibilities:
Star Trip, Undersea Adventure, Cowboy Times, and Baseball Blues.
Mom is in charge of the final selection, and she knows two things: her
personal preference (Undersea Adventure), and the head-to-head preferences
for the family. The head-to-head preferences are:
Mom is going to ask the family to make pairwise choices which will elim-
inate one movie and bring the other on to the final round. Mom is really
competitive. She sees that the only thing that could beat Undersea Adventure
is Star Trip. So, she sets the first pairwise choice between Baseball Blues and
Star Trip. Once she’s done that, Undersea Adventure can beat Cowboy Times
and Baseball Blues. If she starts with any other pair, she won’t get to see her
movie (and Mom, like Granny, knows game theory).
Run pairwise eliminations on the combinations above to see how each
movie could “win.”
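Here's one way to run that exercise in Python. The pairings marked "assumed" are not given in the text; the rest follow Mom's reasoning:

```python
# Mom's agenda-setting game. Pairings marked "assumed" are not in the
# text; the others follow the chapter's description.
beats = {
    ("Star Trip", "Undersea Adventure"): "Star Trip",
    ("Baseball Blues", "Star Trip"): "Baseball Blues",
    ("Undersea Adventure", "Cowboy Times"): "Undersea Adventure",
    ("Undersea Adventure", "Baseball Blues"): "Undersea Adventure",
    ("Star Trip", "Cowboy Times"): "Star Trip",          # assumed
    ("Cowboy Times", "Baseball Blues"): "Cowboy Times",  # assumed
}

def winner(a, b):
    # Look up the pairing in either order.
    return beats.get((a, b)) or beats.get((b, a))

def eliminate(agenda):
    """Sequential pairwise elimination: each round's survivor
    faces the next movie on the agenda."""
    survivor = agenda[0]
    for challenger in agenda[1:]:
        survivor = winner(survivor, challenger)
    return survivor

# Mom's agenda: knock out Star Trip first.
print(eliminate(["Baseball Blues", "Star Trip",
                 "Undersea Adventure", "Cowboy Times"]))  # Undersea Adventure

# A different agenda, a different winner (given the assumed pairings):
print(eliminate(["Undersea Adventure", "Star Trip",
                 "Baseball Blues", "Cowboy Times"]))      # Cowboy Times
```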
are structured to try to get people to care about something other than
themselves—not because they have to, because it’s the right thing to do.
These threats and forms of persuasion are intended to shift cultures from
self-interested to cooperative outcomes.
[Figure: the hot dog stands’ daily moves. Day 1—Stand 1 at the ¼-mile
marker and Stand 2 at the ¾-mile marker split the market. Day 2—Stand 2
moves next door to Stand 1 and captures more of the market. Day 3—Stand 1
leapfrogs to the other side of Stand 2. Final—both stands sit at the ½-mile
marker, splitting the market.]
We could do this process for a couple more nights, but we can also jump
ahead and ask: is there anywhere they could be located where they would not
want to move again? (What do we call this? That’s right: a Nash Equilibrium!
Whoo hoo!)
The Nash would be right in the middle of the beach. Benny and Jenny
both locate at the ½-mile marker. Once again, they split the market and have
equal profits—and they do not have an incentive to move (Fig. 10.4).
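The day-by-day dynamic can be sketched in a few lines of Python. This is a minimal sketch: the uniformly spread customers, the 0.01-mile grid, and the one-move-at-a-time rule are simplifying assumptions:

```python
# Customers are spread uniformly along a 1-mile beach and walk to the
# nearer stand. Each "day," one stand at a time relocates to its best
# response on a 0.01-mile grid.
def market_share(me, rival):
    if me == rival:
        return 0.5            # tied locations split the market
    lo, hi = sorted([me, rival])
    boundary = (lo + hi) / 2  # customers left of this point go to `lo`
    return boundary if me == lo else 1 - boundary

def best_response(rival):
    spots = [round(i / 100, 2) for i in range(101)]
    return max(spots, key=lambda x: market_share(x, rival))

pos1, pos2 = 0.25, 0.75   # Day 1: the stands split the beach evenly
for day in range(20):
    pos1 = best_response(pos2)
    pos2 = best_response(pos1)

print(pos1, pos2)  # 0.5 0.5 -- both stands end at the middle (Nash)
```

Each stand keeps undercutting the other's position until neither can gain by moving: the ½-mile marker.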
Hang on… When we started the game, Jenny and Benny equally split the
market and profits. At the end, they also equally split the market and profits.
What’s the difference? Well, beachgoers now, on average, have to walk farther
to get a hot dog. When the stands were at the ¼ mile location and the ¾
mile location, the most anyone had to walk for a hot dog was ¼ mile. In the
final version, some customers have to walk ½ mile. That’s unfortunate and to
many readers, it probably seems unfair. Game theory doesn’t pass judgment.
Given the setup of the game, the final outcome might not be fair for those
not playing. We set up the game initially in the interest of beachgoers, but…
they’re not the decision-makers. They delegated hot dog-making authority,
but the hot dog stands played their own game.
Maybe we can do better, though. Perhaps there’s a surf break or restroom
beachgoers congregate around, so they’re not evenly distributed along the
beach…. Actually, the hot dog stand result can be “fixed” with something:
a regulation. The government can pass a law that requires the stands stay at
the ¼-mile and ¾-mile markers; the stands are no worse off and customers
are better off. If you think this type of regulation is far-fetched, ask yourself
why restrooms are evenly spaced at most beaches. If you think the whole
game is far-fetched, ask yourself why food courts exist in most malls. Auntie
Anne’s and Cinnabon are usually right next to each other.
The hot dog stand problem is a game theory classic, and it explains a lot
of things. Countries around the world elect their leaders in many ways. The
US, like many, popularly elects its President. In most years, that comes down
to a vote between a nominee of the Democratic Party and a nominee of the
Republican Party. Historically, when people compare the political platforms
of these candidates, they find a lot more similarities than differences. The
candidates have, like the hot dog stands, tried to locate in the middle.
This is sometimes called the median-voter model.
The median voter is the voter who falls, statistically, in the middle of the
distribution of political preferences. In modern politics, we use the terms
liberal and conservative; the electorate on the far right are very conservative,
and on the far left very liberal. In the middle is the majority of voters, and
we sometimes call them centrist. Winning an election usually requires the
candidate who can “win the middle;” that is, sway median voters toward a
particular side.
Let’s revisit one other option for the hot dog game: polarization. What if,
instead of gravitating toward the middle, the stands went the opposite way:
one at the zero-mile marker and one at the 1-mile marker, like Fig. 10.5.
If the game setup holds and people go to the stand closest to them, the
result would be the same: each stand splits the market and profits equally…
on Day 1. On Day 2, Jenny (Stand 2) figures out she can capture much more
of the market if she moves just inside Stand 1, to the 0.01-mile marker. Over
time, the game iterates to the middle the same way it did before.
[Figure 10.5: Stand 1 at the 0-mile marker and Stand 2 at the 1-mile
marker, each capturing half of the market.]
[box]
When does this approach—the appeal to the median voter—not work? Quite
simply: if you don’t need them. If voter turnout is low in a two-party system,
the winning party only needs to get more voters than the other. Put another
way: the median-voter model assumes there are more voters toward the center
of the political spectrum. If those with median political preferences don’t vote,
they are removed from the game, and they don’t really matter to the candidate.
A polarized candidate could win if her far-right/far-left voters are the only ones
who vote.
[end box]
[Figures 10.6 and 10.7: the three-stand version of the game, with markers
at the 0-, 1/3-, 2/3-, and 1-mile points.]
When they move around, trying to get additional customers, the two
stands on the ends will move toward the center, getting about half of the
market. This lasts for exactly one day, after which the stand in the middle
moves to one end or the other and re-captures some of the market. Then,
stands 1 and 3 follow. Stand 2 returns to the middle, and the process repeats
without end. Check out Fig. 10.7 for the dynamics.
For students of game theory, the game shown in Fig. 10.7 has no equi-
librium. The stands never stop moving around. In a purely game-theoretic
setting, we would say this version of the model is unstable. In a mathematical,
classroom game, there is no solution.
How does this translate to political parties? Imagine the spectrum above
isn’t a pleasant stretch of sand, but political leanings (left–right, liberal–
conservative). The winner in a three-party race never has to stand for
anything at all. They would randomly move around between sets of political
preferences; each party would switch platforms on political issues from
election to election. Why wouldn’t they adhere to a set of principles? Their
goal is to be elected. In any system where elections happen more frequently
than other outcomes (i.e., elections happen every 4 years, but economic
policies take six or more years to effect change), you won’t see the preferences of
society represented well at all. You might see voter preferences represented in
each election, but candidates will shift based on their whims. In a three-party
(or more) system, this flip-flopping is amplified.
Generally speaking, the same results occur when the leader of a country
is chosen from the party that wins a majority in the representative body:
a parliamentary system. If there are two parties, the government in charge—
the majority party—usually has time to fix things and solve problems because
they have appealed to about half of the population. If there are many, simi-
larly sized parties (some countries have ten or more such groups), the group
in charge must form a coalition to gain any power and enact any change.
This is the situation facing many representative parliaments that stipulate the
parliament has to represent every individual faction of society, usually by
religion or caste. True, there’s more accurate representation of the populace,
but there are too many voices in the room for consensus building and
policymaking.
From a game theory perspective only, this could hinder societal progress.
Aha, we invoked time. Can that help find a solution?
Well, no. Playing the game repeatedly to infinity is assumed in elections,
since no society anticipates or plans for its demise.
Here’s an interesting thought… We showed in the hot dog game that a
three-party system iterates too much for voters. It’s unstable because candidate
and party leanings change so much. We can’t say how quickly these things
change. If the shifts cause wide swings in each election, each year, or multiple
times a year, this could destabilize the whole of government. If shifts in prefer-
ences happen over decades or more—if the iteration is much slower compared
to human life—the effect would not be as destabilizing. Indeed, they could
simply represent fluctuations in society. Believe it or not, in the US, there was
a time when the Republican Party stood for environmental preservation and
the Democratic Party wanted to uphold segregation. Times change.
[box]
Boards are representative bodies, and they exist to act in the interests of those
they represent. In the case of a corporation, they represent the owners—the stockholders. In
not-for-profit organizations, they represent all future beneficiaries of the orga-
nization’s mission. In either case, Boards must navigate the short-run problems
and decisions as stewards of the company, maximizing the chance that the orga-
nization is passed on to future generations. A Board is playing a long-term,
repeated game into infinity. They are elected or appointed officials charged
with making decisions on behalf of followers.
All of this becomes tricky when you consider a Board’s primary source of
information about the organization is its current leaders and employees. Those
players are critical to the organization in the short run, but they have a clear
short-run bias when it comes to information: employees want to keep their
jobs. They aren’t likely to reveal bad information that the Board might need
for maintaining the long-term health of the organization, especially if that
information threatens their job.
Human nature is embedded into complex sequential games, often with infor-
mation asymmetry. Boards need to act on reliable info, which the employee
has but probably doesn’t want to reveal. Boards have to rely on screens, signals,
and numbers to inform each decision. If that all sounds hard to figure out,
welcome to the world of the Board of Directors. [end box]
payoff matrix. “Being a member of the winning team” is, hopefully, a major
motivator, but knowing you are an important member of that team win or
lose can be a stronger one.
In professional sports, a player’s ability to change teams—to sell their
services to the highest bidder through free-agency—is often viewed as a
dollars-and-cents thing (players being motivated by money). Many free agents
will tell you it is usually about control. The desire to change teams might
come from motivators like respect, independence, mobility, or a degree of
personal glory. The decision may result in more money, but it is rooted in an
individual’s ability to choose for themselves.
In the workplace, free-agent movers might be looking for an organization
that better understands the many other teams the player is on
(cough*remote work*cough), rather than higher pay. An organization’s
responsibility is to consider the skills it needs to grow, the ability of
each player, and the player’s payoff matrix. In almost all cases where a team
member can choose whether to play for you or someone else, success depends
on the ability to understand all of the teams that the player is on.
[box]
Team Conflict
When a person’s job regularly requires her to risk her life, she often finds
herself having to make decisions that conflict with other teams she is on.
For the US general public, the national divorce rate remains at about 50%.
The rate for First Responders (police, fire, fire-medic) is closer to 75%
(firstrespondersinitiative.org). As much as we wish otherwise, a person who
is “dedicated to family” is going to find it difficult to risk her life every day in
a risky profession.
As a team leader, you don’t have to be part of every other team that your
players are part of to understand that you need to include motivations that
exist in your players’ payoff matrix. A team leader might be single and have
no children while the players are all married with kids; that works fine as long
as the complicated choices and dynamics that affect players with families are
things the leader can understand. Collective-action games often overlap and
even collide. This is when we have to remember that all groups are composed
of people, and human constraints usually win over other motivators in the long
run.
[end box]
Conclusion
Collective-action games are different to be sure. The individual, in many
cases, finds out that his choices and decisions do not change the outcome.
Sometimes that means you can “hide among the crowd,” be one member of
the mob, the herd, or the throng that’s acting inappropriately, figuring you
won’t get caught. Other times, it means you can feel powerless: your vote
alone doesn’t count or won’t change the outcome, so why vote?
To mitigate self-interested outcomes, we’ve created societal norms, rules,
and laws to dissuade the individual from acting on these beliefs.
Rules and laws were created when people saw the advantage of inventing
governments to take the place of individual controls. Standards—generally
agreed upon—moved societies away from a more lawless, random existence.
In the modern world, we aggregate individuals who are themselves too small
to have any power into a social superstructure that we imbue with power.
Regardless of how it operates—and regardless of its political philosophy—
a government is an invention of people created to solve problems for people,
and many of those problems come from game-theoretic situations. Government
isn’t a thing unto itself. It is the people who work for the government who
create its structure, and those people have human, often self-interested,
approaches to problem-solving. They are also just people: people
who have gas, a hangnail, or their sock sliding down in their shoe. As much
as we wish the employees of the government were perfect or superhuman or
completely benevolent, they are just people after all.
Questions
10.1. There’s a lot of gamesmanship in taxation. At any given time, there are
things that the government wants to do or to buy, hopefully because
those expenditures make society better off. Governments must pay for
things and that usually means we all have to pay taxes. As it turns
out, we don’t all have to pay taxes; the government can and usually
does impose a different tax burden on different citizens for a variety of
reasons.
Suppose the citizens of a city elect their representatives democrat-
ically and the city is comprised of lower-income people and upper-
income people (two groups with no one in between). Representatives
always try to lower or eliminate taxes for the people who elected them.
What would go wrong if the population was comprised of 80% lower
income and 20% upper-income voters? What would happen if the city
was 80% upper income and 20% lower income? Given these results,
should anyone vote for a candidate who promises to only tax the “other
person?”
10.2. Party policies are those that one party supports specifically to annoy
(or hurt) the other party (as opposed to simply having different
approaches to bettering society). Assuming we all want a better society,
should we vote for candidates who support party policies?
10.3. Voting is one of those things that we take for granted in the US. We
don’t always think hard about what we’re voting for as much as who.
In current media, personalities rule.
The US also has a bicameral legislature. Fifty states get two
senators each, plus a number of House representatives based on state
population. The House of Representatives has 435 members; the
Senate has 100. One part of the
legislature is rooted in equity between states; the other is rooted in
equity within each state. Laws need to be passed by both bodies of
Congress. So: who should you care about more when you vote: your
house representative or your senator? Why?
10.4. Consider the Hot Dog Stand on the Beach game as applied to elections
and suppose people are distributed along the beach (left, center, and
right). If there are candidates from the two major parties (let’s call them
Left and Right) who, during the national campaign, have gravitated to
the political middle and a third-party candidate enters the race, who
would win if the candidate was left leaning? If the candidate was right
leaning? Given these results, which national party would support which
third-party candidate?
10.5. Mob mentality is often a justification for bad collective actions, like
rioting. It is difficult to be a voice of reason if your voice can’t be heard.
We’ve talked about carefully defining the term irrational . Focusing on
looters: are they actually acting irrationally?
11
Granny’s Gold
Granny was really smart. She, along with all the creators of parables, songs,
and sayings, gave us a lot of good advice.
In these pages, we tried to formalize and contextualize this timeless wisdom
and illustrate just how much game theory there is in the world around
us. Game theory is a process, a way of looking at situations—arguably,
most decision-making situations that you find yourself in—and systemati-
cally figuring out the structure of the interaction. What do you know about
the other decision-makers? What makes them tick? Do you know the possible
outcomes of the game? Are they best represented by something tangible like
money or something intangible like jollies? What are the rules? Who chooses
what and when? Are there ways to manipulate the game and its outcomes?
Can we use commitments, promises, or threats?
We also looked at the ways human nature influences how we play games.
How do you play differently when you’re tired, hangry, or in a hurry? How
do these very human feelings lead us to inferior outcomes?
Game theory is just a way to formalize what Granny was trying to teach
us all those years ago.
Once we open this door, we find many more game-theoretic sayings and
lessons in the world around. In this chapter, we’ll visit just a few more.
Turn Signals
One of the best examples of a game theory signal is an actual signal: a car’s
turn signal. On a good day, the device lets you know what the car ahead of
you is about to do so that you can plan accordingly. That’s the whole idea.
Think about how frustrated (and angry!) you get when someone leaves
their turn signal on for an inordinate amount of time. You’re driving down a
city street or zooming down the freeway and the person in front of you has
had their left turn signal on for ten miles. You’re probably showing your irri-
tation by saying inappropriate things about the other driver’s obliviousness.
What’s the real problem? The person’s signal isn’t telling you anything! A turn
signal signals action, but since you have no idea of their intentions (when and
where they’re going to turn), it’s not helpful information at all.
Do you believe the signal and slow down? Or, do you avoid passing? Do
you ignore it, like we said before, and call it a diluted signal? If you have to
guess what a signal means, it’s unreliable.
someone’s birthday celebration. You buy the presents and invite people to the
party today, and you don’t tell the birthday girl. You keep it a secret. You are
actively trying to give her more jollies (through surprise) on her special day.
You often have to lie: “no, I didn’t get you anything special and we aren’t
having a surprise party,” because the game is sequential. And your jollies in
the future are connected to the jollies that you’ve been able to create for the
birthday girl!
Similarly, you write a recommendation letter for a colleague who is
“secretly” looking for a new job. You keep the secret (let’s assume you do)
because….? Again, there’s a sequential game here in which you want to be
thought of as both a reliable friend and trusted colleague. You might need to
ask for a letter yourself someday in the future, and you can ask that
friend because they know they can trust you. Secrets are just another form
of asymmetric information in sequential games that you don’t reveal because
revealing the information would lead to a bad outcome for you in the future.
Lies are also forms of asymmetric information. They can change outcomes
in both sequential and simultaneous games. The innocuous “white lie” and
the acceptable bluff are intended to change the other players’ choices in a
way that might even make the other player better off in the long run. Telling
a small child a white lie about where grandma had gone (forever) might be
better for him than telling him the truth, simply because he’s too young to
comprehend death. Bluffing is lying to get something for yourself—a raise
at work, the pot in a poker game—but both parties know that’s part of the
game and it doesn’t always work.
Less innocuous lies are far more problematic. They can change the outcome
of games, often in your favor and at the expense of others—but over time,
they tag you as untruthful and unreliable. In repeated games, lying will likely
make you worse off in the long run. Aesop knew the difference and tried to
communicate it in the fable “The Boy Who Cried Wolf.” What a child is
being taught in this (slightly horrifying) story is that if you outright lie for
no good reason whatsoever, you might get eaten by a wolf.
When the child who hears this story grows up and says to mommy, “what’s
the difference between the little boy who cried wolf and you when you lied
to me about Santa Claus?” what will you tell them? You probably won’t use
these words, but what you’ll tell them is something like “the situation, the
game, is different.”
The lightbulb moment referred to in the introduction is upon us: game
theory is everywhere, even without the specific numbers and formal aspects of
a mathematical game. If this book has worked, you may be running through
a game theory analysis for simple things like the question above.
11 Granny’s Gold 203
Your colleague asked your mutual boss for a reasonable raise and, as part
of the negotiation, said she would have to start looking for another job if the
raise was turned down. Thinking this was a bluff, your mutual boss turned
her down, she did start looking, and found this potential new job. That’s why
she’s asking you for the secret recommendation letter.
Losing your colleague will be bad for the company and maybe even you,
from a “team” perspective. If we look back on the game, we can see everyone
had an opportunity to respond to accurate, full information—they just made
bad choices. What do you do? If you tell your boss it wasn’t a bluff and
he’s about to lose this important person, you might make a difference. Then
again, your colleague will know you can’t keep a secret…
Now, before you throw up your hands and say this is a no-win situation,
think about all of the choices you could have made. You could have tried
to tell your boss you don’t think the colleague was bluffing before you knew
she found another job. You could have told your colleague that you don’t
want to write the recommendation letter because it puts you in a difficult
position. It’s true, you’re caught in the middle, but that also means you have
more choices in front of you. Classroom game theory doesn’t give you these
chances to “make something right,” but the real world does.
That’s all interesting stuff if we’re talking about investments, but what does
it have to do with game theory? Let’s relate this expression to mixed signals.
Suppose the stock you could make money on produces something you don’t
approve of (cigarettes or land mines, or something like that). If your hunch is
correct, you’ll make money on the option but your jollies might not rise; you
feel bad getting rich in certain ways. In that case, hunch or not, you would
be better off without the option.
An option is just a mixed signal. Recall the student who gets accepted by
Hoity-toity University. He doesn’t have to go there; he simply has the option
to do so. If he’s good enough to get accepted at Hoity-toity, he probably
also got accepted to many other excellent schools. Surely, just the option to
attend Hoity-toity can’t hurt him, right? Put another way: is having too many
choices a problem?
In game theory, the answer is yes. As we saw in some of the more
complex games with more than one choice, there’s a better chance of making
the right choice (in both the long- and short-run) when we can at least
remove some choices today. In a sequential game where so much is unknown
about a person’s future after college, having the option to attend Hoity-toity
University may not really make the student’s life easier.
The Granny expression that fits here is “be careful what you ask for, you
might get it.” Arguably, the student’s problems—in the form of difficult deci-
sions—started when he applied to Hoity-toity. Asking for that option puts
the student in the difficult position of having to make a choice (to attend or
not to attend) in a game whose results won’t be obvious for many years.
Note: the current state of higher education in the US necessitates that
students apply to any and all programs they remotely qualify for. This is
a costly process, not only in time and grooming, but financially. In fact,
206 J. Prag and A. I. Prag
down, and they didn’t. It doesn’t matter. The approaching car was trying to
take advantage of the law in order to sue the other driver, and the law seems
to be on their side.
The legal justice system has had to deal with this possibility so many times
that a doctrine emerged, called the “Last Clear Chance” rule. Roughly speaking
(and this is a legal gray area), the ability to win such a lawsuit requires that
the approaching driver show they couldn’t have reasonably prevented the
collision.
“Gaming the system” is what we would call this, and it’s a risk anytime
a strategic move is codified into a rule or law. Laws run the risk of being
interpreted too literally. The driver who could have prevented the accident
is applying an erroneous version of “kill or be killed.” The courts can and
have applied their own doctrine to avoid abusing the letter of the law: “an
exception to the rule.” One driver made a mistake, the other driver caused an
accident on purpose. Two wrongs don’t make a right.
Ignorance Is Bliss
There are an amazing number of expressions that talk (or warn!) about infor-
mation. The world has certainly morphed into a better-informed place, with
technological innovations related to AI and the World Wide Web.
In a lot of ways, opportunities and information conflict with important
game-theoretic concepts like commitments. Suppose you are a mid-level
employee with a few years of experience. Your company is going through a
“rough patch” and needs “all hands on deck.” In past decades, you would
have ramped up your performance out of a sense of loyalty to the orga-
nization, and to avoid the “pain” of a new job search. Today, on the other
hand, a list of available jobs in your industry, specific to your experience, is
right there on your phone. Now, if you get wind that your company is down-
sizing, you can weather the storm and score a new job in your pajamas over
a holiday weekend.
Concerns about disturbing the established social order with easier access
to information go back centuries. The expression “how are you going to keep
them down on the farm” comes from a World War I song title that worried about
sending farmhands overseas. (Even then, the concern was winning the war!)
Movies, radio, and television all changed the availability of information and
there were concerns in each case about how the new information that they
provided would change people’s choices. Repressive mechanisms often rely
on restricting information—such as access to the unregulated internet—to
always return the ball to the right, then to the left, and repeat (or deep, then
short, etc.). In basketball, you can’t always dribble to the right one time down
the court then to the left the next time.
In these examples, a successful strategy is a set of probabilities that lead
to scoring the most points or giving up the fewest hits. Optimizing a set of
probabilities—creating order in your randomness—is part of what’s called a
mixed strategy. In game theory class, we often create games in which we can
calculate the probabilities precisely to show how these games work. In the real
world, you are almost always determining the probabilities of choices by trial
and error.
The correct probabilities would weigh, say, each of your choices against
the other person’s choices. A pitcher’s best pitch might be his
fastball. If he throws it every time, the hitters will eventually hit it. So, he
mixes in the occasional slow or curveball. The optimal probabilities might be
80% fastballs, 10% slowballs, and 10% curveballs. The key to these mixed
strategy situations is not having a pattern even as you use your probabilities.
The pitcher might throw two slowballs in a row every once in a while just to,
as they say in baseball, keep the batter guessing.
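For readers who like to tinker, the pitcher’s mixed strategy can be simulated in a few lines of Python. The 80/10/10 probabilities come from the example above; everything else (the function names, the number of simulated pitches) is our own invention, purely for illustration:

```python
import random

# Hypothetical mixed strategy from the pitcher example:
# 80% fastballs, 10% slowballs, 10% curveballs.
PITCHES = ["fastball", "slowball", "curveball"]
PROBS = [0.8, 0.1, 0.1]

def throw_pitch(rng):
    """Draw one pitch at random according to the mixed-strategy weights.
    Because each draw is independent, no exploitable pattern emerges,
    even though the long-run frequencies are fixed."""
    return rng.choices(PITCHES, weights=PROBS, k=1)[0]

rng = random.Random(0)  # seeded so the simulation is repeatable
sample = [throw_pitch(rng) for _ in range(10_000)]
freq = {p: sample.count(p) / len(sample) for p in PITCHES}
print(freq)  # long-run frequencies land near 0.8 / 0.1 / 0.1
```

Notice that the strategy is defined by probabilities, not by a sequence: two slowballs in a row will occasionally happen on their own, which is exactly the “keep the batter guessing” property described above.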
Lest you think that mixed strategies are only for sports and simple games,
there are life circumstances that allow you to use these skills. Spontaneity in
relationships is occasionally good. So is getting out of your comfort zone
or being the designated driver. As long as it doesn’t sacrifice reliability or
predictable safety, “changing it up” is a tried and true strategy in many human
relationships.
Holding Grudges
It makes sense that you are more likely to make a cooperative choice and
expect the cooperative choice will be made in return, when the other player
is a family member, neighbor, or long-time friend. As long as the games are
Prisoner’s Dilemma style games, both of you are better off in the long run
(many repetitions) when you both make the cooperative choice. It also makes
sense that you’re much less likely to make the cooperative choice when the
other player is a total stranger and you’re only playing the game one time.
What about people that you cross paths with on occasion, off and on
during your lifetime, such as professional colleagues or industry peers? You
might still play the game many times, but neither of you knows how many
times or when the last play might be. In cases like this, it’s not uncommon for
one player to make the noncooperative choice, getting the biggest one-period
payoff and saddling the other player with the lowest one-period payoff. After
all, who knows when, if at all, you’ll interact again?
What would the player who got the low return do after this happens?
That’s easy: they would hold a grudge. A grudge is a comeuppance that
you hold onto until the next time you interact with the other player. There
might never be another chance for revenge, or you might “get over it.” For
some people and some indiscretions, a good grudge can last a lifetime. The
cautionary tale of game theory is this: when you think about making the self-
interested play in an intermittently played, Prisoner’s Dilemma style game,
remember—your punishment might come many years later.
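That cautionary tale can be sketched as a toy repeated Prisoner’s Dilemma against a grudge-holder. The payoff numbers below are standard textbook values we have assumed for illustration (they are not from this chapter): mutual cooperation pays 3 each, mutual defection 1 each, and a lone defector gets 5 while the betrayed player gets 0.

```python
# Toy repeated Prisoner's Dilemma against a "grudger" who cooperates
# until betrayed once, then defects forever after. Payoffs are assumed
# textbook values, not figures from this chapter.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def play_against_grudger(my_strategy, rounds):
    """Total payoff for `my_strategy` (a function round -> 'C' or 'D')
    over `rounds` interactions with a grudge-holding opponent."""
    betrayed = False
    total = 0
    for r in range(rounds):
        mine = my_strategy(r)
        theirs = "D" if betrayed else "C"
        total += PAYOFF[(mine, theirs)][0]
        if mine == "D":
            betrayed = True  # the grudge begins
    return total

always_cooperate = play_against_grudger(lambda r: "C", 20)
defect_once = play_against_grudger(lambda r: "D" if r == 2 else "C", 20)
print(always_cooperate, defect_once)  # 60 vs 11
```

A single early defection earns the one-period jackpot (5 instead of 3) but triggers punishment in every later round: the “your punishment might come many years later” point, in numbers.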
[box]
The problem with holding onto this type of grudge is it can lead you to make
incorrect choices in future versions of the game. Let’s say life goes on after the
decision; both you and your employer have to live with the decision. Now,
suppose things don’t go well at the firm. For whatever reason, they made the
wrong choice, and you get another bite at the apple. You’re offered that job
again. You feel vindicated; your inner voice wants your outer voice to say, “I
told you so.” Your long-festering grudge tells you to take the job now that you
have a chance. Should you?
Probably not. At the very least, be aware of the reality that this is a different
version of the game with a different payoff matrix. Getting jollies from being
right—feeling vindicated as we said—comes from getting the offer. Taking the
job means you have to deal with the state of the firm now. This is not the
company you originally applied to. Your jollies going forward (based on your
decision to accept the job) will be determined by your ability to succeed in this
new environment—not in the earlier version of the game that didn’t go your
way. We’ve used this expression before, but it applies here more than ever, “be
careful what you ask for, you might get it!”
[end box]
On Being Fooled
Granny loved talking about fooling and fools:
If a game-theoretic situation does not turn out the way you hoped or expected,
it is sometimes referred to as being fooled. It happens on occasion. You cannot
possibly know every person and their payoffs in every situation. That fact, by
itself, does not mean you did anything wrong; you’re human.
What we should have learned from this book is being fooled on occasion
doesn’t make you a fool. Being fooled in the same situation, by the same
person, in the same way, repeatedly does. Einstein famously said, “Doing
the same thing over and over and expecting different results is the defini-
tion of insanity.” Humbly, game theory can teach us ways to enhance our
information, learn in repeated games, and get better outcomes over time.
Being fooled might be an application of what we called asymmetric
information. The other player had information that was important to your
outcome, and they chose to withhold it or lie about it. Being fooled might
also mean you weren’t using game theory in a situation that called for it; you
believed that your choice alone determined the outcome when in fact, the
choices of other players also mattered.
You can even extend the idea of fooling players in a sequential game. If
every player makes their choice—buys something—because they were fooled
a little, and then turns around and sells the thing in question to a
new player by fooling them a little more, the end point is usually fooling the
last buyer a lot. That’s the nature of what’s called the Greater Fool Theory.
The last person who mistakenly chooses something that’s actually not the
correct choice will likely be the biggest fool in the group.
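A toy resale chain makes the Greater Fool dynamic concrete. All of the numbers below (the “true” value of the asset and each buyer’s markup) are invented purely for illustration:

```python
# Each successive buyer is fooled a little more: they pay a markup over
# the previous price and plan to resell at a further markup. The true
# value and the markups here are invented for illustration.
TRUE_VALUE = 100.0

def resale_chain(markups):
    """Return the price paid by each successive buyer."""
    price = TRUE_VALUE
    prices = []
    for m in markups:
        price *= 1 + m   # each buyer is fooled by a further markup
        prices.append(price)
    return prices

prices = resale_chain([0.05, 0.05, 0.10, 0.20])
overpayment = prices[-1] - TRUE_VALUE
print([f"{p:.2f}" for p in prices], f"{overpayment:.2f}")
```

The last buyer’s overpayment (about 45.53 here) is the accumulation of everyone else’s small deceptions: if the price ever snaps back to the true value, the biggest fool in the group absorbs the whole loss.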
In any case, you hopefully learned from Granny, from scriptures, from
Aesop, and from Einstein the importance of thinking and acting differently
when you’re in game-theoretic situations and considering how important
accurate information is in these circumstances.
A No-Win Situation
In real-world situations, you are sometimes put in the awkward position of
being unsure about what to say; what to signal in a game-theoretic situation.
In an application of the expression “discretion is the better part of valor,” it is
sometimes better to avoid the situation rather than have to make the choice.
Consider this scenario: a colleague at your company asks you to go to
lunch and says they want to talk to you about making a professional connec-
tion for them with a close friend of yours, the CEO of a hot new tech
company. You don’t really think this colleague is a good fit (or maybe you
think even less generous things), so you don’t want to make the connection
for him. You also have to interact with him at work on occasion. What do
you do? You can lie to him or lie to your CEO friend about him or some
other uncomfortable thing. Or, you can politely turn down the lunch offer.
In life, there are games you don’t want to play. We often refer to these as
being in a “no-win” situation, but it’s only no win if you play.
same). We prefer outcomes we can see today rather than esoteric, phantom
outcomes that might happen tomorrow.
The medical profession has given us another way to think about this
problem. Granny was talking about a difficult decision once and you heard
her say, “that’s a hard pill to swallow” (or possibly “that’s a bitter pill”). As a
concerned grandchild, you asked if she was sick and she said “no, dear.” What
Granny meant is, there are choices in the short run that are unpleasant, but
they lead to the best possible future outcome. Making the hard choice today
goes against our nature, but it may still be the right choice. All the more
reason to learn game theory.
Except, swallowing a bitter pill is not automatically a game-theoretic situ-
ation. A game needs two players. To be a one-player sequential game, the
player has to make a current and future choice. That can be a game. Take
two players: a patient and a medical doctor. The doctor tells the patient to
take a pill every day to be healthy in the future. The patient, who isn’t sick,
thinks it’s a waste of time and money. Short-run negative jollies are clouding
much higher long-run positive jollies. We should take vitamins, but few of
us do.
Ask any toddler who’s pitching a fit about taking the bad-tasting medicine
that they need to cure their illness (or better still, getting a shot). Making
the correct first choice is difficult if that choice is unpleasant. Finding ways
to make treatments less onerous is important for drug producers, doctors,
and other healthcare providers. As part of the game of healing, they need to
be aware that the negative jollies people perceive from that “hard pill” are
inhibiting, sometimes even preventing, a desirable outcome for the patient.
Consider the soft skill known as bedside manner. Having a good bedside
manner means you’re good at communicating and connecting with your
patients. You can empathize with them. You understand their negative jollies.
From the doctor’s perspective, only being right or technically trained doesn’t
always get the patient to make the right choices. The arrogant surgeon might
tell the patient, “you have to cut to cure.” She might be right, but she’s
also making the patient think much more about the cut than the cure.
A good bedside manner helps doctors understand jollies from the patient’s
perspective, helping get the patient past their fears and on the path to cure.
There’s another wise expression: “you have to take your medicine.” It refers
to accepting punishment for a mistake. Medicine as punishment seems odd,
but in earlier times, very little attention was paid to how medicine tasted
(spoiler alert: it was bitter and gross). Getting a desired outcome in a health-
care situation could be accomplished by making the medicine seem less like
a punishment; an easier pill to swallow.
[box]
Dieting
Most adults have had the experience of trying to lose weight in a world
where the BEST foods we enjoy have a lot of calories. Food producers, our
internal jolly generators, and game theory conspire against us during most
diets. Knowing many of us have what we call “a weakness” for salty, sweet,
fried, and crunchy foods, producers fill store shelves and café menus with them.
With these readily available sources of short-run jollies at hand, we revert to
human nature. The long-run path to health is uncertain, and the ice cream is
right there! A lot of us make a short-run decision that takes us away from our
long-run desired goals.
[end box]
to get housing and secure better future outcomes. Here’s the problem: all
rules in sequential games start with the next play. They make outcomes fairer
going forward. They don’t do anything about the past. In that sense, these
laws and rules start “from here.” You have to use more balanced criteria to
select future tenants, but you don’t have to evict the ones you already have.
One could argue that fixing these sorts of problems going forward is often
the only thing you can do. Even in the most progressive society, you can’t
(shouldn’t) force a landlord to evict people of one group to make room for
another. Making sure things are fair going forward might be as easy as not
allowing the use of irrelevant, biased screens. Fixing the imbalances that the
unfair rules already created is much harder. Using the structure of a sequen-
tial game, you would have to go back to the first time the unfair process
was applied, retrospectively change the rules, and figure out what outcomes
should be. That’s impossible in most situations.
The “from here” problem exists in a lot of places. There are many sequen-
tial games in which the starting point is arguably rather subjective or even
arbitrary. Consider how many college sports teams are ranked (which impacts
things like recruiting, NCAA funding, and their position in postseason tour-
naments). A group of “experts”—head coaches in the sport or writers who
cover the sport—rank teams in the preseason based on what they would say
are informed guesses at future performance. After teams start to play, rank-
ings change and become more realistic. The preseason, made-up rankings are
the “from here” starting point and will affect rankings for the entire season.
Being able to say that you beat a Top 10 team matters even though the team
was not necessarily worthy of the Top 10 based on anything real.
Another, somewhat scarier version of this problem exists in asset markets
(stock markets, housing markets, etc.). Stock prices react to news about
corporate earnings and the overall economy. Unexpected, good earnings will
make the price of the stock go up in most cases. That means the news will
make the stock price go up “from here.” What if “here” is wrong—perhaps
already too high? Finance people often talk about asset prices that have moved
too far too fast; a stock is said to be overpriced relative to its fundamentals.
Asset markets—both stock markets and housing markets—have experienced
what are called bubbles; valuations that are unsustainably high. Bubbles could
be analyzed as sequential games with a “from here” problem. Just like the
biased decision process earlier for rental housing, it’s very hard to go back
and figure out what things should be worth when an asset bubble bursts.
[Robert’s decision tree: Robert gets to the airport. 1 hour early: boarding takes 2 hours, −$500 (and cost of rescheduling); boarding takes 1 hour, −$500. 3 hours early: boarding takes 2 hours, −$1500; boarding takes 1 hour, −$1500.]
early and occasionally misses a flight. He’s perfectly fine playing the odds of
how long boarding takes. How often does one actually miss a flight?
You make much less per hour (sorry). Unlike Robert, you make deci-
sions based on jollies which include income lost and other things, like your
personal anxiety about missing a flight. We can convert dollar costs to jollies
and still use the same hourly rate as Robert, but we can include negative
jollies when you worry about missing the flight and a lot of negative jollies
when you actually do miss a flight (Fig. 11.3).
This is still probabilistic, but you can understand why you and Robert
make different choices. Nobody is right or wrong. It’s completely rational for
a person to pay more money to avoid a lot of anxiety.
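One way to see why neither of you is wrong is to compute expected payoffs from the two decision trees. The dollar and jolly numbers below come from the figures; the 10% chance that boarding takes two hours is our own assumption, purely for illustration:

```python
# Expected payoffs for the airport game. The payoff numbers are taken
# from the decision trees in the text; the probability that boarding
# takes two hours is an assumed 10%.
P_LONG_BOARDING = 0.10  # assumption, not from the book

def expected(payoff_if_long, payoff_if_short, p=P_LONG_BOARDING):
    return p * payoff_if_long + (1 - p) * payoff_if_short

# Robert, in dollars (ignoring the rescheduling cost):
robert = {"1 hour early": expected(-500, -500),
          "3 hours early": expected(-1500, -1500)}

# You, in jollies, with anxiety about missing the flight included:
you = {"1 hour early": expected(-3000, -1000),
       "3 hours early": expected(-150, -300)}

best_for_robert = max(robert, key=robert.get)
best_for_you = max(you, key=you.get)
print(best_for_robert, best_for_you)  # 1 hour early / 3 hours early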
This game also lets us end with an expression that isn’t always right: “all’s
well that ends well.” In this game, and in much of our book, we’re high-
lighting the importance of how people feel about the outcome of a game: the
jollies that they receive in the end. How you decide on a choice can and does
affect your jollies. We’ve included things like honor, empathy, fairness, and
now anxiety in the final payoff matrices. You can also include things like coer-
cion, pride, and apathy. We consider these feelings when we specify a game
for ourselves, intrinsically. We don’t always know we’re doing it, because we
don’t always understand our motivations, but how you feel about making a
decision always affects your payoffs.

[Fig. 11.3 Your decision tree: You get to the airport. 1 hour early: boarding takes 2 hours, −3000 jollies; boarding takes 1 hour, −1000 jollies. 3 hours early: boarding takes 2 hours, −150 jollies; boarding takes 1 hour, −300 jollies.]
Conclusion
Game theory is tricky and complicated even if you don’t make it mathemat-
ical or overly analytical. You have to make choices in situations where other
decision-makers are also making choices and the outcome depends on both
of these decisions. These situations happen every day and by the time you
become a functioning adult, you learn to navigate them without reading a
game theory book or taking a game theory class.
Before looking to mathematics and economics to complete your game
theory education, remember that you were taught to make decisions by many
people who came before you. People who dealt with similar problems and
situations and may have learned the hard way—by getting a bad outcome—
that their approach to the situation was flawed. You have already learned from
these previous mistakes—it’s why we study history in all its forms.
Understanding the structure of a game and the determinants of payoffs
(what makes people tick) requires insight and reflection. Learning game
theory and using it well is a life-long pursuit. If you want to help those who
come after you, pass along what you’ve learned and make the lessons easy to
remember—and never underestimate the power of a good expression!
Questions
11.1. Consider the following expression: “all’s fair in love and war.” Focusing
on the choice of words in this well-known caution, what word makes
you think that this has something to do with game theory? Which
aspect of game structure is it addressing? Finally, if the expression is
literally true, and knowing what you know about war, what is it saying
about the stakes (the payoff matrix) of “the game of love?”
11.2. We discussed expressions throughout this book that are dated or have
otherwise lost their context or reference point. “A stitch in time saves
nine” and “You reap what you sow” are examples. Let’s try to connect
some modern acronyms to what we know about game theory:
• YOLO. How is this related to sequential games?
• FOMO. How is this related to understanding your player?
• IMHO. How does this defer responsibility to another player?
11.3. Our book touches on some very personal and private areas of life.
Granny wasn’t as much of a prude as some people think; to para-
phrase a bit, “she saw some stuff.” Suppose you told her that you and
another person were “friends with benefits.” What would she worry
about, aside from the biological consequences? What changed from
her time to yours that negates some of her concerns?
11.4. Game theory tries to get people to think more clearly about the
process of mutually dependent decision-making; your choice and mine
together determine our outcome. Many other things are involved in the
process of decision-making, including skills and statistics. Some people
also rely on sketchy approaches to determining outcomes—like super-
stitions. We often do silly things because they worked once: I wore
socks inside out during the World Series and we won, so I guess I’m
doing that every time. As long as you don’t take these things too seri-
ously, applying superstitions to a random process doesn’t do any harm.
Does that logic extend to game-theoretic situations? What’s wrong with
thinking, “as long as I’m wearing my lucky hat, I’ll get the best possible
outcome” in a game?
Answers to Discussion Questions

1.1. Homework and practice are ways of building skills. The end result
of these skills is better life outcomes; having the ability to excel at
something that “pays off.” As we’ll see, life is a sequential game where
choices and decisions today allow or preclude future outcomes. Parents
know, because they’ve already gotten there, that today’s skill-building
becomes tomorrow’s paycheck.
1.2. You didn’t listen to Granny about a fool and your money. The palm
reader, on the other hand, is assuming that no one will do unto her as
she has done unto you!
1.3. The biggest single problem—the one word that really makes this
expression a very bad application of decision-making strategy—is
“sight.” Considering the long-term nature of a relationship, the small
and fleeting amount of information that one can glean from the “first
sight” is insufficient to base a long-term commitment on.
1.4. “Lucky.” Most of the skills you will need in that game are learned in
classes on probability and statistics.
1.5. Game theory is about decision-making in situations where there are
multiple, inter-connected, decision-makers. Competitive sports often
include game-theoretic decisions, but sports are usually based on
physical skills.
2.1. Based only on things we’ve discussed so far (reputation, repeated
games, and other possibilities could have something to do with this,
but we’ll talk about those in later chapters), empathy outweighs
money for you in this case.
© The Editor(s) (if applicable) and The Author(s), under exclusive 225
license to Springer Nature Switzerland AG 2024
J. Prag and A. I. Prag, Useful Game Theory, Palgrave Executive Essentials,
https://doi.org/10.1007/978-3-031-75154-7
226 Answers to Discussion Questions
2.2. “If.” Game theory is the process of incorporating other players’ choices
into your decision-making process. If you know with certainty what
the other player is going to choose, this is much more straightforward.
2.3. In the real world, you must figure out what’s in people’s payoff matrix:
what gives them jollies. They also have to figure out your payoff
matrix! That things with Janet work out reasonably well after twelve
years is, you’ll pardon the neighborhood pun, a two-way street.
2.4. Coordination game results require a mechanism—a shared approach—
for reaching a solution. Kids need to learn how to start games; how
to make rules that are inclusive. These things require experience with
starting and playing games. Adults help small children do this early in
life and those successes give kids experience in structuring games.
2.5. Many people start—with help from an understanding restaurant—
with choosing categories. Meat or fish or pasta or a big salad? Maybe
appetizers for the meal? Once you’ve made your category choices, you
can strategize with others at your table in hopes of getting a “taste” of
things that weren’t in your category of choice.
3.1. The couple’s alcohol-fueled commitment skipped far too many steps.
The couple did not have enough information about each other for
the marriage commitment to be successfully maintained. Put
differently, one or both people almost certainly discovered that their
marriage partner was not well-suited as a permanent source of every-
thing that a happy marriage requires.
3.2. Obviously this question has many possible answers. Let’s assume that
they want to watch something together (so the answer is not grandpa
watches fishing on TV while grandson watches sci-fi on his phone).
The “been there, done that” approach that we discussed would say
grandpa should agree to watch something other than fishing. Perhaps
the science fiction show, perhaps a third option. But grandpa knows
what it’s like to be fourteen; grandson has no idea what it’s like being
sixty-five.
3.3. We’re going to go with learned. There are so many things that change
as a person gains experience that it’s hard to imagine a functioning
decision-maker is relying on the small number of jolly generators that
are hard-wired.
3.4. Understanding the source of the other decision makers’ jollies in no
way requires you to share that source of jollies. The key is under-
standing the source of jollies and also respecting the other person’s
tastes. We often hear the advice “love someone for who they are.”
Simplistic though it is, that’s a much better recipe for a happy long-
term relationship than judging everyone in your life by your tastes and
standards.
3.5. People act differently in public. Your boyfriend is less likely to
burst into tears, become verbally abusive, or display other unpleasant
responses intended, perhaps, to change your decision.
4.1. Take a picture with a smartphone and text the photo to would-be
rescuers. But this technology comes with a twist: early on in this story,
after one or two fake sightings, the kid would be told to send a picture,
and without that confirmation no one would come. Heaven help the
kid if his cellphone’s battery ran out!
4.2. We have laws that penalize people (fines, imprisonment, etc.) for selling
things under false pretenses.
4.3. “Blocking and tackling” in football are important to the outcome of
a game but they are among the least fun things that team members
have to do. That reality transfers to the business world. In general,
employees don’t enjoy doing the jobs that the boss is referring to when
she says, “we need to focus on blocking and tackling.”
4.4. It isn’t fun for anyone when you lie about things like your educational
background or your legitimate skillset. No one wants to have brain
surgery from someone who lied about going to medical school.
4.5. Singles bars. Feel free to replace “singles” with any hard-to-signal
attribute and you’ll get the idea. In most cases, these places do not
require that customers are part of the group, but they provide a place
where people can willingly and knowingly go to meet similar people.
5.1. Personal choice often conflicts with cooperative behavior. One can
make a personal choice to be cooperative of course, so where’s the
problem? The problem is in the word mandate. Being told, or ordered,
to be cooperative is different from choosing to be cooperative. And
when the issue is a communicable disease, the importance of coopera-
tion—either mandated or chosen—is clear. People other than yourself
can die because of your noncooperative choice. Mask mandates are
arguably less effective than vaccines, but they are visible, and they
have the ability to reinforce a sense of seriousness about controlling the
problem. In that way, mask mandates are both symbolically and, at least
to some extent, literally a cooperative way of controlling communicable
disease without mandating that people get a vaccine.
5.2. He could hold the door open for everyone. That seems like a reason-
able solution, but how would a woman who is offended by this
old-fashioned behavior know that he does this?
7.1. Cynical though it sounds, having lots of people at your wedding means
there are lots of people watching you, seeing you at the local bar
or restaurant with someone other than your spouse, and potentially
reporting their observations to your spouse.
7.2. The problem is in using the word “promise.” Situations can always
change, and promises do sometimes have to be broken, but if you’re
raising your child to use the word promise correctly, you might say
you plan to take her to Disneyland instead.
7.3. Yikes! Well, a credible death threat potentially takes away all of your
future payoffs so the choice between your money or your life is a no-
brainer. Credibility is key. The implicit promise is you won’t be killed
if you give up your money. If you have reason to doubt that implicit
promise, your choice might be different.
7.4. If you give in to the bully, he will bully you forever. The advice-giver is
seeing this as a repeated game where your decision today will establish
your reputation.
7.5. In the setup of a simultaneous game we say that both players are
choosing at the same time which makes promises and threats—what
we called response rules—impossible. It has been argued that you could
somehow pre-program your choices in a simultaneous game to align
with a promise or threat. The promiser/threatener would program a
computer or something like that to automatically choose something if
the other player makes a choice. While this sounds possible, it’s hard
to find any examples in the real world.
8.1. A one-period (simultaneous) game with a stranger has no way of incor-
porating reputation or long-run benefits of cooperation that counteract
short-run benefits of self-interest. One-time-only games with strangers
usually need to be played with a lot of caution and an assumption of
noncooperation.
8.2. Assuming they knew about the worst-case scenario—the accident,
etc.—what likely happened is that they drank enough to be mentally
impaired and thus didn’t correctly incorporate the future downside.
8.3. There is a perception that there won’t be any future in which you can
be punished for bad behavior.
8.4. The officers don’t live forever. Policymakers in situations like this
often focus too much on the short run—they want to be liked or
re-elected—at the expense of long-run success.
8.5. No hell means no lost jollies forever, just more jollies in heaven or no
more jollies. In that case, a bad actor could try to accumulate enough
short-run jollies during their lifetime to make forever jollies in heaven
unimportant.
9.1. The frontline people are also the ones who are likely to be hurt or killed
first. While not actively wanting to start World War III, a person on the
front line will still do what’s necessary to protect themselves. The high-
level leader can limit things (or not). They can agree to let skirmishes
happen on the front lines while not allowing nuclear missiles to be
fired. That said, frontline decision-makers might know (sooner) that
things have already escalated too far, as when things like chemical
weapons are used.
9.2. When the fuse is lit, it burns slowly but steadily toward detonation.
Pull out or cut the fuse—defuse the explosive—and there will be no
detonation. This is brinkmanship with a twist. The escalation of the
situation is based only on time. Once the lit fuse reaches the explosive,
there is no turning back. The bad outcome isn’t exactly at the end of a
slippery slope. The fuse burns steadily, and it can be cut right up until
the explosion.
9.3. This is the situation with voting for representative policy makers in
some circumstances. The voters are the frontline players, and they often
want a happy-ever-after society in which to live for a long time. Elected
officials are only in office for a few years so they might care more about
the short run (being liked or being re-elected). One might argue that
this is a flaw with representative government.
9.4. Leaders should care about the people being led more than they care
about themselves. The egomaniacal leader, in a brinkmanship situa-
tion, is more likely to start a war because he doesn’t care about how
many people he kills.
9.5. Eventually, you would call that player dead. If there is a point, it isn’t
in the game of chicken. Weakness and strength as concepts make a
lot more sense when we think about situations where threats are being
used to determine the outcome of repeated games where one player is
trying to avoid a bad outcome every time they play.
10.1. If the city were 80% lower income, elected officials would impose most
or all of the tax burden on the rich 20%. Here’s where we
must remember that people usually have the choice not to play the
game at all. The rich people could just leave and move to another city (we
purposely chose a city rather than a country to make this easier to do).
would be “but if you make very bad choices today, you don’t live very
long!”
FOMO is “fear of missing out.” Players often play games or make
choices because of peer pressure. A person literally loses jollies because
of their perception of what others think of them. Not being part of the
group (missing out) can make a person feel lonely or “uncool.” The
pushback against this attitude is a reminder that being part of some
groups is far worse than being left alone. There’s nothing cool about
being part of a group that hurts or abuses other people.
IMHO is “in my humble opinion.” Players may use this term to
delegate authority to another player. By transferring responsibility, the
player also defers the consequences of the action. We do it all the time
in conversation to absolve ourselves of choices made by friends and
family.
11.3. Granny knows that “benefits” are hard to keep under control. “Bene-
fits” often lead to attachments. She might say “how can you both be
sure that your benefits can stay friendly?”
11.4. Thinking that wearing your lucky charm has changed a probabilistic
outcome is innocuous because, well, it didn’t. At worst, you might play
games of chance too often if you follow this logic, but your superstition
didn’t actually affect the outcome. In a game-theoretic situation, your
belief that a lucky charm of some kind always leads to good outcomes
might make you less likely to figure out the game correctly and thus
less likely to make the correct, game-theory-based choice.
11.5. The best approach to this change is for voters to accept and contextu-
alize the flaws in those who are running for political office rather than
pretend the person has no flaws. Wishing that—like the “old days”—
no one wanted to (or could) find the skeletons in your closet allows
people with very bad “skeletons” to become political leaders.
Glossary
Agency Theory, Principal-Agent Problem The area of study that deals with problems
that arise when a decision-maker (the principal) delegates decisions to a subordi-
nate (agent) where these decisions or choices will determine outcomes for both
parties. The “problem” is how the principal can get the agent to act in the
principal’s best interest and not the agent’s own.
Assurance Certainty
Assurance Game A game with many possible solutions (equilibria) in which players
have found a coordination mechanism that allows them to reach one of the
solutions.
Asymmetric Information A situation in which one player has more information than
other players.
At the Limit A holdover from mathematics, this refers to playing a game an infinite
number of times.
Balanced Equal
Binary Choice See Choice, Binary.
Bluff, Bluffing Announcing a choice that you don’t actually intend to make.
Brainwashing Psychologically manipulating someone into believing things that are
not true or things that are not in alignment with a person’s existing value
structure.
Break-even Price The price that exactly equals the cost of production.
Brinkmanship Situations where the only available threat is generally considered to
be too big to be credible. Players often start down a path that could lead to
the threat happening. Different players and different decisions move the process
down what is often called a “slippery slope.”
Cartel A group of firms that coordinate their production and prices in order to
maximize group profits. A cartel is, in effect, a coordinated “group monopoly.”
Cognitive Bias A thought process that is skewed, often inappropriately, toward a set
of beliefs or a particular outcome.
Choice, Binary A choice set with only two options for each player.
Choice, Non-Binary A choice set with more than two options.
Choice Process The approach that a person uses when choosing between alternatives.
Choices, Choice Set The available alternatives that a player or players must choose
between.
Cold War The period between the end of World War II and the early 1990s when
the US and USSR were building large arsenals of nuclear weapons, purport-
edly for deterrence and defense, and using the threat of nuclear annihilation to
destabilize the world.
Collective-action Games Games in which individual choices and outcomes depend
on the choices and actions of a larger group.
Collective Bargaining (Power) Often discussed in relation to labor unions, people who
agree to bargain as a group rather than individually. In many cases, the group has
better bargaining power than any one individual.
Collusion Players actively coordinating their decisions or choices. Famously, busi-
nesses might do this to keep prices from falling when they would otherwise
actively compete. This is called price collusion.
Commitment A choice that one player says they are going to make before a game is
played. Commitments are usually made in simultaneous games, games in which
all choices are made at the same time. Committing to a choice is saying, in
advance, what you are going to choose.
Complements Things that are used together or at the same time. Often, things that
augment each other.
Consistent Making the same choices or decisions under the exact same circum-
stances.
Context The situation under which a decision is being made.
Control Ability to determine the outcome.
Cooperation, Cooperative Games, Cooperative Solution An active alignment of the
decision-making process by all players. We generally think of cooperation as a
decision on the part of all players to make choices that are best for the group
rather than any one player.
Coordination Game A game with many possible solutions (equilibria) in which
players must find a way to reach one of the solutions. If players can find the
“coordination mechanism,” this becomes an assurance game.
Counterpart A different decision-maker who is part of the opposite player group.
Credible, Credibility A measure of believability.
Cuban Missile Crisis The thirteen days in 1962 when the US and the USSR dealt
with the Soviet attempt to put nuclear missiles in Cuba.
Culture Clash A situation in which players have different backgrounds or belief
systems and thus do not approach things in the same way.
Decision Tree See Extensive Form.
Delegated Authority Passing decision-making responsibility to other decision-
makers.
Negotiation The situation in which players try to divide the outcome from a joint
venture.
Non-binary Choice See Choice, Non-Binary.
Observable Easily seen.
Original Boundaries The initial parameters or limitations of a situation or game.
Outcomes See Payoffs.
Passive Strategy The decision to not actively make choices.
Payoff, Payoff Matrix What players will get when all players have made their choices.
Perfect Competition An economics model in which there are many, small, homoge-
nous firms and easy exit and entry. No firm, in this model, is able to make excess
profits.
Players In game-theoretic situations, the decision-makers.
Polarization A situation in which players gravitate to extreme political positions.
Populus Candidate A political candidate with broad appeal to the vast majority of
the electorate.
Power The ability to exert influence over another person.
Preferences An ordering of outcomes based on a person’s desires.
Prejudice A biased choice often based on spurious information or skewed beliefs.
Price Collusion See Collusion.
Principal-Agent Problem See Agency Theory.
Prisoner’s Dilemma A well-known simultaneous game in which players must decide
whether to confess to a crime that they committed with a partner, not knowing
whether that partner will confess.
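Though the book stays nonmathematical, the dilemma can be sketched in a few lines of code. The prison terms below are invented for illustration (the text does not specify numbers); the point is only that “confess” is a dominant strategy: it is the best reply whatever the partner chooses.

```python
# Hypothetical Prisoner's Dilemma payoffs (illustrative numbers only):
# payoffs[(mine, partner)] gives (my_years, partner_years) in prison,
# so a lower first entry is better for me.
payoffs = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"): (0, 10),
    ("silent", "confess"): (10, 0),
    ("silent", "silent"): (1, 1),
}

def best_reply(partner_choice):
    """Pick the choice that minimizes my prison time, given the partner's choice."""
    return min(("confess", "silent"),
               key=lambda mine: payoffs[(mine, partner_choice)][0])

# Confessing is best whether the partner confesses or stays silent,
# even though (silent, silent) would be better for the pair.
print(best_reply("confess"), best_reply("silent"))  # confess confess
```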
Promise In sequential games, a choice that the player making the second choice
says they are going to make if the player making the first choice doesn’t make a
particular choice.
Prospering In general, doing very well or getting good outcomes repeatedly.
Proxy A player who makes decisions for other players, usually with those players’
preferences in mind.
Pyrrhic Victory Winning a game but at a very high price.
Rational For our purposes, following a consistent line of reasoning or using a
consistent set of preferences.
Reinforcement Strengthening a position.
Repeated Game Any game that’s played more than one time with the same setup
(same players, choices, payoffs, etc.)
Repressive Control Mechanism Any way in which societies prevent a subgroup from
playing the game. We associate these mechanisms with rules that don’t let some
people vote.
Reputation In a general sense, what other players think about you.
Revenge Exacting retribution or imposing punishment for another player’s inappro-
priate choice.
Risk See Uncertainty.
Rollback, Rollback Solution In sequential games, the ability of the player choosing
first to find a path (a set of choices by both players) for achieving a particular
outcome. A rollback solution requires the first player to consider what the second
player will choose under all circumstances.
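As a sketch only, here is rollback on a made-up two-move game (the move names and payoff numbers are invented for illustration): the first player anticipates the second player’s best reply at every branch, then picks the branch whose anticipated result is best for her.

```python
# Hypothetical sequential game (illustrative only): player 1 moves first,
# player 2 replies; each leaf holds (player1_payoff, player2_payoff).
tree = {
    "Left":  {"left": (3, 1), "right": (1, 2)},
    "Right": {"left": (2, 1), "right": (0, 0)},
}

def rollback(tree):
    """Anticipate player 2's best reply at each branch, then let
    player 1 pick the branch that maximizes her own payoff."""
    replies = {
        move: max(branch.values(), key=lambda payoff: payoff[1])
        for move, branch in tree.items()
    }
    best = max(replies, key=lambda move: replies[move][0])
    return best, replies[best]

# Player 2 would answer Left with "right" (2 > 1) and Right with "left"
# (1 > 0), so player 1 compares (1, 2) against (2, 1) and moves Right.
print(rollback(tree))  # ('Right', (2, 1))
```

With more moves, the same idea applies recursively from the last move backward.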
Rules The procedures, usually pre-specified, for playing a game.
Rules of Thumb A colloquial expression referring to choices that work well in most
situations.
Scale of Preferences The possibility that the ordering of preferred outcomes does not
represent the full extent of preference. It is the difference between saying “better”
and saying “a lot better.”
Screen In situations where there is a less-informed and a more-informed player,
this is a mechanism for the less-informed player to learn some of the unknown
information.
Second-best Solution In economics, a solution to a problem in which a superior
solution exists.
Second-mover Advantage In sequential games, a setup that gives the player who
chooses second the ability to determine his best outcome.
Secrets Information known to some players that they are not supposed to reveal.
Self-interest Incorporating only the interests and desires of yourself.
Sequential Game A game in which there is a pre-specified order in which players
make their choices. Well-known games like chess, checkers, and Tic-Tac-Toe are
sequential games.
Setup In game theory, the players, choices, rules, and payoffs for a game.
Short-sightedness A tendency to focus too much on outcomes that will occur sooner
rather than later.
Signal In situations where there is a less-informed and a more-informed player, this
is a mechanism for the more-informed player to reveal some of the unknown
information.
Signaling Theory The body of work that analyzes the transfer of information from a
more-informed player to a less-informed player.
Signal-to-Noise Ratio A mathematical colloquialism that refers to the accuracy of
information. A high signal-to-noise ratio means that information is generally
accurate.
Simultaneous Game A game in which all players make their choices at the same time.
Sincere Believable and trustworthy.
Skill An inherent or learned competitive advantage.
Social Norm In any society, the way people are expected to act. Social norms typically
come from a society’s culture, history, and shared beliefs.
Specify Indicate or dictate. When we specify a game, we understand the other
player’s jollies and the potential outcomes. Specifying a game correctly leads to
better outcomes. If a game does not land in your favor, or at an outcome you
didn’t anticipate, you mis-specified the game.
Stalemate At the end of a game, the situation in which neither player has a move
that allows them to win.
Strategic Advantage A situation such as superior information that allows one player
to make choices that will move the outcome of a game in their favor.
Index
D
Decision tree 108, 142, 200, 234. See also Extensive form
Delegated authority 163, 234
Disneyland 2
Dominant strategy 25, 26, 30–32, 39, 41–44, 50, 51, 59, 60, 84, 86, 88, 91, 107, 124, 126, 128, 130, 138, 142, 143, 145–147, 235
Dominated 40, 235
Dominated choice 40
Dominated strategy 44
Duress 5, 235

G
Game structure. See Structure
Game Theory 1–5, 7–12, 14–19, 21, 23–26, 28, 29, 34, 37–40, 42, 45–47, 49–51, 53–55, 60, 61, 63–65, 68, 71, 73, 78, 79, 81–83, 86, 93, 95, 101, 102, 104, 106, 107, 110, 111, 113, 116, 120, 123, 127, 134, 135, 138, 146, 151, 153, 155, 156, 161, 162, 164, 166, 167, 174, 177–179, 186, 187, 189, 190, 192, 193, 199–205, 209–212, 214–217, 219, 221, 222, 235, 236, 238, 239
Gaslighting 213, 235
God 152
Government 2, 114, 162, 167, 171, 174, 179, 180, 183, 186, 189, 191–193, 196, 206
Group think, Group-think 177, 180, 235
Guarantees 70, 126, 236

H
Happiness 12, 22, 27, 29, 30, 34, 45, 50, 51, 60, 83, 112, 115, 236
Heuristics 9, 46, 87, 236
Honor 22, 23, 30–32, 35, 37, 90, 158, 220, 236

I
Imbalanced 120, 236
Immutable 130, 236
Information 5, 10, 12–14, 17, 37, 41, 42, 44, 47, 52, 54, 59, 63–75, 77–79, 85, 88, 91, 93, 96, 97, 110–112, 117, 124, 126, 130, 132, 134, 138, 141, 142, 146, 163, 167, 168, 174, 181, 182, 193, 194, 201–204, 208, 209, 212–215, 217, 235–239
Information asymmetry 64, 236. See also Asymmetric information
Irrational 46, 167, 236. See also Rational
Irrationality 47, 50, 167
Irreversible 126, 236
Irrevocable 236. See also Immutable
Iterate 86, 236

J
Jollies 27–31, 34, 39, 40, 42, 45, 46, 49–52, 54–57, 59–61, 63, 65, 70, 73, 76, 77, 79, 81, 83, 84, 86, 92–94, 104, 105, 113, 115, 120, 124, 127, 129, 132–134, 157, 158, 166–168, 179–181, 184, 194, 200, 202, 205, 207, 211, 213, 216, 217, 220, 236

L
Last-period problem 146, 147, 152, 153, 162, 236
Leadership/Leaders 17, 18, 47, 54, 68, 69, 92, 94, 97, 163–166, 168, 174, 184, 190, 192–195, 212, 215
Lies 78, 152, 201, 202, 236
Lightbulb moment 16, 236
Limit 10, 14, 91, 93, 112, 117, 145, 153, 158, 163, 209, 236

M
Maximin 48–50, 93, 104, 105, 121, 170, 236
Median-voter model 187, 190, 191, 236
Mixed signal 71, 72, 75, 77, 205, 236
Mixed strategy 210, 236
Mnemonics 115, 236
Model 1, 2, 46, 95, 112, 178, 179, 192, 236, 237
Monopoly/Monopolist 181, 182, 236
Mutually beneficial outcome 4, 81, 87, 88, 163, 236

N
Nash Equilibria 28
Nash Equilibrium 37, 86, 126, 236
Negotiation 76
Negotiations 15, 19, 26, 45, 65, 76, 77, 163, 166, 167, 174, 204, 237
Repressive control mechanisms 52, 237
Reputation 30–32, 123, 135, 138, 142, 150, 157, 161, 162, 203, 237
Revenge 55, 56, 211, 237
Risk 3, 8, 46, 49, 60, 64, 69, 116, 126, 148, 168, 169, 195, 208, 235, 237
Rollback, rollback solution 106, 117, 237
Rules 2, 9–11, 16, 22, 26, 34, 35, 44, 50, 52, 56, 57, 78, 85, 87, 88, 91, 94, 96–98, 106, 117, 120, 121, 127, 130, 152, 153, 169–171, 183, 184, 186, 196, 206–208, 215, 217, 218, 235, 237, 238
Rules of thumb 9, 10, 238

S
Scale of preferences 49, 238
Screen 65–67, 70, 74, 76, 131, 132, 157, 194, 217, 218, 238
Second-best solution 77, 238
Second-mover advantage 107, 110, 111, 238
Secrets 85, 201–204, 238
Self-interest 2, 8, 10, 26, 27, 34, 60, 85, 149, 151–153, 238
Sequential games 15, 28, 33, 35, 37, 41, 48, 53, 101, 102, 104, 106–108, 110, 111, 113, 114, 116–119, 123, 124, 128, 135, 136, 142, 169, 185, 194, 201, 202, 205, 212, 214, 215, 217–219, 235, 237–239
Setup 7, 12, 17, 21, 23, 24, 37, 47, 63, 64, 81, 90, 101–103, 116, 125, 129, 141, 146, 147, 149, 156, 182, 189, 190, 237, 238
Short-sightedness 215, 238
Signal 5, 58, 64–72, 78, 105, 132, 135, 149, 194, 200, 201, 238
Signaling Theory 65, 238
Signal-to-noise ratio 65, 238
Simultaneous game 5, 35, 44, 85, 104, 106, 110, 124–126, 128, 136, 142, 169, 202, 234, 237, 238
Sincere 58, 238
Skills 10, 11, 15, 17, 18, 34, 66, 71, 76, 82, 98, 113, 168, 169, 184, 195, 203, 210, 212, 215, 216, 238
Social norm 206, 207, 238
Specify 12, 14, 34, 45–47, 54, 61, 63, 65, 75, 104, 152, 157, 177, 207, 220, 238
Stalemate 83, 238
Strategic advantage 10, 168, 238
Strategic decision making 1, 4, 169, 209, 235, 239
Strategic (game-theoretic) advantages 10
Strategic moves 123, 125, 133, 135, 136, 138, 162, 203, 208, 239
Structure 1, 2, 4–6, 13, 21, 24, 34, 35, 37, 45, 54, 61, 78, 83, 85, 91, 96–98, 111, 113, 121, 127, 137, 142, 152, 154, 155, 158, 167, 170, 171, 196, 199, 203, 206, 218, 219, 221, 233, 239
Subgame 41, 82, 83, 85, 163–165, 173, 239
Suboptimal 125, 239
Synergy 76

T
Thief 22, 27, 239
Threat 19, 51, 79, 93, 95, 123, 127–130, 132–136, 138, 154, 161–167, 169, 171–173, 186, 187, 203, 233, 234, 239
Timing 35–37, 69, 85, 102, 110, 111, 134, 185, 235, 239
Tipping point 169, 170, 239
Transfer portal 127, 239
Trust 17–19, 64, 84–86, 97, 142, 202, 239
Truthfulness 9, 239

U
Uncertainty 63, 110, 113, 116, 117, 133, 168, 205, 215, 239
Unilateral choices 51
Unilateral decision 2, 239
Unilaterally 36
Unstable 192, 239. See also Equilibrium
Utility, Utility theory 12, 22, 236, 239
utils 12, 27, 45, 239

W
Whistleblowers 171
Whistleblowing 171, 239

Z
Zero-sum game 6, 67, 81, 82, 239