Introduction
In the immortal words of Kenny Rogers: You gotta know when to hold ‘em, know when to fold ‘em, know when to walk away, know when to run.
It’s as true in gambling as it is life: From what you wear to the person you marry, your life is the result of your choices. How can you be sure you’re making the right ones?
It’s dizzying to think about all the factors that go into even small decisions.
Take one of life’s least dramatic scenarios: dinner with friends. Consider the sheer number of variables involved in such a routine dilemma:
How much is everyone willing to spend? Which restaurants are rated best? And how can you be sure the majority aren’t claiming to be “good with anything,” while secretly loathing your final selection?
Yikes. Even in a situation this trivial, uncertainty is rampant, and getting it wrong can have negative impacts on your wallet, social life or stomach. Yes, we’re overanalysing – but whether it’s life and death or pizza versus pasta, every decision has risks and rewards, probabilities and payoffs.
The good news is that there’s plenty of work out there that breaks decision-making down into basic concepts, and mastering just a few of them will leave you better equipped in both the casino and your everyday life.
Decision Theory: Nature vs. Numbers
There are two ways of understanding choice: Going with your gut, or using your brain.
While many gamblers argue otherwise, going with your gut is a quick path to chaos – an impulsive decision-making process that tends towards either the random or the patently wrong. Using your noggin seems the more obvious choice, and doing so commits you to rational analysis instead of flying by the seat of your trousers.
Let's return to dinner. This time you're eating solo, deciding between your go-to local bar and a gourmet sushi place.
The strategies here that don't involve your gut (pun intended) draw from “decision theory”, a study focused on probabilities. Decision theory recommends using basic multiplication—the value of the proposed benefit (a delicious meal) multiplied by the probability of achieving said benefit.
On a scale of 1 to 10, you'd grade the bar’s nachos as a 5. Sushi is among your favourite foods, so you score it a 10.
But let’s say the sushi restaurant has been phenomenal on one occasion and terrible on two other visits. Your probability of a 10-point experience is just 33%. The local bar, having never let you down, is 100% likely to serve up a perfectly fine dinner.
Should you shoot for the greater reward of sushi, despite the risk?
Decision theorists call this a problem of “expected utility”, where ‘utility’ simply means ‘desirable outcome’. Your expected utility for dinner at the local bar is 5 (the value of the meal) multiplied by 100%. That comes to 5. Your expected utility for the sushi spot is 10 multiplied by 33%, or 3.3. Since 5 is greater than 3.3, decision theory would typically steer you away from the wonderful but unreliable choice and towards the decent, risk-free option - the optimal balance of risk versus reward.
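The arithmetic is simple enough to sketch in a few lines of Python. The scores and probabilities are the ones from the dinner example; the function name is just ours:

```python
# A minimal sketch of the expected-utility calculation from the text.

def expected_utility(value, probability):
    """Probability-weighted value of an outcome."""
    return value * probability

bar = expected_utility(5, 1.0)       # reliable nachos: 5 x 100% = 5.0
sushi = expected_utility(10, 1 / 3)  # great but risky: 10 x 33% ~= 3.3

best = "bar" if bar > sushi else "sushi"  # the bar wins, 5.0 vs 3.3
```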
Game Theory
When more than one person is involved in a choice, game theory takes over. Game theory proposes that nearly any interaction between decision makers can be mapped out as a game or organized thought experiment. Wherever decisions overlap, game theory applies.
While decision theory is concerned with quantities and probabilities, game theory has to contend with duelling priorities and uneven, possibly unfair outcomes. It's a complex, fun field that mixes messy psychology and messy motivations with economics, and it cuts to the heart of human nature.
A History of Choice: From Pascal's Wager to the Prisoner's Dilemma
The roots of decision and game theory date back to the 17th century, when mathematician and philosopher Blaise Pascal presented this argument for believing in God:
When it comes to your stance on God, you have two choices: Believe in God, or don’t. Religion aside, what’s the rational decision?
Suppose you're dead. If God exists and you’re a believer, your payoff is infinite. You go to heaven for eternity. Congratulations!
If God doesn't exist and you’re a believer, your loss is finite and relatively small. In living a clean life, you enjoyed yourself less than you might have - but regrets are for the living and the afterlife, and neither now applies.
Now imagine you're a disbeliever. Your best payoff isn't much of a payoff at all: partying hard, then dying and not going to heaven, because neither it nor its supposed creator is real.
But if you disbelieve, and God does exist, your loss is infinite. You're going to hell.
Self-interest alone dictates you choose the strategy with the best possible payoff; like it or not, you should be a believer.
Pascal’s Wager breaks down in a world of multiple worldviews, but as pure maths (and game theory), it remains broadly relevant. We can use the same rationale to save the planet. For example: Should you believe or disbelieve in the climate crisis?
Belief, and the effort that accompanies it, will either result in saving the human race or losing some time and money to an overblown problem. Disbelief might save you a finite amount of cash and effort if climate change turns out to be no big deal – or lead to humanity’s untimely demise.
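The structure of the wager can be written down as a payoff comparison. A sketch, with placeholder values for the finite stakes (the argument only needs “finite versus infinite”):

```python
import math

# payoffs[choice][state of the world]; the finite values are placeholders.
payoffs = {
    "believe":    {"god_exists": math.inf,  "god_absent": -1.0},  # small finite cost
    "disbelieve": {"god_exists": -math.inf, "god_absent": 1.0},   # small finite gain
}

# Compare worst cases: belief risks a small finite loss, disbelief an
# infinite one, so the loss-averse player believes whatever the odds.
worst_case = {choice: min(row.values()) for choice, row in payoffs.items()}
best_choice = max(worst_case, key=worst_case.get)  # "believe"
```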
The Game is Afoot: Von Neumann and the Birth of Game Theory
To understand why humans are terrible at cooperating, we need to dig into the history of game theory - starting with the man who essentially invented it.
Mathematician John von Neumann helped usher in the era of nuclear weapons as part of the Manhattan Project. But before he was helping the military blow things up, he authored various works on game theory and economic behaviours.
Von Neumann's status as the father of game theory is based on his cold-blooded, mathematical analysis of competitive scenarios.
Along with describing interactions between decision-makers as “games,” von Neumann proved that in two-person ‘zero-sum’ games, or contests where one player's gain is the other player's loss, fortune doesn't actually favour the brave. The best strategy is the one that minimizes your maximum possible loss – his “Minimax Theorem”.
In gambling, that means mapping out the worst possible outcome—an opposing poker player holding the best possible cards for a given hand—and choosing the action that leaves you least vulnerable should that ugly outcome materialize.
More than choosing based on static probabilities - like the odds the dealer shows an ace to complete your hand – strategies for competitive games require putting yourself in your opponent’s shoes.
What happens if that ace turns up and they push all of their chips in? With enough time, every possible outcome of this situation could be mapped out, including weighing the risks and rewards of folding or staying in.
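The minimax logic can be sketched on a toy zero-sum game. The payoff matrix below is invented for illustration: rows are your strategies, columns are your opponent's replies, and each entry is your gain (their loss).

```python
# Pure-strategy minimax for the row player in a zero-sum game.
payoff = [
    [ 3, -2,  1],  # strategy A: can win big, but can lose 2
    [ 1,  0,  2],  # strategy B: never loses anything
    [-4,  5,  0],  # strategy C: biggest possible win, biggest possible loss
]

# For each strategy, assume the opponent forces your worst payoff...
worst_cases = [min(row) for row in payoff]  # [-2, 0, -4]

# ...then pick the strategy whose worst case is least bad: strategy B.
minimax_choice = max(range(len(payoff)), key=lambda i: worst_cases[i])
```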
By associating games with the larger world of economics, Von Neumann showed that most human interactions could be modelled, opening the door for smart thinkers to apply game theory to everything from the stock market to nuclear warfare. The wide-ranging implications make it no wonder that, over the years, twelve Nobel Prize winners have been game theorists.
Built to Betray: The Prisoner's Dilemma
In an arms race, competing nations devote precious resources to stockpiling weapons they may never use. They’re responding to the actions of their enemies, choosing a merely bad outcome (huge spending on unused weaponry) to avoid a worse one (defeat at the hands of a better-equipped enemy).
This goes beyond Pascal’s Wager, since armed conflict comes with multiple finite consequences ranging from minor battles all the way up to nuclear annihilation. To understand this interaction, we need game theory’s most famous exercise: The Prisoner’s Dilemma.
Imagine two criminals caught for the same crime and given a choice: betray your partner (defection), or shut your mouth (cooperation).
If one defects and the other cooperates, the defector is released and the co-operator gets ten years in jail. If both criminals defect, they both receive a three-year sentence. Finally, if both criminals refuse to rat the other out, they both get one year.
Here’s the thing: if no communication is allowed between the prisoners, it becomes rational and even inevitable that both players will betray each other. Here’s how it breaks down:
If a prisoner cooperates, he faces the possibility of the game's best-case outcome (one year, if both cooperate) and worst outcome (ten years, if his partner defects). That still falls short of the game's biggest win: freedom. Coming from a place of self-interest, this is a grim path.
If a prisoner defects, he avoids the worst possible outcome (since his maximum sentence is three, rather than ten years) and has a shot at the best payoff (sweet, sweet freedom). If you're looking out for number one, this is the preferred choice.
That makes defection in the Prisoner's Dilemma a dominant strategy: whatever the other player chooses, the defection strategy promises a better payoff. By ratting on your partner, you're also minimizing your losses in a worst-case outcome.
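The dominance argument can be checked mechanically with the sentences from the text (years in jail, so lower is better):

```python
# sentence[my_move][partner_move] = my jail time in years.
sentence = {
    "cooperate": {"cooperate": 1, "defect": 10},
    "defect":    {"cooperate": 0, "defect": 3},
}

# Defection is dominant: it beats cooperation whatever the partner does.
defection_dominates = all(
    sentence["defect"][their] < sentence["cooperate"][their]
    for their in ("cooperate", "defect")
)

# ...yet mutual defection (3 years each) is worse than mutual cooperation (1).
mutual_defect = sentence["defect"]["defect"]
mutual_cooperate = sentence["cooperate"]["cooperate"]
```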
Now swap nervous criminals for armament-happy nations. Suddenly the Prisoner's Dilemma explains the tendency for opposing powers to keep stockpiling weapons instead of levelling off early. Since both players in the game are forced to choose the dominant strategy, assuming that both sides are rational, both are going to defect. They'll act out of rational self-interest and avoid cooperation at all costs. The result is a game that's immediately locked into an escalating standoff.
The irony of the Prisoner's Dilemma is that the shared outcome is actually worse than it would be if both parties opted to cooperate. Opposing defectors each get three years of jail time, while co-operators would be out in a year – and the same goes for an arms race: resources are consumed and war is averted – but both nations would be better off having agreed to never waste the investment.
Game theory doesn't just show us the best choice in a given situation, it also shows us why bad choices are so hard to avoid.
When Stasis Saved the World: Mutually Assured Destruction
There’s an upside to the Prisoner's Dilemma. During the Cold War, game theory pioneer von Neumann applied the notion of equilibrium to the escalating tensions between the Americans and the Soviets. The policy of mutually assured destruction (MAD) encouraged the production of vast nuclear arsenals by both players.
Once both sides have the capability to utterly destroy the other, equilibrium is created – an eternal standoff. It's not that both players are cooperating, or choosing peace. They're simply choosing the option that holds the least risk. The common thread is the way the players’ choices lock into place, such that changing one's strategy is ill-advised to the point of being impossible.
MAD is a perfect example of a “Nash Equilibrium”, named after Nobel-winning mathematician John Nash, the subject of the 2001 film A Beautiful Mind.
A Nash Equilibrium is the ultimate demonstration of game theory's ability to model the gridlock that can occur among rational decision-makers.
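That gridlock can be verified with a mechanical check: a pair of strategies is a Nash Equilibrium when neither player can improve their payoff by switching alone. A sketch on the Prisoner's Dilemma numbers, with jail years negated so that higher is better:

```python
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-10, 0),
    ("defect",    "cooperate"): (0, -10),
    ("defect",    "defect"):    (-3, -3),
}
moves = ("cooperate", "defect")

def is_nash(a, b):
    """True when neither player gains by deviating on their own."""
    no_gain_a = all(payoffs[(alt, b)][0] <= payoffs[(a, b)][0] for alt in moves)
    no_gain_b = all(payoffs[(a, alt)][1] <= payoffs[(a, b)][1] for alt in moves)
    return no_gain_a and no_gain_b

# Mutual defection is the lone equilibrium -- the locked-in standoff.
```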
A Nash Equilibrium can also be positive, like a couple choosing to take the same day off work to extend their weekend vacation. The most instructive aspect of the Nash Equilibrium, and game theory on the whole, is the reminder that the world is full of other agents.
Your decisions might be conflicting or overlapping with those agents’ decisions right now. Or your decisions might seem to be made in isolation, when really the payoffs and risks involved are based on past decisions made by various agents.
In other words, you aren't alone. Just like in a hand of poker, your choices are impacted most often, and most significantly, by other choices.
From Game To Go: Turning Theory into Action
We’ve seen how decision and game theory are relevant to criminals, world leaders and Nobel Prize winners – but what about the rest of us?
It may not be obvious, but these principles can shape a surprising number of our decisions.
Take buying a car, for example. Rather than going to a dealership and engaging a salesman directly, you can first find all the dealerships who carry the car you want in a specific area, then call them all to tell them you’ll buy the car from whomever gives you the best price.
Or, consider bluffing in poker. According to David Sklansky, players should bluff with a bad hand on the river with a frequency that matches the odds they’re offering their opponent from the pot. For example, if the pot offers 3/1 at the river, bluffing one time in four – so the odds against your bluff equal the pot odds – means your opponent will lose the same amount whether he folds or calls, all things being equal. That’s a bit oversimplified (you want to adjust to a player’s history, too), but you can see how it begins to apply.
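The indifference point can be verified numerically. A quick sketch, assuming the opponent faces a pot of 3 units and a call costing 1 (the 3-to-1 offer):

```python
pot, call_cost = 3.0, 1.0  # the opponent is getting 3-to-1 pot odds

def ev_of_calling(bluff_freq):
    """Opponent's expected value of calling: win the pot against a bluff,
    lose the call against a real hand."""
    return bluff_freq * pot - (1 - bluff_freq) * call_cost

# When the odds against a bluff equal the 3-to-1 pot odds (bluffing 1/4
# of the time), calling and folding are worth the same: EV = 0.
balanced = ev_of_calling(0.25)  # 0.25*3 - 0.75*1 = 0.0
```

Bluff more often than that and calling becomes profitable for the opponent; bluff less and folding does.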
Health decisions can also be untangled using these concepts – like the decision to take a daily aspirin for 20 years to reduce the risk (1 in 16) of colon cancer. Death is about as worst-case as a loss can get, but the expected utility that comes from prolonging life can be minimized by other factors, such as the risk of stomach bleeding, or the fact that the benefits only kick in after a decade of doses. It's an easy choice for a young man with a family history of colon cancer, but for the elderly gentleman on blood thinners, the loss-to-benefit ratio is less favourable.
Game theory doesn't have to be quite so dramatic. Deciding whether to wear rain boots based on weather forecasts is an analysis of duelling losses. If uncomfortable boots will create blisters and misery throughout an entire work day, perhaps it’s worth risking a few moments of wet feet. However, even a moderate chance of rain during a walking-centric vacation is reason to at least pack some boots, since the worst-case scenario is soaked feet and a ruined trip.
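The boots trade-off is the same expected-loss comparison. A sketch with invented “misery” scores on a 0–10 scale (only the comparison matters, not the numbers):

```python
def better_option(p_rain, wet_feet_loss, boot_discomfort):
    """Compare the expected loss of skipping boots against the certain
    discomfort of wearing them all day."""
    expected_wet = p_rain * wet_feet_loss
    return "skip boots" if expected_wet < boot_discomfort else "wear boots"

# Work day: all-day blisters (6) outweigh a 30% chance of brief wet feet (4).
workday = better_option(0.3, 4, 6)    # "skip boots"

# Walking vacation: a ruined trip (10) dwarfs mild discomfort (2).
vacation = better_option(0.5, 10, 2)  # "wear boots"
```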
What's the point of all this?
The point of all this isn't to get you to do more math. Converting every decision into an equation comes with real costs—the time spent learning how to crunch the numbers and the risk of being the person who can’t commit to lunch without scribbling into a notebook for a half-hour.
The lessons here are more general:
First, prepare for the worst-case scenario. Decision and game theory most often come down to strategies that avoid being wiped out. A big win is a nice fantasy, but your priority should be in minimizing the impact of catastrophic loss. Once that happens, you're no longer making the decision—chance is making it for you.
We’re also shown the value of communication. The Prisoner’s Dilemma can be solved with an open line of communication between the players. Reason alone can lead to poor shared outcomes, and a lack of honour. Communication carries risk, which can and should be modelled, rather than dismissed outright.
Which brings us back to our tortured dinner party. The most efficient solution to that problem is not to set friends scheming against friends, but honest, non-competitive communication.
But the final, most important lesson is to make guided decisions instead of trusting your gut. Instincts are the enemy of reason, and will, on average, net you the goat instead of the cash.
Whether you're multiplying your way towards a favourable ratio of risk versus reward, or trying to outsmart another agent in a hand of poker - assume nothing, and analyse everything.
And that splits another way: the decision to game responsibly is also yours to make. Knowing your odds and analysing your decisions rationally goes hand-in-hand with knowing your limits and, as ol’ Kenny Rogers says – knowing when to fold ‘em and walk away.