**Background**

I talk a lot about the idea of becoming a competent (wo)man, and the benefits of being a jack of all trades. When I was stationed in Groton, CT in the Navy, I spent **A LOT** of time at Mohegan Sun and Foxwoods Casinos. I eventually became a poker player (this was around 2002; I was part of the Rounders generation, not the WPT generation), but before that, I also spent a great deal of time playing and studying table games.

Understanding casinos was a skill I thought might come in handy some day. I learned how to count blackjack, though the edge is tiny with 6-8 decks. MIT teams had already forced the no mid-shoe entry rule when I got started. I also learned about other table games and even developed a team craps strategy which had slight positive expectation (in certain circumstances) and was fun to implement with a couple of friends.

Many of my submarine school classmates thought it was a waste of time and money to spend so much time at the casino. I’m not sure what they did with their spare time, but I suspect that I’ve benefitted more from my casino time than whatever they were doing.

**The proposition at Wharton:**

Fast forward to today, where I’m a student at Wharton’s MBA for Executives.

In marketing class, we had a discussion about how estimates are biased by personal preferences. For example, we were asked in a survey about a preference, such as:

**1. Do you prefer Chipotle over Qdoba?**

Then we were asked to estimate the group’s preference:

**2. What percentage of your classmates prefer Chipotle over Qdoba?**

The point of the exercise is this: the first question determines the actual percentage of the group that holds each preference. It turns out that the average of the answers to the second question is an accurate predictor of the responses to question one, *but there is always a positive prediction gap.* This means that those who prefer Chipotle *tend to overestimate* the percentage of the group that prefers Chipotle, and those who prefer Qdoba *tend to overestimate the fraction of the group that prefers Qdoba.*

The takeaway seemed to be that preferences of marketers will influence their estimate of a population’s preference, so data collection is always necessary to ensure personal bias doesn’t taint strategy.

All of this is an elaborate lead-in to one particular question we received in the survey.

**The Question:**

*Would you accept a 50-50 gamble where you would win $1000 if a coin comes up heads and lose $750 if the coin comes up tails? (Assume that the stakes are real and the coin is fair).*

and the followup:

*What percent of your classmates would accept a 50-50 gamble where you would win $1000 if a coin comes up heads and lose $750 if the coin comes up tails? (Assume that the stakes are real and the coin is fair).*

**Without even taking pen to paper, I know you absolutely have to take the bet**, as much and as frequently as possible. Or even just once! I’ll go through the exercise, to give a basic understanding of how to approach bets.

*It’s worth discussing the theory behind this kind of bet, since it is (apparently) obscure!*

**Calculating the edge:**

Half the time, you will win $1000. Half the time, you will lose $750. So (0.5 × $1000) – (0.5 × $750) = $500 – $375 = $125. Each time you make this bet, you will win $125, *on average*.

What’s known as the “edge” in gambling parlance is computed by dividing the expectation by the risked capital. So in this case you have a $125/$750 or a 16.6667% edge as a player. In other words, you’ll get $1.17 back for every $1 you manage to wager on this proposition.
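The arithmetic above fits in a few lines of code (a quick illustration of the calculation, not anything from the survey itself):

```python
# Expected value and edge for the $1000/$750 coin-flip proposition.
p_win, p_lose = 0.5, 0.5
win, loss = 1000, 750

ev = p_win * win - p_lose * loss   # average profit per flip
edge = ev / loss                   # expectation divided by risked capital

print(f"EV per flip: ${ev:.2f}")   # $125.00
print(f"Edge: {edge:.2%}")         # 16.67%
```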

That may not sound like much, but many fortunes have been made in casino gaming on far smaller edges. For example, typical blackjack rules give the house a <1% edge when players use basic strategy. Baccarat's house edge is under 2%, and it is impossible to mess up as a player (unless you bet the Tie). Roulette, which is *notoriously* house friendly, has a house edge of around 5%.

Offer a real gambler a fair bet with a 16.67% edge and they won’t believe you at first, but if they do, they’ll do everything they can to pile into the bet. This is a wager that would cause real gamblers to jump in the air, click their heels, and give a rebel yell… *whether they actually win or lose the bet.* Real gamblers, and real businesspeople, have to believe in the long run.

**Bet sizing**

You might say, “Sure, but what about the risk of ruin? If you offer me a coin flip where I get paid $10 million if I win but owe $1 million if I lose, I can’t take that bet because it’s too likely I’ll go broke. What if I only get to do this bet once?”

There’s an answer: the Kelly Criterion. It was developed at Bell Labs by John Kelly, building partly on the work of Claude Shannon, who developed what’s known as the Shannon channel capacity equation. Shannon’s equation helped determine all sorts of things, such as how much data a modem can push over a telephone line before data losses get too high and error correction has to kick in. (There’s an interesting extension of this concept: if you push a modem to very high transmission speeds, the losses will get very high… *but the net data rate can also get higher*, because you’re shooting more marginal data through the line than you lose to errors. Dialup modems capped out around 56 kbit/s, but DSL uses the exact same telephone lines to shoot data MUCH faster. The same tradeoffs govern wireless links like Bluetooth. But we digress.)
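For the curious, Shannon's capacity formula C = B·log₂(1 + S/N) is easy to play with. The numbers below are illustrative guesses at voice-band bandwidth and signal-to-noise ratio, not actual phone-line specifications:

```python
import math

def capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity: C = B * log2(1 + S/N)."""
    snr = 10 ** (snr_db / 10)  # convert decibels to a power ratio
    return bandwidth_hz * math.log2(1 + snr)

# A ~3.1 kHz voice band at a guessed ~35 dB SNR lands in the
# dialup-modem ballpark (tens of kbit/s).
print(f"{capacity_bps(3100, 35):,.0f} bit/s")
```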

The Kelly Criterion is based on an extension of information theory into gambling. If there is a certain probability of successful outcome and a certain “edge,” then *there is a specific percentage of your bankroll you should wager each time.* If your edge is negative, i.e., there is a house edge, then the percentage of bankroll you should wager is zero.

You can see on the Wikipedia page that for investment decisions, the Kelly criterion is:

f* = p/a – q/b

where

f* = percentage of bankroll to wager each iteration

p = probability of success, 0.5 in this case

q = probability of failure (1 – p), *also 0.5 in this case*

b = if you win, your investment increases from 1 to (1 + b); here b = 1, since a $1000 unit that wins $1000 doubles to 2

a = if you lose, your investment decreases from 1 to (1 – a); here a = 0.75, since a $1000 unit that loses $750 drops to 0.25

**So, applying the Kelly Criterion, the optimal bet is 16.66667% of the bankroll.** (You might note that this is the same as the edge %, and you’d be right. It can get more complicated than that, but not in this case, and this is an excellent rule of thumb).
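As a sanity check, here is the same Kelly arithmetic in code, using the parameterization defined above:

```python
# Kelly fraction for an uneven-payoff bet: f* = p/a - q/b,
# with a win taking a stake unit from 1 to (1 + b) and a loss
# taking it from 1 to (1 - a).
p, q = 0.5, 0.5   # probability of win / loss
b = 1.0           # win: a $1000 unit gains $1000, doubling to 2
a = 0.75          # loss: a $1000 unit loses $750, dropping to 0.25

f_star = p / a - q / b
print(f"Optimal fraction of bankroll: {f_star:.4%}")   # 16.6667%
print(f"On a $6000 bankroll: ${6000 * f_star:,.0f}")   # $1,000
```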

*Microeconomics note: the Kelly Criterion does not assume risk-neutral individuals; it corresponds to maximizing the expected logarithm of wealth (log utility). A risk-neutral bettor with a positive edge would want to bet everything. A factor can be applied to reduce f* for greater risk aversion (fractional Kelly), but that is not necessary here.*

**The Bankroll**

In gambling, a bankroll has a very specific meaning. It is not net worth. It is not what’s in your bank account. It’s not what you take living expenses from. It’s specifically the amount of money you have handy to apply to what you perceive to be high positive expectation situations.

In this case, a $1000 bet would be optimal for someone with a bankroll of $6,000. It’s important to note that the Kelly Criterion represents the *maximum* optimal bet: if a gambler bets more, long-run growth suffers and the risk of ruin (bankruptcy) climbs. If the gambler bets *less* than the Kelly Criterion amount, the positive expectation is still there and variance falls dramatically; the tradeoff is slower growth.

To put this in colloquial terms, you should feel reasonably safe betting the precise Kelly Criterion amount. Anything less will make money for you as well, just less quickly, though you’ll be able to sleep better at night.
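A quick simulation makes the bet-sizing tradeoff concrete. This is a sketch with made-up run lengths and a fixed random seed, assuming the framing above (each stake unit wins 100% or loses 75% of itself):

```python
import random

def simulate(frac, bankroll=6000.0, flips=1000, seed=7):
    """Grow a bankroll by wagering `frac` of it on each flip:
    a win pays +stake, a loss costs 0.75 * stake (the $1000/$750 bet)."""
    rng = random.Random(seed)
    for _ in range(flips):
        stake = frac * bankroll
        bankroll += stake if rng.random() < 0.5 else -0.75 * stake
    return bankroll

# Half-Kelly, full Kelly, and a massive over-bet. Over-betting at
# f = 1/2 actually has negative expected log-growth on this bet.
for frac in (1/12, 1/6, 1/2):
    print(f"f = {frac:.4f}: final bankroll ${simulate(frac):,.0f}")
```

Running this with different seeds shows the pattern Kelly predicts: fractions at or below f* compound upward over time, while betting far above f* tends to grind the bankroll down despite the positive edge on each flip.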

Coming full circle, what bankroll means to non-gamblers in my mid-career executive MBA program is a bit squishy, but I’m almost certain that *everyone *in my class has at least $6,000 sloshing around somewhere. If not, then certainly they have an effective bankroll of at least that much based on human capital alone.

**Full circle to Gambling at Wharton.**

Now you know almost as much as I know about gambling, so let’s revisit this question.

*Would you accept a 50-50 gamble where you would win $1000 if a coin comes up heads and lose $750 if the coin comes up tails? (Assume that the stakes are real and the coin is fair).*

**Not only yes, but HECK YES.** I will take this bet once, or as many times as it is offered to me. This is not a good bet, it’s a great, screaming, monstrous, OMG, back up the truck, let’s do this, awesome bet. Especially in the hypothetical, where we can assume the coin is fair (you might not believe that in real life).

*What percent of your classmates would accept a 50-50 gamble where they would win $1000 if a coin comes up heads and lose $750 if the coin comes up tails? (Assume that the stakes are real and the coin is fair).*

**Here’s the whole reason I wrote this article.** I am very impressed with my classmates, and seriously thought about guessing 100%, but I was **dead wrong** about this question. I didn’t go to the trouble of computing the Kelly Criterion, but I knew the required bankroll was very, very low. I decided that since I knew the point was to demonstrate bias, I would lowball my class and guess 75%. SURELY, I thought to myself, more than three of every four would take this home-run bet.

**The answer… was 40%.**

**Only 40% of my classmates would take this bet.**

I immediately looked back at the question to see if I misinterpreted it.

Then I looked back at the question to see if others might have misinterpreted it.

**No to both.**

You could knock me down with a feather. Only two out of every five of my Wharton MBA for Execs classmates would take this bet? It’s just so… perfect. It’s just so… obvious. In fact, the prediction gap was the largest of any question in the survey at 44%: those who would take the bet estimated 70% of the class would take it, and those who would not take the bet estimated that only 26% of the class would take it.

Sure, you might take the bet and lose $750. You might not get to do it again. But *you have a whole lifetime to live.* If you always take gambles when you have a high positive edge, and limit risk of ruin through a concept like the Kelly Criterion, *you will win in the long run*. In fact, the more bets like this you take in a lifetime, the more you’ll win.
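The long-run claim is easy to check numerically. Here's a rough simulation (my own illustration, not part of the survey): take the flip over and over at fixed $1000/$750 stakes and watch the average profit per flip converge toward the $125 expectation.

```python
import random

# Law of large numbers on the coin flip: repeated at fixed stakes,
# total profit per flip approaches the $125 expected value.
rng = random.Random(1)
flips = 100_000
profit = sum(1000 if rng.random() < 0.5 else -750 for _ in range(flips))
print(f"Average profit per flip: ${profit / flips:.2f}")  # close to $125
```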

I’ve thought about this a lot, and this scenario is a profound learning experience. I think my diverse background has simply armed me with a lot of helpful tools that some people don’t have, and it’s important to neither overestimate nor underestimate anyone of any background encountered in business.

The ultimate takeaway for me is that behavioral finance is **real**. In a group of smart, successful businesspeople who have already had graduate-level coursework in game theory, statistics, and Monte Carlo simulations, three out of five students made what I consider the absolute *wrong* decision in a straightforward probabilistic financial proposition. The stock market is much more complicated, and the notion that everyone is driving valuations through purely rational decisions is hard to believe.

**One thing’s for sure: I’ve found a great interview question.** Everyone’s entitled to their opinion, and there are no truly wrong answers to this question… but I can’t imagine ever working closely on a business project with someone who would not take a 16.7% edge with reasonably low stakes!