
April 16, 2011

Revisiting the Monty Hall problem

[Photo: Monty Hall]

“Charles Sanders Peirce once observed that in no other branch of mathematics is it so easy for experts to blunder as in probability theory.”

Thus began an article in the October 1959 Scientific American by the celebrated math columnist Martin Gardner. In fact, as John Allen Paulos observed in last January’s issue (“Animal Instincts” [Advances]), humans can sometimes be even worse than pigeons at evaluating probabilities.

Paulos, a mathematician at Temple University in Philadelphia, was describing a notoriously tricky problem known as the Monty Hall paradox. A problem so tricky, in fact, that scores of professional mathematicians and statisticians have stumbled on it over the years. Many have repeatedly failed to grasp it even after they were shown the correct solution.

According to an article by New York Times reporter John Tierney that appeared on July 21, 1991, after the writer Marilyn vos Savant described the Monty Hall problem—and its uncanny solution—in a magazine the year before, she received some 10,000 letters, most of them claiming they could prove her wrong. “The most vehement criticism,” Tierney wrote, “has come from mathematicians and scientists, who have alternated between gloating at her (‘You are the goat!’) and lamenting the nation’s innumeracy.”

Sure enough, after Paulos mentioned the Monty Hall problem in Scientific American, many readers (though nothing on the order of 10,000) wrote to complain that he had gotten everything wrong, or simply to confess their befuddlement.

“Paulos shows a strange lack of understanding of basic conditional probability,” wrote one reader, “and as a result his article is nonsense.” The reader added that Paulos’s blunder shook his trust in the magazine. “What are your procedures for evaluating submitted papers?” he wrote. This reader was a retired statistics professor.

So we at Scientific American thought it might be worthwhile to try to clarify things a bit. What is this Monty Hall business, and what’s so complicated about it?

The Monty Hall problem was introduced in 1975 by the American statistician Steve Selvin as a case study in probability theory inspired by Monty Hall’s quiz show “Let’s Make a Deal.” (Scholars have observed that the Monty Hall problem is mathematically identical to a problem proposed by French mathematician Joseph Bertrand in 1889—as well as to one, called the three-prisoner game, introduced by Gardner in his 1959 piece; more on that later.) Let’s hear the game’s description from Paulos:

A guest on the show has to choose among three doors, behind one of which is a prize. The guest states his choice, and the host opens one of the two remaining closed doors, always being careful that it is one behind which there is no prize. Should the guest switch to the remaining closed door? Most people choose to stay with their original choice, which is wrong—switching would increase their chance of winning from 1/3 to 2/3. (There is a 1/3 chance that the guest’s original pick was correct, and that does not change.) Even after playing the game many times, which would afford ample opportunity to observe that switching doubles the chances of winning, most people in a recent study switched only 2/3 of the time. Pigeons did better. After a few tries, the birds learn to switch every time.

But wait a minute, you say: after Monty opens the door, there are only two options left. The odds then must be 50-50, or 1/2, for each, so that changing your choice of door makes no difference.

To understand what’s going on, we must first make some assumptions, because as it stands, the problem’s formulation is ambiguous. So, we shall assume that Monty knows which door hides the car (the prize; the other two doors hide goats), and that after the player picks one door he always opens one of the remaining two. Moreover, if the player’s first choice was a door hiding a goat, then Monty always opens the door that hides the other goat; but if the player picked the car, Monty picks randomly between the other two doors, both of which hide a goat.

So imagine you are the player. You take your pick: we’ll call it door 1. One-third of the time, this will be the door with the car, and the remaining 2/3 of the time (66.666… percent) it will be one with a goat. You don’t know what you’ve picked, so you should formulate a strategy that will maximize your overall odds of winning.

Let’s say you picked a goat-hiding door. Monty now opens the other goat-hiding door—call that door 2—and asks you if you want to stick to door 1 or switch to door 3 (which is the one hiding the car). Obviously, in this case by switching you’ll win. But remember, this situation happens 2/3 of the time.

The remaining 1/3 of the time, if you switch, you lose, regardless of which door Monty opens next. But if you adopt the strategy of switching always, no matter what, you’re guaranteed to win 2/3 of the time.
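If you’d rather not take the argument on faith, the always-switch strategy is easy to check empirically. Here is a quick Monte Carlo sketch (not from the original article) that plays the game many times under the assumptions above:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # player's initial choice
        # Monty opens a door that is neither the player's pick nor the car
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ≈ 0.333
print(f"switch: {play(switch=True):.3f}")    # ≈ 0.667
```

After 100,000 trials each way, the stay strategy wins about a third of the time and the switch strategy about two-thirds—just as the pigeons figured out.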

Seems easy enough, doesn’t it? If, however, you happen to know a little bit of probability theory and you pull out your paper and pencil and start calculating, you might start to doubt this conclusion, as one statistically savvy reader did.

(Warning: this post gets a bit more mathy from here on.)

The reader analyzed the problem using conditional probability, which enables you to answer questions of the type “what are the odds of event A happening given that event B has happened?” The conventional notation for the probability of an event A is P(A), and the notation for “probability of A given B” is P(A | B). The formula to calculate the latter is:

P(A | B) = P(both A and B) / P(B)
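To see the formula in action on neutral ground, here is a small worked example (my own, not from the article) using two fair dice: let A be “the sum is 8” and B be “the first die shows at least 4.” Enumerating all 36 equally likely rolls:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))       # all 36 outcomes
A = {r for r in rolls if sum(r) == 8}              # sum is 8
B = {r for r in rolls if r[0] >= 4}                # first die >= 4

def P(event):
    return Fraction(len(event), len(rolls))

print(P(A & B) / P(B))   # P(A | B) = (3/36) / (18/36) = 1/6
```

Three of the 18 outcomes in B also land in A, so conditioning on B raises the probability of a sum of 8 from 5/36 to 1/6.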

The reader wrote:

Let A be the event that the prize is behind door 1 (the initially chosen door), and let B be the event that the prize is not behind door 2 (the door that has been opened). Here, A implies B, so P(both A and B) = P(A) = 1/3, while P(B) = 2/3. Thus P(A | B) = (1/3) / (2/3) = 1/2. Contrary to the claim of Prof. Paulos, nothing is gained by switching from door 1 to door 3. Prof. Paulos is mistaken when he says that P(A | B) = P(A) = 1/3.

What is wrong with this reasoning? It seems utterly plausible, and in fact it gave me a headache for about an hour. But it is flawed.

The probability of Monty opening one door or the other changes depending on your initial choice as a player. If you picked a door hiding a goat, Monty has no choice: he is forced to open the door hiding the other goat. If, however, you picked the door hiding the car, Monty has to toss a coin (or some such) before he decides which door to open. But in either case, Monty will open a door that does not hide the prize. Thus, the “event that the prize is not behind door 2 (the door that has been opened)” happens with certainty, meaning P(B) = 1.

Thus, when we apply the formula, we get P(A | B) = (1/3) / (1) = 1/3, not 1/2. The probability that the car is behind door 3 is now 2/3, which means you had better switch.
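The conditioning can also be done on the event Monty’s action actually reveals—“Monty opened door 2 specifically”—and the answer comes out the same. Here is a sketch (my own illustration, with exact fractions) that enumerates the joint probabilities under the assumptions stated earlier, with the player having picked door 1:

```python
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)

# Joint probabilities P(car behind c, Monty opens o), player picked door 1.
joint = {}
for car in (1, 2, 3):
    if car == 1:
        # Player picked the car: Monty flips a coin between doors 2 and 3
        joint[(1, 2)] = third * half
        joint[(1, 3)] = third * half
    else:
        # Player picked a goat: Monty is forced to open the other goat door
        forced = ({2, 3} - {car}).pop()
        joint[(car, forced)] = third

# Condition on the observation "Monty opened door 2"
p_open2 = sum(p for (car, o), p in joint.items() if o == 2)
p_car1 = joint[(1, 2)] / p_open2   # stay wins
p_car3 = joint[(3, 2)] / p_open2   # switch wins
print(p_car1, p_car3)              # 1/3 2/3
```

Even with the sharper conditioning, the probability that door 1 hides the car stays at 1/3, and door 3 carries the remaining 2/3.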

The Monty Hall paradox is mathematically equivalent to a “wonderfully confusing little problem involving three prisoners and a warden,” the one that Gardner introduced in 1959. Here is Gardner:

Three men—A, B and C—were in separate cells under sentence of death when the governor decided to pardon one of them. He wrote their names on three slips of paper, shook the slips in a hat, drew out one of them and telephoned the warden, requesting that the name of the lucky man be kept secret for several days. Rumor of this reached prisoner A. When the warden made his morning rounds, A tried to persuade the warden to tell him who had been pardoned. The warden refused.

“Then tell me,” said A, “the name of one of the others who will be executed. If B is to be pardoned, give me C’s name. If C is to be pardoned, give me B’s name. And if I’m to be pardoned, flip a coin to decide whether to name B or C.”

“But if you see me flip the coin,” replied the wary warden, “you’ll know that you’re the one pardoned. And if you see that I don’t flip a coin, you’ll know it’s either you or the person I don’t name.”

“Then don’t tell me now,” said A. “Tell me tomorrow morning.” The warden, who knew nothing about probability theory, thought it over that night and decided that if he followed the procedure suggested by A, it would give A no help whatever in estimating his survival chances. So next morning he told A that B was going to be executed.

After the warden left, A smiled to himself at the warden’s stupidity. There were now only two equally probable elements in what mathematicians like to call the “sample space” of the problem. Either C would be pardoned or himself, so by all the laws of conditional probability, his chances of survival had gone up from 1/3 to 1/2.

The warden did not know that A could communicate with C, in an adjacent cell, by tapping in code on a water pipe. This A proceeded to do, explaining to C exactly what he had said to the warden and what the warden had said to him. C was equally overjoyed with the news because he figured, by the same reasoning used by A, that his own survival chances had also risen to 1/2.

Did the two men reason correctly? If not, how should each calculate his chances of being pardoned?

Gardner saved the answer for his next column.
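We won’t spoil Gardner’s reveal in detail, but since the article has already argued the problems are equivalent, the curious can check the warden’s protocol by simulation. In this sketch (mine, not Gardner’s), we repeat the lottery many times and keep only the runs where the warden names B:

```python
import random

def prisoner_odds(trials=100_000):
    """Estimate P(A pardoned) and P(C pardoned), given the warden names B."""
    named_b = a_lives = c_lives = 0
    for _ in range(trials):
        pardoned = random.choice("ABC")
        if pardoned == "A":
            named = random.choice("BC")   # warden flips a coin
        elif pardoned == "B":
            named = "C"
        else:
            named = "B"
        if named == "B":
            named_b += 1
            a_lives += (pardoned == "A")
            c_lives += (pardoned == "C")
    return a_lives / named_b, c_lives / named_b

a, c = prisoner_odds()
print(f"A: {a:.3f}  C: {c:.3f}")   # ≈ 0.333 and 0.667
```

A plays the role of the player’s original door and C the role of the unopened third door: A’s chances stay at 1/3, while C’s rise to 2/3, so only one of the two tapping prisoners had reason to smile.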

[This post first appeared April 15, 2011 on the Observations blog at ScientificAmerican.com]

Copyright 2003-2012 Davide Castelvecchi all rights reserved.
The opinions expressed on sciencewriter.org are not necessarily those of
Scientific American or of Nature Publishing Group.