You see The New York Times’ “election needle,” and you feel your anxiety spike. You read FiveThirtyEight, and you wonder how your candidate could possibly lose after leading by some seemingly comfortable number of “points” in the polls before an election.
Welcome to the wide world of uncertainty, which many people (myself included) tend to discount when glancing at statistics, probabilities, and flat-out guesses. A prediction based on polling can be right and wrong at the same time: yes, your candidate can lead in a poll yet still lose, because a poll is a sample, and it may not accurately capture the electorate it’s meant to represent. And that little thing called the “margin of error” matters, too, even though we often pretend it doesn’t exist.
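To make that margin-of-error point concrete, here’s a minimal sketch of my own (not from the article): the standard 95% margin of error for a polled proportion, applied to a hypothetical poll where a candidate leads 52–48 among 1,000 respondents.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, candidate at 52%.
p, n = 0.52, 1000
moe = margin_of_error(p, n)
print(f"52% support, margin of error ±{moe * 100:.1f} points")

# The plausible interval is roughly [48.9%, 55.1%], which contains 50% --
# so a "52-48 lead" in this poll is statistically consistent with a tie.
```

That ±3 points is why a modest polling lead is much shakier than it looks.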
To help us better understand this, Matthew Kay, an assistant professor of computer science at Northwestern University, devised a rather clever way to show how uncertainty can affect polling data.
He’s using Plinko, the classic game from The Price Is Right, to represent how forecasting can still produce results that swing one way or the other. It all depends on how the chip (or virtual Plinko ball) falls. As Kay describes:
“The short version is, I approximate each forecaster’s predictive distribution with a scaled-and-shifted binomial distribution, which ultimately determines the height of each board. I then determine plausible paths through the board that could have led to the final predictive distribution, which is shown as a quantile dotplot. Thus, while the output looks random, the final distribution is exactly the forecaster’s published distribution, down to the resolution of the dotplot. Full details of the methodology and source code are in the GitHub repository.”
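Here’s a rough sketch of the core idea in that quote; this is my own illustration, not Kay’s actual code (which is in his repository). A ball falling through rows of pegs takes a series of left/right steps, which is a binomial draw; scaling and shifting those draws lets the board’s outcomes match a forecast’s mean and spread, and taking evenly spaced quantiles of the results gives a quantile dotplot. All the specific numbers below (vote-share mean, spread, board size) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forecast: mean 52% vote share, standard deviation ~2 points.
mean, sd = 0.52, 0.02

# A ball crossing n_rows rows of pegs, going right with probability p at
# each peg, lands according to Binomial(n_rows, p). Scale and shift that
# distribution so its mean and sd match the forecast.
n_rows, p = 20, 0.5
scale = sd / np.sqrt(n_rows * p * (1 - p))  # match the spread
shift = mean - scale * n_rows * p           # match the mean

# Drop many balls: each one is a binomial draw.
balls = rng.binomial(n_rows, p, size=10_000)
outcomes = shift + scale * balls

# Quantile dotplot: 50 dots placed at 50 evenly spaced quantiles.
quantiles = np.quantile(outcomes, (np.arange(50) + 0.5) / 50)

# The fraction of dots above 50% vote share approximates P(win).
win_prob = (quantiles > 0.5).mean()
print(f"approx. P(win) = {win_prob:.2f}")
```

The appeal of the dotplot is that each dot is a countable, equally likely outcome, so “how many dots land on the losing side” is something readers can literally count.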
That’s all well and good, but I find myself dropping ball after ball after ball in Kay’s Plinko game—which you can do for yourself, too, at his site. You get the option to drop a single ball or just let ‘em all fall.
I think the former is the route you’ll want to go: treat your single virtual ball as the election itself, and however it lands is the outcome we all get. Spoiler: Your candidate might not win, but that’s the entire point of this exercise. Probability is a science, not a time machine. Remember 2016?
As for the aforementioned needle from The New York Times, it’s a useful tool for gauging the state of the election when people understand what it’s doing. Kay argues, however, that the needle made it too easy for people to misinterpret the Times’ noble approach.
“I think the Needle got one thing right and another thing wrong. What it got right was that this kind of animation can help people experience uncertainty. This makes the visualization more powerful, and the uncertainty harder to ignore. The visualization made people anxious, because they were uncertain about something they cared about. But if you’re uncertain about something you care about, you should be anxious,” he writes.
“However, I think the Needle also fell victim to a deterministic construal error: many people more readily associate the mechanism of a needle with some deterministic measurement, not an uncertain quantity. Those people understandably thought the rapid movement of the needle reflected that the forecast itself was changing just as rapidly.”
Now, when can we get our election predictions modeled by Hole in One (or Two)? That’s what I want to know.