Nate Silver, the election forecaster at FiveThirtyEight who gave Donald Trump a 5% chance of winning the GOP nomination, now gives him a 25% chance of defeating Hillary Clinton. That is a stark contrast to his last look at the race in November, when he gave a generic Republican a 50/50 chance against the likely Democratic nominee.
I frequently receive emails and messages on social media requesting I push back on Mr. Silver’s oft-cited projections, mostly and unsurprisingly from Republicans looking for someone more sympathetic to their plight. Mr. Silver is a liberal and statistically likely to vote for the Democratic nominee, as are nearly all the other election forecasters on the Internet.
I, on the other hand, am a conservative and statistically likely to vote for the Republican nominee. For the most part, I have declined to oblige readers’ requests to pounce on Mr. Silver, at least not by name. The reason is simple.
Predicting the outcome of an election–whether national or statewide–is far from an exact science and can be even more convoluted than other disciplines that aim to predict human behavior. Election models, like the PPD Election Projection Model and the FiveThirtyEight forecast model, are built upon a set of assumptions, and those assumptions are predicated on a hypothesis or set of hypotheses.
A lot can and often does go wrong somewhere along the line. Thus, we should chastise media outlets that cite these forecasts as gospel and caution readers against consuming them as such, rather than chastise the forecasters. Perhaps that is the result of my own biases, being a forecaster myself, but I don't favor pile-ons, particularly by those whose own attempts at pontification fall flat (more on that soon).
That said, this outlet was in fact founded to push back on what we all viewed to be something more than harmless, honest reporting mishaps and forecast misfires. Lately, I have grown increasingly suspicious that the same is true at FiveThirtyEight. Unlike Nate Silver–and most others–I am open and blatant about my political persuasion. We take painstaking precautions to ensure those biases have a minimal (ideally non-existent) influence on our predictions.
Which brings us back to Nate Silver. I’m just not too sure that is also the case with him.
He’s wrong about Donald Trump, again. In fact, Mr. Silver has been wrong, a lot, as Matt Lewis at RollCall just highlighted. However, even though Mr. Lewis is right to question the limits of big data, RollCall itself is living in a glass house built on a series of failed forecasts and, thus, perhaps shouldn’t throw stones.
The PPD Election Projection Model was “hands down” the most accurate election projection model in 2014. Thus far, we have the best track record again in 2016, catching on early to the dynamic and underestimated political coalition behind Mr. Trump’s success.
We haven’t compiled our track record by simply mimicking other models or by becoming glorified poll readers, which I strongly suspect Mr. Silver and most others have become. In fact, the one and only adjustment we made to the model since 2014 was to diminish the importance of polls, a call which we made after they caused us to miss the mark on the U.S. Senate race in North Carolina. We stuck to our guns, made the tough calls and swam against the current of conventional wisdom.
With that said, the PPD Election Projection Model, which will be updated to reflect the new state-by-state ratings this week, is what we here at PPD like to refer to as a hybrid model. We recognize the importance of empirical data but do not rely upon it too heavily. As of now, though the race is technically considered a Toss-Up, we give Mr. Trump a slightly better chance (52%) than Mrs. Clinton of winning in November.
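To make the idea of a hybrid model concrete, here is a minimal sketch of what blending a polling average with a fundamentals-based estimate might look like. The weights, inputs and logistic conversion are hypothetical illustrations for readers, not the actual PPD formula.

```python
import math

def win_probability(margin_pct, scale=6.0):
    """Convert a projected margin (in points) into a win probability
    using a logistic curve. `scale` is a hypothetical uncertainty
    parameter, not a published PPD value."""
    return 1.0 / (1.0 + math.exp(-margin_pct / scale))

def hybrid_margin(poll_margin, fundamentals_margin, poll_weight=0.35):
    """Blend the polling average with a fundamentals-based estimate.
    A 'glorified poll reader' would set poll_weight near 1.0; a hybrid
    model deliberately keeps it lower."""
    return poll_weight * poll_margin + (1.0 - poll_weight) * fundamentals_margin

# Hypothetical inputs: polls show the candidate down a point, while
# fundamentals (economy, party fatigue, shifting coalitions) favor him.
polls = -1.0          # polling-average margin, in points
fundamentals = +1.2   # fundamentals-based margin, in points

blended = hybrid_margin(polls, fundamentals)
print(f"Blended margin: {blended:+.2f} pts -> "
      f"win probability {win_probability(blended):.0%}")
```

With these made-up numbers, the blend lands at roughly a 52% win probability, while a pure poll reader would be stuck reporting whatever the latest polling average happens to say.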
So, where has Mr. Silver gone wrong and where is he going wrong, again? Rather than discuss specific states and data sets, which I will do in great detail in future articles, let’s first talk theory, methodologies and models.
Sean Trende at RealClearPolitics.com recently wrote a fascinating piece–"The Value of Data Journalism"–in response to what I, too, viewed as somewhat unfair criticism leveled at Mr. Silver. It served as both a defense and a critique of data journalism, and it is certainly worth the time to read. David Byler, Mr. Trende's colleague at RCP, also added worthwhile post-primary analysis reflecting on the many failures of pundits this cycle.
To be sure, data journalism as a whole has value, but it also has some serious drawbacks and can lead psephologists into a dangerous trap. Mr. Trende used a blackjack analogy in his response to Mr. Silver's critics, but it actually helps to make my argument as well.
Unlike electoral politics, the game of blackjack changes only insofar as the players change; the rules and the potential cards in the deck remain the same. I have repeated more times than I can count at PPD that the only thing about political coalitions we can count on being consistent is that they are consistently changing.
In the 1940s, no one would’ve guessed that the South would end up reliably Republican.
When Bing Crosby and Danny Kaye starred in the classic film "White Christmas," no one would have guessed Vermont would ever be anything other than reliably Republican on the presidential level. In fact, the movie makes a reference to that fact. No one would have foreseen the Green Mountain State giving 85.69% of its 2016 Democratic primary vote to its home-state socialist senator.
U.S. political history is riddled with endless examples of political shifts. Successful election forecasters should not only know that the map on the presidential level isn’t static but also learn how to recognize when the map is shifting. Hypotheses centering on Blue Walls–and, even Red Walls once upon a time–are destined to fail, eventually.
The last time we saw a mainstream denial of this fact in a presidential election was in 2000, when George W. Bush shocked forecasters by completing the migration of voters formerly in the Ross Perot coalition to the Republican Party. The result was a political shift on the presidential map that included the state of Kentucky. On Tuesday, Mrs. Clinton barely eked out a win in the Bluegrass State, a state her husband carried twice when he ran for president in 1992 and 1996.
To be sure, polls are important when making election predictions, but they aren't the end-all, be-all. If they were, then political observers, junkies and news consumers wouldn't need forecasters like myself or websites like FiveThirtyEight.com. If Mr. Silver insists on simply being a poll reader and selling his take on those polls as analysis, then I submit his forecasts offer those political observers, junkies and news consumers little value.
We also talk to actual Americans in Main Street America and don’t exclusively rely on third-party pollsters. PPD has an in-house polling operation and we compare our results to higher-rated pollsters on the PPD Pollster Scorecard. That certainly helps to reduce the negative impact of mirroring, when pollsters simply copy each other and everyone ends up being wrong.
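The statistical cost of mirroring is easy to illustrate. The simulation below is a hypothetical sketch, not anything drawn from PPD's or any pollster's actual methodology: it compares the error of an average of independent polls with the error of an average of polls that herd toward one another and therefore share a common bias.

```python
import random
import statistics

random.seed(42)

TRUE_MARGIN = 2.0   # hypothetical "true" margin, in points
POLL_ERROR = 3.0    # standard error of a single poll, in points
N_POLLS = 10
N_TRIALS = 5000

def independent_average():
    """Each pollster makes its own error; averaging cancels much of it."""
    polls = [random.gauss(TRUE_MARGIN, POLL_ERROR) for _ in range(N_POLLS)]
    return statistics.mean(polls)

def herded_average(shared_weight=0.8):
    """Pollsters copy one another, so most of the error is shared and
    averaging cannot cancel it."""
    shared_error = random.gauss(0, POLL_ERROR)
    polls = [TRUE_MARGIN + shared_weight * shared_error
             + (1 - shared_weight) * random.gauss(0, POLL_ERROR)
             for _ in range(N_POLLS)]
    return statistics.mean(polls)

def rmse(estimator):
    """Root-mean-square error of the estimator over many simulated races."""
    sq_errors = [(estimator() - TRUE_MARGIN) ** 2 for _ in range(N_TRIALS)]
    return statistics.mean(sq_errors) ** 0.5

print(f"RMSE, independent polls: {rmse(independent_average):.2f} pts")
print(f"RMSE, herded polls:      {rmse(herded_average):.2f} pts")
```

Averaging ten independent polls cuts the error of a single poll roughly threefold; averaging ten herded polls barely helps, because the shared error never cancels and everyone ends up wrong together.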
For instance, FiveThirtyEight in 2014 missed the mark on several key U.S. Senate races, the worst call being against incumbent Republican Sen. Pat Roberts in Ruby Red Kansas. The head-to-head polls against the so-called independent candidate Greg Orman were wrong, and highly suspect. Voters we spoke to in the state were fully aware that he had been, and likely still was, a Democrat in an independent's clothing.
Further, the fundamentals were clear and Mr. Silver ended up in Wrongville–population: Upshot, The Fix and RollCall. Larry Sabato, the man with the Crystal Ball at the University of Virginia Center for Politics, was also a bit slow to the roll but eventually came around on most of the Senate and gubernatorial races.
In 2016, Mr. Silver has again demonstrated an over-reliance on polling data, shifting his Indiana Republican primary forecast from giving Ted Cruz a nearly 70% chance of victory to favoring Mr. Trump in just a 24-hour period. That's absurd, and it reminds me of my pre-Election Day "final projections" article in 2014.
As I noted then, it was remarkable that the map and ratings had changed so little since the release of PPD's 2014 Senate Map Predictions model in December 2013. Our ratings hadn't wildly gyrated back and forth, as we had seen with others. As we repeatedly stated, PPD's model is a "big picture" model that weighs more heavily the variables that actually matter, rather than making constant knee-jerk reactions to this poll or that poll. Over those months, we tried to point out how and when other election projection models were doing just that, whenever we took notice.
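For readers who want a sense of what the contrast between knee-jerk and "big picture" updating looks like in practice, here is a toy sketch with made-up numbers; the smoothing rule and its parameters are hypothetical illustrations, not the PPD model's actual update mechanism.

```python
def chase_polls(polls):
    """A 'poll reader': the forecast is simply the latest poll."""
    return list(polls)

def smoothed(polls, prior=0.0, weight=0.15):
    """A 'big picture' update: each new poll moves the estimate only a
    little, so one outlier cannot flip the forecast overnight.
    The prior and weight are hypothetical parameters."""
    estimate, path = prior, []
    for p in polls:
        estimate += weight * (p - estimate)
        path.append(round(estimate, 2))
    return path

# Hypothetical daily poll margins in a primary, in points:
daily_polls = [9, 8, -2, 10, 11, 12]

print("Poll-chasing forecast:", chase_polls(daily_polls))
print("Smoothed forecast:    ", smoothed(daily_polls, prior=8.0))
```

In the toy run, a single outlier poll flips the poll-chasing forecast from a comfortable lead to a deficit overnight, while the smoothed estimate dips only modestly and recovers, which is the behavior a "big picture" model aims for.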
Why are we taking time to mention this? It’s not to brag, but rather for the same reason we started PPD’s election projection model: To let you know these people are full of it, plain and simple.
This cycle, we’re again going to do what we do best at PPD: call BS.
So, stay tuned.