When will the glaciers all melt?

Glacier National Park in Montana [has] fewer than 30 glaciers remaining, [it] will be entirely free of perennial ice by 2030, prompting speculation that the park will have to change its name – The Ravaging Tide, Mike Tidwell

Americans should plan on the 2004 hurricane season, with its four super-hurricanes (category 4 or stronger) becoming the norm […] we should not be surprised if as many as a quarter of the hurricane seasons have five super-hurricanes – Hell and High Water, Joseph Romm

Two points of order:

  • In 2006, when Mike Tidwell wrote about glaciers, Glacier National Park had 27 glaciers. It now has 26, and isn’t expected to suddenly lose them all in five years.
  • Since 2007, when Joseph Romm wrote about hurricanes, just four hurricane seasons have had four so-called “super-hurricanes,” and just one season has had five. The 2004 season has not become the norm, and fewer than 6% of seasons have had five super-hurricanes.
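
The arithmetic behind that 6% figure is a one-liner; a quick sketch, assuming the count runs over the seasons 2007 through 2024 (the end year is my assumption, not stated in the text):

```python
# Sanity check on the hurricane arithmetic above.
# Assumption: the tally covers the 2007-2024 seasons inclusive.
seasons = 2024 - 2007 + 1      # 18 seasons since Romm's 2007 prediction
seasons_with_five = 1          # per the text, only one season had five Cat-4+ storms
share = seasons_with_five / seasons
print(f"{share:.1%} of seasons had five super-hurricanes")  # ~5.6%, under 6%
```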

I do not write this to dunk on climate science; I write only to dunk on the popular press. The science of global warming is fact; it is not a myth or fake news. But the popular press has routinely misused and abused the science, taking extreme predictions as certainties and glossing over the confidence intervals.

What do I mean by that? Think of a roulette wheel, where a ball spins on a wheel and you place a bet as to where it will land. If you place a bet, what is the maximum amount of money you can win (the “maximum return”)? In a standard game, the maximum is 36 times what you bet, should you pick the exact number the ball lands on. But remember that in casinos, the House Always Wins. Your *expected* return is just 36/38 of your bet, a bit under 95 cents on the dollar. You’re more likely to lose than to win, and if you play the game over and over, the many, many losses wipe out your unlikely gains.

So how should we describe the statistical possibilities of betting on a roulette wheel? We should give the expected return (the mean of how much money you might win), the *most likely* return (the mode), and the minimum and maximum returns, along with the likelihood of each. So if you bet $1 on an American roulette wheel:

  • Your expected return is about $0.95.
  • Your most likely return is $0 (more than half the time you win nothing, even betting on red or black; betting on numbers, you win nothing even more often).
  • Your minimum return is $0 (at least you can’t owe more than you bet). This happens just over half the time if you bet red/black, and more often if you bet on numbers.
  • Your maximum return is $36. This happens 1 time in 38, or about 2.6% of the time.
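
The four figures above can be computed directly. A minimal sketch, assuming the standard American wheel (38 pockets: 1–36 plus 0 and 00):

```python
# Expected, most likely, minimum, and maximum returns for a $1 American-roulette bet.
POCKETS = 38  # numbers 1-36 plus 0 and 00

def bet_single_number(stake=1.0):
    """Return (expected, mode, minimum, maximum) returns for a straight-up bet."""
    p_win = 1 / POCKETS
    payout = 36 * stake          # 35:1 winnings plus the original stake
    expected = p_win * payout    # 36/38 ~ 0.947
    mode = 0.0                   # you lose 37 times out of 38
    return expected, mode, 0.0, payout

def bet_red(stake=1.0):
    """Return the same statistics for a red/black bet (18 winning pockets)."""
    p_win = 18 / POCKETS
    payout = 2 * stake           # even money: winnings plus stake
    expected = p_win * payout    # again 36/38 ~ 0.947
    mode = 0.0                   # losing (20/38 of the time) is still the most likely outcome
    return expected, mode, 0.0, payout

exp_n, mode_n, min_n, max_n = bet_single_number()
print(f"single number: expected ${exp_n:.3f}, mode ${mode_n:.0f}, "
      f"max ${max_n:.0f} with probability {1/POCKETS:.1%}")
```

Note that both bets have the same expected return; only the shape of the distribution differs, which is exactly why reporting the mean alone is misleading.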

But would I be lying to you if I said “hey, you *could* win $36”?

By some standards no, this isn’t lying. But most people would call that hiding of information a lie of omission. If someone tried to entice someone else to play roulette only by telling them that they could win $36 for every $1 they put down, I would definitely consider that lying.

So too does the popular press lie. Climate science is a science of statistics and of predictions. Like Nate Silver’s election forecasting, climate models don’t just give you a single forecast; they tell you what range of possibilities you should expect and how often you should expect them. For instance, Nate Silver made a point in 2024 that while his forecast showed Harris and Trump with about even odds to win, you shouldn’t have expected them to split the swing states evenly and have the election come down to the wire. The most common result (the mode) was for one candidate to win *all* the swing states together, which is indeed what happened.

Bad statisticians and prediction modelers will misstate the range of possible outcomes. They will heavily overstate their certainties, understate the variance, and pretend that some singular outcome is so likely as to be guaranteed.

This kind of bad statistics was central to Sam Wang of the Princeton Election Consortium’s 2016 prediction, which gave Hillary Clinton a greater than 99% chance of victory. Sam *massively* overstated the election’s certainty, and frequently attacked anyone who dared to caution that Clinton wasn’t guaranteed to win.

Nate Silver, meanwhile, was widely criticized for giving Hillary such a *low* chance of victory, at around 70%. He was “buying into GOP propaganda,” Sam said. Then after the election Silver was attacked by others for giving Clinton such a *high* chance, since by that point we knew she had lost. But 30% chance events happen 30% of the time. Nate has routinely been more right than anyone else in forecasting elections.
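
That 30% point is easy to demonstrate with a quick simulation (a toy sketch, not tied to any actual forecast):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulate(p=0.30, trials=100_000):
    """Simulate a p-probability event many times; return the fraction of occurrences."""
    hits = sum(random.random() < p for _ in range(trials))
    return hits / trials

rate = simulate()
print(f"A 30% event occurred in {rate:.1%} of trials")  # close to 30%
```

A forecast of 30% isn’t “wrong” when the event happens; it would only be wrong if, across many such forecasts, 30% events happened much more or less than 30% of the time.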

I don’t doubt that some people read and believed Sam Wang’s predictions, and even believed (wrongly) that he was the best in the business. When he was proven utterly, completely wrong, how many of his readers decided forecasting would never be accurate again? How much damage did Sam Wang do to the popular credibility of election modeling?

However much damage Sam did, the popular press has done even more to damage the statistical credibility of science, and here we return to climate change. Climate change is happening and will continue to accelerate for the foreseeable future until drastic measures are taken. But how much the earth will warm, and what effects that will have, must be modeled in detail, and the models carry large statistical uncertainties, much like Silver’s forecast of the 2016 election.

Yet I have been angry for the last 20 years as the popular press continues to pretend long-shot possibilities are dead certainties, and to understate the range of possibilities. Most of the popular press follows the Sam Wang school.

At the roulette table, you might win $36, but that’s a long-shot possibility. And in 2006 and 2007, we might have predicted that all the glaciers would melt and super-hurricanes would become common. But those were always long-shot possibilities, and indeed these possibilities *have not happened*.

The climate has been changing, the earth has been warming, but you don’t have to go back far to see people making predictions so horrendously inaccurate that they destroy the trust of the entire field. If I told you that you were dead certain to win $36 when putting $1 on the roulette wheel, you might never trust me again after you learned how wrong I was. Is it any wonder so many people don’t trust the science these days, when this is how it’s presented? When we were told 20 years ago that all the glaciers in Glacier National Park would have melted by now? Or that every hurricane season would be as bad as 2004?

And it isn’t hard either to find numerous even more dire predictions couched in weasel words like “may” and “possibly.” The oceans “may” rise by a foot, such and such city “may” be under water. It’s insidious, because while it isn’t *technically* wrong (“I only said may!”) it makes a long-shot possibility seem far more likely than it really is. Again, it’s a clear lie of omission, and it’s absolutely everywhere in the popular press.

We have to be accurate when modeling our uncertainty. We have to discuss the *full range of possibilities*, not just the possibility we *want* to use for fear-mongering. And we have to accurately state the likelihoods for our possibilities, not just declare the long shot to be a certainty.

Because the earth *has* warmed. A glacier has disappeared from Glacier National Park and the rest are shrinking. Hurricane season power is greater than it was last century. But writers weren’t content to write those predictions, and instead filled books with nonsense overstatements that were not borne out by the data and are easily disproven with a 2025 Google search. When it’s so easy to prove you wrong, people stop listening. And they definitely won’t listen to you when you “update” your predictions to match the far less eye-catching trend that you should have written all along. Lying loses you trust, even if you tell the truth later.

I think Nate Silver should be taken as the gold standard for modelers, statisticians, and more importantly *the popular press*. You *need* to model the uncertainties, and more importantly you need to *tell people* about those uncertainties. You need to tell them about the long shots, but also about *how long-shot they are*. You need to tell them about the most likely possibility too, even if it isn’t as flashy. And you need to tell them about the range of possibilities along the bell curve, and accurately represent how likely they all are.

Nate Silver did just this. In 2016 he accurately reported that Trump was still well within normal bounds of winning; an average-size polling error in his favor was all it would take. He also pointed out that Clinton was a polling error away from an utter landslide (which played much better among the twitterati), and that she was the favorite (but not enough of a favorite to appease the most innumerate writers).

In *every* election Silver has covered, he has been the primary modeler accurately measuring the range of possibilities, and preparing his readers for every eventuality. That gets him dogpiled when he says things that people don’t like, but it means he’s accurate, and accuracy is supposed to be more important than popularity in science.

So my demand to the popular press is to be more like Nate Silver and less like Sam Wang. Don’t overstate your predictions, don’t downplay uncertainties, don’t make extreme predictions to appeal to your readers. Nate Silver has lost a lot of credibility for his temerity to continue forecasting accurately even in elections that Democrats don’t win, but Sam Wang destroyed his credibility in 2016 and has been an utter joke ever since. If science is to remain a force for informing policy, it needs to be credible. And that means making accurate predictions even if they aren’t scary enough to grab headlines, or even if they aren’t what the twitterati would prefer.

Lying only works until people find you out.