
A Touch of the Random

By Rob Mitchum // June 13, 2014

There’s a new debate heating up in the world of climate modeling — not the fictitious “debate” that plays out in the media over climate change and its causes, but a contest over the best methods to forecast how climate change will affect the planet. Until now, the dominant approach has been deterministic models, which use environmental variables and equations replicating physical laws to run numerical simulations of climate. But as these models push toward higher and higher resolution, they become extremely expensive computationally, without much improvement in forecasting accuracy. That has opened up space for a challenger, Science reports today: less complex stochastic models that incorporate random variables into their projections.
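To make the distinction concrete, here is a deliberately toy sketch (not any group’s actual model, and with made-up parameter values) of the same simple temperature-anomaly equation run once deterministically and then as a stochastic ensemble, where a random term stands in for unresolved small-scale variability:

```python
# Toy illustration only: a zero-dimensional energy-balance-style anomaly model,
# integrated deterministically and then with an added random term.
# All parameter values are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def simulate(years=100, forcing_per_year=0.04, feedback=0.1, noise_std=0.0):
    """Integrate dT = forcing - feedback*T (+ optional random noise) year by year."""
    T = np.zeros(years)
    for t in range(1, years):
        dT = forcing_per_year - feedback * T[t - 1]
        if noise_std > 0:
            dT += rng.normal(0.0, noise_std)  # stand-in for a stochastic parameterization
        T[t] = T[t - 1] + dT
    return T

deterministic = simulate()                                 # one trajectory
ensemble = [simulate(noise_std=0.05) for _ in range(50)]   # a spread of plausible outcomes
print("deterministic end anomaly:", round(deterministic[-1], 2))
print("stochastic ensemble mean :", round(np.mean([run[-1] for run in ensemble]), 2))
```

The point of the stochastic version is not a single “best guess” trajectory but a distribution of outcomes, which is what makes the approach attractive when the physics at fine scales cannot be resolved affordably.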

As evidence that a more complicated model is not always better, the story cites research by the Center for Robust Decision Making on Climate and Energy Policy (RDCEP) and London School of Economics researcher Leonard Smith. In a Journal of Climate paper, Smith and Emma Suckling compared the predictions of state-of-the-art climate models with those of a simple statistical model based on mean temperature changes over the last century. In a test of which method could best predict the future, the simpler model won. Those results fueled Smith’s skepticism that we will be able to adequately model the effects of climate change before we experience them directly.
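The flavor of such a comparison can be sketched on synthetic data (this is not Suckling and Smith’s actual methodology, data, or results): fit a simple trend to the earlier part of a temperature record, extrapolate it forward, and score it against a hypothetical complex-model hindcast on the held-out years.

```python
# Minimal sketch on synthetic data; the "complex hindcast" is a stand-in, not real model output.
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1900, 2014)
observed = 0.008 * (years - 1900) + rng.normal(0, 0.1, years.size)  # synthetic anomalies

train = years < 1980   # fit on the earlier record
test = ~train          # evaluate on the later record

# Simple statistical benchmark: extrapolate a fitted linear trend.
slope, intercept = np.polyfit(years[train], observed[train], 1)
statistical_forecast = slope * years[test] + intercept

# Stand-in for a complex simulation's hindcast: truth plus a bias and extra
# scatter, mimicking structural model error.
complex_hindcast = observed[test] + 0.15 + rng.normal(0, 0.15, test.sum())

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

print("trend-extrapolation RMSE:", round(rmse(statistical_forecast, observed[test]), 3))
print("complex-hindcast RMSE   :", round(rmse(complex_hindcast, observed[test]), 3))
```

In this contrived setup the cheap benchmark scores better simply because the stand-in hindcast was given a bias; the real papers’ finding is the empirical observation that sophisticated simulations did not clearly beat such simple statistical benchmarks.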

Advocates of stochastic approaches, however, say only a drastic change of course can jolt predictive climate modeling out of its current rut. With policymakers clamoring for robust forecasts of how temperature and precipitation will change region by region in coming decades, Smith says, time is running out: “The question is, when will we have significantly better quality information than we have today? I think we may have our answer from the climate before we get it from the physics.”

For more on Smith and RDCEP’s research, visit rdcep.org.