If you’ve ever found yourself anxiously wondering where a hurricane might make landfall, then you’re probably familiar with “spaghetti charts” — the intertwined web of possible storm tracks put out by many forecasters.
Those lines represent hundreds of millions of observations from satellites, aircraft, balloons and buoys, all crunched through complex forecasting equations on some of the world's most powerful computers.
Brian McNoldy, a senior research assistant at the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, says that if you’re trying to untangle the spaghetti charts, it’s important to understand that not all models are created equal.
Some of them, in fact, are routinely ignored by meteorologists.
McNoldy says simple statistical models developed as far back as the 1980s, such as the Beta and Advection Model, are of little value as anything more than a benchmark to compare against newer, more sophisticated models. The same goes for the CLP5, sometimes known as the "Clipper" model.
“I personally don’t weight them at all. If I were to make a map, I wouldn’t include them,” he says.
Instead, McNoldy and other forecasters focus on dynamic models, the ones produced from lots of observational data, equations and massive supercomputers that perform trillions of floating-point operations per second.
For the past few years, the top of the heap for reliability in forecasting storm tracks — what is called "skill" in meteorological parlance — has been the model from the European Centre for Medium-Range Weather Forecasts (ECMWF).
A close second is the Global Forecast System (GFS) model run by NOAA's National Centers for Environmental Prediction (NCEP). NCEP also runs the Hurricane Weather Research and Forecasting model, which is also among the highest-skill models.
"Sometimes there's what we call the 'Model of the Year,' " says Christopher Velden, a senior researcher at the University of Wisconsin-Madison's Space Science and Engineering Center.
“Some years, one model does better. It could be just luck, it could be that it handles one type of storm better or it could be that upgrades and advances to the model that year led to some improvements,” he says.
While ECMWF and GFS have been close contenders for model of the year recently, the U.S. Navy's Operational Global Atmospheric Prediction System (NOGAPS) may have to settle for a consolation prize. NOGAPS has "historically not shown a lot of skill with tracks. It is often ignored. Obviously, that's not good for them," says McNoldy.
Tweaking The Data
The latest trends in hurricane forecasting involve the use of “ensembles.” Meteorologists tweak the data they input for the initial conditions to see how much they will change the track forecast. If changing those inputs creates little change in the track for any given storm system, it bolsters forecasters’ confidence in that model’s reliability. If the ensemble tracks diverge widely, their confidence goes down.
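The ensemble idea can be sketched in a few lines of code. This is a toy illustration only, not any real forecast model: the linear "track model," the perturbation sizes and the starting position are all made up for demonstration. Each ensemble member starts from a slightly different initial position, and the spread of the final positions stands in for forecast confidence.

```python
import random
import statistics

def toy_track(lat, lon, steps=10, drift=0.5):
    """Toy stand-in for a track model: advect a storm northwest,
    one (lat, lon) position per time step."""
    track = [(lat, lon)]
    for _ in range(steps):
        lat += drift           # move north
        lon -= drift * 0.6     # move west
        track.append((lat, lon))
    return track

def ensemble_spread(lat, lon, members=20, perturb=0.3, seed=1):
    """Perturb the initial conditions for each ensemble member, run the
    toy model, and measure how far apart the final positions end up
    (standard deviation of the final longitudes)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(members):
        p_lat = lat + rng.gauss(0, perturb)  # tweaked initial latitude
        p_lon = lon + rng.gauss(0, perturb)  # tweaked initial longitude
        finals.append(toy_track(p_lat, p_lon)[-1])
    return statistics.stdev(lon for _, lon in finals)

# Hypothetical starting point near South Florida.
spread = ensemble_spread(25.0, -80.0)
# A small spread means the perturbed runs barely diverge, which is what
# bolsters a forecaster's confidence; a wide spread lowers it.
```

In a real ensemble the members come from perturbing far more than a starting position, but the confidence logic is the same: tight clustering in, confidence up; wide divergence in, confidence down.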
“At the end of the day, the forecaster may have five or six of these models and he’s tasked with weighting them into what he thinks is the most likely model handling the situation best,” says Velden.
Some of the models are run every six hours and others are run every 12 hours. That puts a huge demand on computing power. In fact, NCEP is in the process of swapping its two supercomputers for new machines that are nearly three times as fast.
Smaller-scale dynamic models appear to be better at forecasting intensity, something larger-scale models — which are good at predicting tracks — haven’t been as skilled at doing, says McNoldy.
These regional models, which may focus on an area as small as an individual squall line, are increasingly being nested within the global models to improve their accuracy, something Velden describes as sort of a picture within a picture.
Steve Bennett is the chief science and products officer for EarthRisk, which is aiming to decrease the uncertainty in future hurricane track forecasts. EarthRisk is starting with a meta-analysis of forecast errors and hopes that by looking at where storms originated, how strong they were and their direction of travel, forecasters might understand what led to errors.
“Instead of treating all of those forecasts the same, putting them in a bucket, we’re saying, ‘Let’s take an apples-to-apples approach,’ ” he says.
“If we have a Category 5 hurricane, let’s look at the error around Category 5 hurricanes through history. Let’s treat them the same,” Bennett says. “If we have a storm that’s moving north through the Gulf of Mexico, let’s see if we can create an error band around all storms that have been moving north in the Gulf of Mexico.”
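Bennett's "apples-to-apples" grouping can be sketched simply: instead of pooling every past forecast error into one bucket, partition the historical errors by a storm attribute and summarize each group on its own. The records below are invented for illustration; a real analysis would draw on archived forecasts compared against observed best tracks.

```python
from collections import defaultdict
import statistics

# Hypothetical historical records: (storm_category, track_error_km).
history = [
    (5, 120), (5, 95), (5, 140),   # past Category 5 storms
    (1, 310), (1, 280), (1, 350),  # past Category 1 storms
]

def error_bands(records):
    """Group past track errors by storm category and compute a separate
    (mean, standard deviation) error band for each group -- the
    'apples to apples' idea, rather than one bucket for all storms."""
    groups = defaultdict(list)
    for category, error_km in records:
        groups[category].append(error_km)
    return {cat: (statistics.mean(errs), statistics.stdev(errs))
            for cat, errs in groups.items()}

bands = error_bands(history)
# bands[5] describes the error band around Category 5 storms only,
# bands[1] the band around Category 1 storms only.
```

The same grouping key could just as easily be a basin and heading, such as "moving north through the Gulf of Mexico," giving each class of storm its own historical error band.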