The notorious inaccuracy of weather forecasts has been the stuff of countless jokes, comic strips, and sitcom disasters, such as the recent New Yorker cartoon that shows a weather reporter in a rain jacket, holding an umbrella. “At this point, it’s still not classified as a hurricane,” he says into the camera. “It’s still being called a raindrop.”
Brian F. Farrell doesn’t want to rain on anyone’s parade, so to speak, but all that will change soon, if he gets his way.
“The research we’re doing now,” says Farrell, the Robert P. Burden Professor of Meteorology, “has to do with trying to improve both the deterministic forecast, which is the weather for tomorrow and the next day and the day after, and also the statistical forecast, which is a longer-range look at how the climate changes – for instance, if we’re going into a period of more snowstorms on average next year.” His project is part of a five-year initiative funded by the National Science Foundation and the Office of Naval Research to spark progress in the general area of predictability of the atmosphere and oceans.
“Right now we have a fairly good forecast out to 48 hours,” he says, “maybe two or three days. After that, you never know.” The reason for this, he adds, is that starting out with a certain amount of error is inevitable. Both the observational data and the mathematical model used to parse it are imperfect; the necessity of using them together only compounds the problem.
“In order to use the observations accurately,” Farrell says, “you have to know very accurately what are the errors the model is making. Because if you put an observation in, it tells you that you should make a correction in the model. But it has a little error of its own.”
The trick is knowing which is more right in any given situation – the model or the observation. “Supposing you’re out over the Atlantic,” he says, “and you get an observation that says the wind should be 10 meters a second, but your model says it should be 15. Well, way out there over the Atlantic, most likely, the model is wrong, because it hasn’t had any data for a long time. So you take the observation. But if you come to a place where the model is giving very accurate winds and a weather balloon tells you the wind is two meters per second off from the model, you don’t want to just use the balloon’s observation. The balloon is way less accurate than the model in that region.”
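The intuition behind Farrell's Atlantic example can be sketched as an inverse-variance blend: weight the model and the observation each by how trustworthy it is, and the combined estimate leans toward whichever has the smaller error. The code below is an illustrative sketch with made-up numbers, not Farrell's actual method.

```python
# Illustrative sketch (hypothetical values, not Farrell's scheme):
# blend a model forecast with an observation by weighting each with
# the inverse of its error variance.

def blend(model_value, model_var, obs_value, obs_var):
    """Return the inverse-variance-weighted estimate and its variance."""
    w_model = 1.0 / model_var
    w_obs = 1.0 / obs_var
    estimate = (w_model * model_value + w_obs * obs_value) / (w_model + w_obs)
    return estimate, 1.0 / (w_model + w_obs)

# Far over the Atlantic: the model has had no data for a long time, so
# its error variance is large -- the estimate leans on the observation.
far_sea = blend(model_value=15.0, model_var=25.0, obs_value=10.0, obs_var=1.0)

# Over a well-observed region: the model is accurate and the balloon is
# noisy -- the estimate stays close to the model.
well_obs = blend(model_value=12.0, model_var=0.25, obs_value=14.0, obs_var=4.0)

print(far_sea[0])   # ~10.2, trusting the observation
print(well_obs[0])  # ~12.1, trusting the model
```

Note that the blended estimate always has a smaller variance than either input alone, which is why using the pair well beats using either one by itself.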
Helping forecasters to know whether to trust the model or the observation is the task that Farrell set for himself.
Though traditionalists bemoan the cost-cutting that has led to fewer observation stations, Farrell maintains that “there are fundamental scientific reasons for believing that improving the way you use observations is potentially far more important than doing more observations – even than doing more accurate observations. Because often, you put in a whole bunch of observations, and it makes the forecast worse, rather than better. This is sobering.”
Part of the reason for this, Farrell says, is that “you need to figure out where’s the model right, where’s the model wrong. And you can’t use observations correctly to answer that question.” The number of error structures in the model, he points out, is “astronomically large” at a hundred trillion, or 10¹⁴. “And everybody knows there’s no computer big enough in the world to integrate forward a system with 10¹⁴ degrees of freedom.”
A key element that had been overlooked, however, was that “there are only a thousand or 2,000 error structures that grow over time, rather than decay. And as far as error-growth is concerned, we’re only interested in those.” The degree of error in the initial state of today’s forecasts is about a meter per second in the wind and a degree centigrade in the temperature. It may not sound like much, but the problem is it builds up over time, doubling about every two days – which is why forecasts are currently accurate to only 48 hours.
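The compounding described above is easy to make concrete: an initial wind error of about a meter per second, doubling every two days, rivals typical wind speeds themselves within a week or so. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope sketch of the error growth described above:
# an initial wind error of ~1 m/s that doubles every two days.

initial_error = 1.0   # m/s
doubling_time = 2.0   # days

for day in (2, 4, 6, 8, 10):
    error = initial_error * 2 ** (day / doubling_time)
    print(f"day {day}: ~{error:.0f} m/s of wind error")
# final line: day 10: ~32 m/s -- comparable to the wind itself
```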
“But it turns out that if you look only at the errors that grow over time,” Farrell says, “the true dimension of the system is not 10¹⁴, but actually more like 10³ or 10⁴. It’s much, much smaller. Doable. The question is how.”
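The reduction can be illustrated with a toy spectrum. Suppose each error structure is amplified or damped by some factor per forecast step; only those with a factor above one matter for error growth, and after a few steps they carry essentially all of the amplified error. The numbers below are invented for illustration and stand in for no real atmospheric model.

```python
# Toy illustration (hypothetical numbers): a system with many error
# structures, of which only a handful grow per forecast step. After
# repeated steps the growing modes dominate the error budget, so only
# they need to be tracked.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
growth_factors = rng.uniform(0.1, 0.9, size=n)        # decaying modes
growth_factors[:30] = rng.uniform(1.1, 1.4, size=30)  # a few growing modes

growing = growth_factors > 1.0
print(f"{growing.sum()} of {n} modes grow per step")  # -> 30

# After 20 steps, the growing modes carry nearly all the amplified error.
after_20 = growth_factors ** 20
share = after_20[growing].sum() / after_20.sum()
print(f"growing modes carry {share:.0%} of the amplified error")
```

This is why restricting attention to the thousand or so growing structures makes the problem computationally tractable.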
Luckily, Farrell’s previous work used control theory to reduce turbulence in boundary layers – such as the area where the wind blows over the Earth’s topography, where a dolphin’s skin touches the water, or where an airplane’s surface meets the air. The mathematical models he developed during that research could be converted to figure out something called “optimal state estimation,” or the ideal initial condition from which to carry forward the weather forecast. “You want to start from the best state you can get,” he says. “You get the best state by using the observations and the model together – as a pair, as a couple, as a total – most effectively. You use the model most effectively by knowing where its errors are. You know where its errors are by taking the error system and reducing it using this engineering method of control theory.”
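A standard textbook stand-in for this kind of sequential state estimation is the scalar Kalman filter: the model advances the state and inflates its error estimate, and each observation corrects the state in proportion to the relative error variances. This is only a minimal sketch of the general idea, not the scheme Farrell and Ioannou developed.

```python
# Minimal scalar Kalman-filter sketch: a generic stand-in for the
# "optimal state estimation" idea, not the authors' actual method.

def kalman_step(x, P, obs, obs_var, model_var, advance=lambda s: s):
    # Forecast step: advance the state and inflate its error variance.
    x_f = advance(x)
    P_f = P + model_var
    # Analysis step: weigh the observation against the forecast
    # according to their error variances.
    K = P_f / (P_f + obs_var)        # gain in [0, 1]
    x_a = x_f + K * (obs - x_f)
    P_a = (1.0 - K) * P_f
    return x_a, P_a

# Track a steady 10 m/s wind from noisy observations (made-up data).
x, P = 0.0, 100.0                    # poor initial guess, large uncertainty
for obs in (9.0, 11.0, 10.5, 9.5, 10.0):
    x, P = kalman_step(x, P, obs, obs_var=1.0, model_var=0.01)
print(round(x, 1))  # -> 10.0
```

Each update shrinks the state's error variance, which is the sense in which the model and observations are used together "most effectively."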
Doing this, Farrell showed that with just 1,000 degrees of freedom, he was able to identify the optimal state. He and his research partner, Petros Ioannou of the University of Athens Physics Department, took their findings to the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, England, which produces one of the best models in the world. Likely beginning in September, the ECMWF will run the new model in parallel with its current model to test its results. “The promise,” Farrell says, “is that there will be a two- to four-day, or maybe even more, increase in the accuracy of the forecast. The optimistic scenario is that it will give such a dramatic improvement right from the first that it will sweep the world. The more realistic [scenario] is that it will make some modest improvement but it will be years before we manage to get all of the bugs out of the implementation and get the thing to live up to its potential.” So – like weather-watchers everywhere – “we’re anxiously awaiting.”