Since the early days of weather prediction, a by-product has been nowcasting – determining the state of the atmosphere at day “0”. Using various observations of meteorological variables such as wind direction and speed, temperature and pressure, “synoptic weather maps” were drawn. Since about 1900, such maps have been available for some parts of the world, in particular Northern Europe and the adjacent Atlantic Ocean.
Later, when computers and upper-air data became available, this “analysis” process was refined: initial fields for numerical weather prediction were constructed from recent forecasts for day “0” combined with observational data on day “0”. These series of maps served the purpose of making weather prediction possible very well, but they were less useful for climatic purposes, in particular for determining long-term trends, simply because of so-called inhomogeneities in the series of maps: changing prediction models, changing densities of observational networks and the availability of new observational platforms (in particular satellite-based products) led to changing quality and detail in these “analyses”.
Thus, changes across time and space may be related to variations in weather patterns and their frequencies, but also to changing instrumentation and observational practices. Using an unchanged model for the entire period helped a little with this problem – the resulting products are called “re-analyses” – but the fundamental problem of inhomogeneities on regional and local scales remains.
However, when planning the maintenance of offshore structures, designing ships to withstand environmental stresses, assessing the risk of very high storm surges or the potential of wind energy, studying the variability of regional upwelling, or examining the implications of snow cover and depth for the permafrost thermal state, ecology and biochemical cycles – which are just a few of many more examples – realistic and homogeneous descriptions of the regionally detailed weather stream over a sufficiently long and homogeneous period are needed. Such descriptions cannot be constructed directly from observations, simply because of the scarcity of observational data and the regional-scale inhomogeneities of re-analyses.
Movie: Description of the midlatitude storm Christian and the tropical storm Haiyan in 2013, as provided by global downscaling. Data provided by Martina Schubert-Frisius and Frauke Feser, movie compiled by Felicia Brisc, Climate Visualization Laboratory, CEN/CLISAP, Hamburg University, Hamburg. Additional information can be found here.
Here, the idea of “downscaling” offers a solution. Downscaling means framing the regional weather stream as being “conditioned” by the large-scale atmospheric state and by regional and local physiographic detail, such as coastlines or mountain ranges. Practically, this is done by “nudging” the large-scale components of the model-generated fields towards those of the re-analyses, while the regional-scale components are left entirely to the model. Past experience with limited-area (regional) atmospheric models demonstrates that this concept works well as long as the quality of the description of the large scale is sufficient. In particular, when the large-scale state is described homogeneously by the re-analyses, a consistent and homogeneous description of the regional dynamics is generated.
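The nudging idea can be illustrated with a minimal one-dimensional sketch: transform both fields into spectral space, relax only the low-wavenumber (large-scale) coefficients of the model field toward those of the driving re-analysis, and leave the high-wavenumber (regional) coefficients untouched. This is only a toy illustration of the principle – the actual implementation in a global model operates on spherical harmonics, not a 1-D Fourier series, and the function names here are hypothetical.

```python
import numpy as np

def spectral_nudge_1d(model_field, driving_field, k_max, weight=1.0):
    """Relax the large-scale (low-wavenumber) spectral components of a
    model field toward a driving (re-analysis) field, leaving the
    small-scale components entirely to the model.

    model_field, driving_field : 1-D periodic fields on the same grid
    k_max  : highest wavenumber counted as "large scale"
    weight : nudging strength (1.0 = replace the large scales completely)
    """
    m_hat = np.fft.rfft(model_field)
    d_hat = np.fft.rfft(driving_field)
    nudged = m_hat.copy()
    # blend only the large-scale coefficients (wavenumbers 0..k_max)
    nudged[: k_max + 1] = ((1 - weight) * m_hat[: k_max + 1]
                           + weight * d_hat[: k_max + 1])
    return np.fft.irfft(nudged, n=model_field.size)

# toy example: the driving field sets the large scale, the model adds
# regional detail but has drifted at the large scale
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.sin(x)                              # "re-analysis" large scale
model = 0.5 * np.sin(x) + 0.3 * np.sin(20 * x)   # drifted large scale + detail
out = spectral_nudge_1d(model, driving, k_max=5, weight=1.0)
# out now carries the driving field's large scale and the model's detail
```

With full weight the large scale is taken entirely from the driving field, while the wavenumber-20 regional detail passes through unchanged – exactly the division of labour described above.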
The basic assumption of homogeneity of the analysis of the large-scale component seems to be fulfilled for most of the Northern Hemisphere in the NCEP re-analyses since about 1960, or maybe even earlier; for the Southern Hemisphere, homogeneity is achieved only after the advent of satellite data, i.e., since about 1980. Re-analyses such as the European ERA-Interim or the Japanese JRA-55 are available only since 1980 or later. In various studies, the added value of this procedure in describing regional and local detail has been demonstrated using regional atmospheric models which implement the large-scale constraining (“spectral nudging”) (Feser et al., 2011). In 2008, Yoshimura and Kanamitsu suggested implementing this constraining method in global models and demonstrated the potential of this approach. We have followed the example of Yoshimura and Kanamitsu and built spectral nudging into the high-resolution global climate model ECHAM6 (T255L95 – horizontal fields are expanded into spherical harmonics up to total wavenumber 255, which amounts to a grid-point distance of about 55 km; the vertical is discretized into 95 levels). [The data is publicly available here] Details, also on testing different configurations, are provided by Schubert-Frisius et al. (2017).
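The correspondence between the T255 truncation and a grid-point distance of roughly 55 km can be checked with a back-of-the-envelope calculation: the number of longitudes of the Gaussian grid associated with a triangular truncation TN is roughly 3N+1 under the classical quadratic de-aliasing rule (or 2N+2 for a linear grid). This sketch assumes those standard conventions; the exact spacing depends on the grid the model actually uses.

```python
EARTH_CIRCUMFERENCE_KM = 40075.0  # equatorial circumference

def grid_spacing_km(truncation, rule="quadratic"):
    """Approximate equatorial grid-point distance for a triangular
    spectral truncation T<truncation>, given a Gaussian-grid convention."""
    if rule == "quadratic":      # classical rule avoiding quadratic aliasing
        nlon = 3 * truncation + 1
    elif rule == "linear":       # coarser linear grid used by some models
        nlon = 2 * truncation + 2
    else:
        raise ValueError(f"unknown rule: {rule}")
    return EARTH_CIRCUMFERENCE_KM / nlon

spacing = grid_spacing_km(255)   # roughly 52 km at the equator
```

For T255 the quadratic rule gives about 52 km at the equator, consistent with the “about 55 km” quoted above; a linear grid would give closer to 78 km.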
In our recent paper (von Storch et al., 2017) we assessed the performance of this global analysis of regional dynamics in a number of regions with reduced or absent local data coverage, such as Central Siberia, the Bohai and Yellow Seas, Southwestern Africa and the South Atlantic. We compared the output of the global simulations with the available data during limited time periods and found satisfying results. A comparison with the output of regional simulations (with similar spatial resolution) also showed comparable skill in reproducing observed variability. Our cases demonstrate that spatially detailed reconstructions of the climate state and its change over the recent three to six decades homogeneously add supplementary information to existing observational data for mid-latitude and sub-tropical regions of the world.
We draw the following general conclusions from our study:
- Global homogeneous descriptions of regional variability can be constructed by processing large-scale variability and regional physiographic detail. This can be done with global models.
- Contemporary computer resources allow such simulations at a grid resolution of about 50 km, and in the foreseeable future this lower limit will go down to, say, 20 or even 10 km.
- Thus, the need to employ regional models at such resolutions will lessen. However, for simulations with an explicit description of atmospheric convection, which require grid resolutions of 7 km or much less, regional models will also be needed for the foreseeable future. This is particularly so for studies of the climate of urban conglomerates.
- The approach of dynamical downscaling allows describing past regional variability in regions with little or no local observational evidence.
- The method may also be applied for constructing scenarios of possible future regional climate change.
- The fundamental prerequisite for the success of this procedure is the homogeneity of the description of the large-scale variability. If this homogeneity is compromised, then no robust added value is to be expected from the downscaling effort.
- Feser, F., B. Rockel, H. von Storch, J. Winterfeldt, and M. Zahn, 2011: Regional climate models add value. Bull. Amer. Meteor. Soc. 92: 1181–1192
- Schubert-Frisius, M., F. Feser, H. von Storch, and S. Rast, 2017: Optimal spectral nudging for global dynamic downscaling. Mon. Wea. Rev.
- von Storch, H., F. Feser, B. Geyer, K. Klehmet, 李德磊 (Li D.), B. Rockel, M. Schubert-Frisius, N. Tim, and E. Zorita, 2017: Regional re-analysis without local data – exploiting the downscaling paradigm. J. Geophys. Res. – Atmospheres, DOI:10.1002/2016JD026332, early online
- Yoshimura, K., and M. Kanamitsu, 2008: Dynamical global downscaling of global reanalysis. Mon. Wea. Rev. 136: 2983–2998, DOI:10.1175/2008MWR2281.1
This study, “Regional reanalysis without local data: Exploiting the downscaling paradigm”, was recently published in the Journal of Geophysical Research: Atmospheres. You can find out more about the author Hans von Storch through his personal website, Academia.edu, and ResearchGate.