Not all climate models have the same value. A paper led by Professor Francisco J. Tapiador at the University of Castilla-La Mancha argues that, as climate models are increasingly used for societal applications, their reputations should be gauged in order to prune those that do not meet industrial-grade quality assurance (QA) requirements. Climate models developed by only a few people and unavailable to other scientists should thus be phased out in favor of widely used, community models.
A major reason for placing more confidence in the most widely used models is that the chances of detecting bugs, errors, and other issues increase sharply with the number of users. Another is scientific objectivity and independence: climate models validated by the same people who developed them (or by their collaborators) are naturally less trustworthy than those validated by a variety of independent, or even rival, scientists.
Recent questionable practices, such as the collusion affair at the journal Climate Research reported on PubPeer, cast a shadow over climate change research and should be avoided at all costs. Ongoing international efforts such as the Intergovernmental Panel on Climate Change (IPCC) reports cannot afford to include input from questionable models or to rest on doubtful papers.
For a long time, climate models were research tools aimed at increasing our understanding of the climate. Today, however, they have crossed the thin red line of pure science and stepped into the societal arena. There, the stakes are different, and so are the requirements: driving economic decisions and informing policy demands a different set of standards and procedures, Tapiador et al. argue.
The societal demands in this respect are clear: more accountability, transparent procedures, QA standards, and professional, dedicated task forces. However, not all research groups have the critical mass to cope with such stringent requirements. A major hindrance is the continuous need to improve the physics of the models at a pace commensurate with the growing needs of applications, such as hydrological operations. That effort can only be sustained by large institutions.
Tapiador et al. propose a three-level hierarchy of climate models. The first, basic tier corresponds to in-house models assembled by gluing together code from other scientists (“Frankenstein’s monster models”). These can be useful for pure research but are seldom good enough to support real-world decisions affecting people and property. The second, intermediate tier comprises models with greater transparency, accountability, and originality of code.
The third tier, the full-confidence models in Tapiador et al.’s classification, comprises those whose source code is original and publicly available, and whose simulation results are fully replicable by third parties. Results from such models must also be successfully replicated by several independent groups and published in reputable journals. Prototypical examples of full-confidence models are the Weather Research and Forecasting (WRF) model and the Community Earth System Model (CESM). Both have been used and scrutinized by hundreds of scientists over the years, a practice that minimizes the chances of faulty code, fraud, and malpractice.
The unavoidable consequence of increased scrutiny is a natural selection of climate models. Only those best suited to quality-control rules such as Tapiador’s would survive, so it is not hard to envision a near future with only a few, very complex climate models, developed and continuously improved at large research institutions. That would facilitate the implementation of QA standards and of more rigorous procedures for evaluating the quality of simulations that now have a definite impact on society. Moreover, as models become more complex, the need for computing resources escalates, so only large, well-funded centers could keep pace.
As in particle physics, where a few institutions centralize research and optimize resources, climate modeling may be on the path toward the “big science” framework. Communities such as astrophysicists have long addressed this problem by pooling most of their resources into major endeavors such as the Hubble Space Telescope or the Laser Interferometer Space Antenna, and freely distributing the resulting data to scientists for further analysis. Tapiador’s criteria are useful not only for gauging model reputation, pruning models for the IPCC reports, and avoiding malpractice and collusion, but also for optimizing economic resources, taking research to a new dimension, and coordinating the work of hundreds of scientists worldwide.
These findings are described in the article entitled “Global precipitation measurements for validating climate models,” recently published in the journal Atmospheric Research. This work was conducted by F. J. Tapiador and A. Navarro from the Universidad de Castilla-La Mancha, V. Levizzani from the Institute of Atmospheric Sciences and Climate, E. García-Ortega and J. L. Sánchez from the University of León, G. J. Huffman, C. Kidd, W. A. Petersen, W. K. Tao, and F. J. Turk from NASA, P. A. Kucera from the National Center for Atmospheric Research, C. D. Kummerow from Colorado State University, H. Masunaga from the Institute for Space-Earth Environmental Research, and R. Roca from OMP/LEGOS.
- F.J. Tapiador, A. Navarro, V. Levizzani, E. García-Ortega, G.J. Huffman, C. Kidd, P.A. Kucera, C.D. Kummerow, H. Masunaga, W.A. Petersen, R. Roca, J.-L. Sánchez, W.-K. Tao, F.J. Turk, 2017. Global precipitation measurements for validating climate models. Atmospheric Research 197, 1-20. ISSN 0169-8095. https://linkinghub.elsevier.com/retrieve/pii/S0169809517303861