The INGV Mediterranean Forecasting System (INGV-MFS) is currently one of the Monitoring and Forecasting Production Units of the European MyOcean2 project. The system was implemented in 2000 by the INGV National Group of Operational Oceanography (GNOO) and has been developed over the years through a number of European projects (see INGV Technical Report n. 163). The centre at INGV operates a near-real-time validation system (Cal/Val and MFS-MyOcean System Evaluation), which provides comparisons between model and observational data for the main ocean variables (SST, 3D temperature and salinity, sea level and currents) on dedicated websites, and it releases a Quality Information Document (QuID) on the MyOcean website that illustrates validation diagnostics and the quality of its products. Furthermore, it is a partner of the MyOcean work package dedicated to product quality and to the improvement of verification procedures. This work deals with the verification of analysis and forecast performance, briefly reviewing methods and measures developed in the context of weather forecasting and illustrating those currently used in ocean forecasting systems. We outline the history and development of verification methods and the common practices and protocols adopted by the international bodies that coordinate the various national ocean forecasting systems. We then give an example of the verification of sea level height operated by the INGV-MFS, reporting the ongoing improvements and illustrating an application of metrics to validate the assimilation of different sets of observations.

1. Quality control systems of forecasting

Assessing the quality of a forecast is essential both to improving it and to informing users about the confidence that can be placed in its products. However, designing an efficient and informative evaluation method, and defining what a 'good' forecast is, are not straightforward.
Murphy (1993) addressed the concept of 'goodness' of forecasts, identifying three types: 1) consistency: the degree of correspondence between the forecast and the forecasters' best judgement derived from their knowledge base; 2) quality: the degree of correspondence between the forecast and some "truth" or reference; and 3) value: the incremental benefits deriving from the use of the forecast by users for decision-making. Furthermore, he described the relationships between the three types of 'goodness', highlighting that high quality does not necessarily imply high value, and that learning about users' preferences and communicating the benefits of forecast products are crucial to making them valuable. The quality and value of a forecast are the results of verification and evaluation processes designed to meet users' needs. Quality assessment is also of interest to forecasters for improving the system itself, a process called validation. It consists in comparing the hindcasts of different versions of the forecasting system to test developments in one or more components; once validation has demonstrated an improvement of the forecast, the new version can be implemented operationally [Schiller and Brassington, 2011]. The results of these processes serve administrative, scientific (or diagnostic) and economic purposes, as classified by Brier and Allen (1951): determining the state of the art and the trends in the quality of operational forecast performance over time serves to justify and obtain funding for training and equipment; assessing the strengths and weaknesses of the forecast allows a better understanding of the underlying physical processes, which in turn improves forecasting models; and tailoring the types of products released and the verification schemes to users' needs helps to communicate forecast outcomes effectively, enhancing their value.

1.1. Historical development of verification methods

Techniques of forecast verification have been developed since the end of the nineteenth century, in the context of weather forecasting, thanks to a curious event known as the "Finley affair" [Murphy, 1996]. Although weather forecasting services had been operational since the 1850s in the USA and Western Europe, attention to the question of forecast verification arose only after 1884, triggered by a paper on an experimental tornado forecasting program published in the American Meteorological Journal by John Finley (a sergeant of the U.S. Army). In the paper, he assessed the quality of his forecasts with a yes/no contingency table, measuring performance as the percentage of correct tornado/no-tornado forecasts and arriving at an accuracy of 96.6% [Finley, 1884].
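Finley's 96.6% figure can be reproduced from the standard 2x2 contingency table of his forecasts. The sketch below uses the counts reported by Murphy (1996) for Finley's data; they are included here only as an illustration of the accuracy measure, not as part of the original text.

```python
# Finley's 2x2 tornado verification table (counts as reported by
# Murphy, 1996, for Finley's 1884 program; illustrative values).
hits = 28                 # tornado forecast, tornado observed
false_alarms = 72         # tornado forecast, none observed
misses = 23               # no-tornado forecast, tornado observed
correct_negatives = 2680  # no-tornado forecast, none observed

total = hits + false_alarms + misses + correct_negatives

# Finley's measure: the fraction of correct yes/no forecasts.
accuracy = (hits + correct_negatives) / total
print(f"accuracy = {accuracy:.3f}")  # prints: accuracy = 0.966

# Contemporary critics noted that always forecasting "no tornado"
# scores even higher on this measure, which motivated the search
# for better verification scores.
always_no_accuracy = (false_alarms + correct_negatives) / total
print(f"always-no accuracy = {always_no_accuracy:.3f}")
```

The second computation illustrates why simple percent-correct is a poor score for rare events: the trivial "never forecast the event" strategy outperforms Finley's forecasts on it.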

Published: 2024-02-12

Quality control system of the Mediterranean Forecasting System products

Marina Tonani, Alessandro Grandi, Jenny Pistoia, Paolo Oddo, Giacomo Girardi, Massimiliano Drudi
