Inflow forecasts play an essential role in the management of hydropower reservoirs. Forecasts help operators schedule power generation in advance to maximise economic value, mitigate downstream flood risk, and meet environmental requirements. The horizon of operational inflow forecasts is often limited to around two weeks ahead, marking the predictability barrier of deterministic weather forecasts. Reliable inflow forecasts in the sub-seasonal to seasonal (S2S) range would allow operators to take proactive action to mitigate the risks of adverse weather conditions, thereby improving water management and increasing revenue. This study outlines a method for deriving skilful S2S inflow forecasts using a case study reservoir in the Scottish Highlands. We generate ensemble inflow forecasts by training a linear regression model of observed inflow onto S2S ensemble precipitation predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF). Subsequently, post-processing techniques from ensemble model output statistics (EMOS) are applied to derive calibrated S2S probabilistic inflow forecasts, without the application of a separate hydrological model. We find that the S2S probabilistic inflow forecasts retain skill relative to climatological forecasts up to six weeks ahead, and that they are more skilful in winter than in summer. The forecasts, however, struggle to predict high summer inflows, even at short lead times. The potential for the S2S probabilistic inflow forecasts to improve water management and deliver increased economic value is confirmed using a stylised cost model. While applied to hydropower forecasting, the results and methods presented here are relevant to the broader fields of water management and S2S forecasting applications.
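The regression-plus-EMOS pipeline described in this abstract can be sketched end to end. Below is a minimal illustration on synthetic data; the variable names, the Gaussian predictive distribution, and the synthetic inflow relation are all assumptions of the sketch, not details from the study. The idea is standard Gaussian EMOS: a predictive mean linear in the ensemble mean and a predictive variance linear in the ensemble variance, with coefficients fitted by minimising the average continuous ranked probability score (CRPS).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic stand-in data (the study uses observed inflow and ECMWF S2S
# ensemble precipitation): ensemble forecasts of shape (days, members).
n_days, n_members = 500, 11
ens = rng.gamma(2.0, 2.0, size=(n_days, n_members))
inflow = 1.5 * ens.mean(axis=1) + rng.normal(0.0, 1.0, size=n_days)

ens_mean = ens.mean(axis=1)
ens_var = ens.var(axis=1)

def gaussian_crps(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) predictive distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def objective(params):
    # Gaussian EMOS: mean linear in the ensemble mean, variance linear
    # in the ensemble variance; fitted by minimum-CRPS estimation.
    a, b, c, d = params
    mu = a + b * ens_mean
    sigma = np.sqrt(np.abs(c) + np.abs(d) * ens_var) + 1e-6
    return gaussian_crps(mu, sigma, inflow).mean()

res = minimize(objective, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
a, b, c, d = res.x
print("fitted mean relation: inflow ~ %.2f + %.2f * ensemble mean" % (a, b))
```

Because the synthetic inflow is generated with slope 1.5 and no offset, the fitted mean relation should recover approximately those values; in a real application the training data would be historical inflow observations paired with archived reforecasts.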
Existing methods for diagnosing predictability in climate indices often make a number of unjustified assumptions about the climate system that can lead to misleading conclusions. We present a flexible family of state-space models capable of separating the effects of external forcing on inter-annual time scales from long-term trends and decadal variability, short-term weather noise, observational errors, and changes in autocorrelation. Standard potential predictability models only estimate the fraction of the total variance in the index attributable to external forcing. In addition, our methodology allows us to partition individual seasonal means into forced, slow, fast and error components, and changes in the predictable signal within the season can also be estimated. The model can also be used in forecast mode to assess both intra- and inter-seasonal predictability. We apply the proposed methodology to a North Atlantic Oscillation index for the years 1948-2017. Around 60% of the inter-annual variance in the December-January-February mean North Atlantic Oscillation is attributable to external forcing, and 8% to trends on longer time scales. In some years the external forcing remains relatively constant throughout the winter season; in others it changes during the season. Skilful statistical forecasts of the December-January-February mean North Atlantic Oscillation are possible from the end of November onward, and predictability extends into March. Statistical forecasts of the December-January-February mean achieve a correlation with the observations of 0.48.
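The core operation in such state-space models is the separation of a slowly varying signal from fast noise via Kalman filtering. As a minimal stand-in (a scalar local-level model with variances assumed known; the paper's model family is far richer, with trends, observation error and changing autocorrelation), the filter looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy index: a slow random-walk signal plus fast "weather" noise.
# Variances q (slow) and r (fast) are assumed known here for illustration.
T, q, r = 200, 0.05, 1.0
slow = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
y = slow + rng.normal(0.0, np.sqrt(r), T)

# Kalman filter for the local-level model:
#   state: s_t = s_{t-1} + w_t,  w_t ~ N(0, q)
#   obs:   y_t = s_t + v_t,      v_t ~ N(0, r)
s_hat, P = 0.0, 1.0
filtered = np.empty(T)
for t in range(T):
    P = P + q                            # predict step
    K = P / (P + r)                      # Kalman gain
    s_hat = s_hat + K * (y[t] - s_hat)   # update with observation
    P = (1 - K) * P
    filtered[t] = s_hat

# The filtered state tracks the slow component much better than raw y does.
err_filtered = np.mean((filtered - slow) ** 2)
err_raw = np.mean((y - slow) ** 2)
print(err_filtered, err_raw)
```

In the paper's setting the state vector would additionally carry trend, decadal and forced components, and the variances would be estimated rather than assumed, but the filtering recursion is the same.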
Seasonal time series forecasting remains a challenging problem because of the long-range dependencies induced by seasonality. In this paper, we propose a two-stage framework for forecasting univariate seasonal time series. The first stage explicitly learns the long-range time series structure in a time window extending beyond the forecast horizon. By incorporating the learned long-range structure, the second stage can enhance prediction accuracy within the forecast horizon. In both stages, we integrate an auto-regressive model with neural networks to capture both linear and non-linear characteristics of the time series. Our framework achieves state-of-the-art performance on the M4 Competition hourly dataset. In particular, we show that incorporating the intermediate results generated in the first stage into existing forecast models can effectively enhance their prediction performance.
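The two-stage idea can be illustrated on a toy hourly-like series. This is not the authors' architecture: the neural components are replaced here by the simplest possible stand-ins, a per-phase seasonal average for the long-range structure (stage 1) and a linear AR(1) for the remaining short-range dynamics (stage 2).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy series with a period-24 seasonal pattern plus AR(1) noise.
period, n = 24, 24 * 60
season = 10 * np.sin(2 * np.pi * np.arange(n) / period)
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal()
y = season + noise

train, test = y[:-period], y[-period:]

# Stage 1: learn the long-range (seasonal) structure over a window
# extending beyond the forecast horizon, here by averaging each phase.
phase_mean = train.reshape(-1, period).mean(axis=0)

# Stage 2: model the residual short-range dynamics; an AR(1) fitted by
# least squares stands in for the paper's AR-plus-neural-network stage.
resid = train - np.tile(phase_mean, len(train) // period)
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

# Forecast the next "day": seasonal structure + decaying AR continuation.
forecast = phase_mean + resid[-1] * phi ** np.arange(1, period + 1)
print("estimated AR coefficient:", round(phi, 2))
```

The stage-1 output (`phase_mean`) is exactly the kind of intermediate result the abstract describes feeding into other forecast models; here the true AR coefficient is 0.7, so the stage-2 estimate should land close to it.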
Prior to adjustment, accounting constraints between national accounts data sets are frequently violated. Benchmarking is the procedure used by economic agencies to make such data sets consistent. It typically involves adjusting a high-frequency time series (e.g. quarterly data) so that it becomes consistent with a lower-frequency version (e.g. annual data). Various methods have been developed to approach this problem of inconsistency between data sets. This paper introduces a new statistical procedure, namely wavelet benchmarking. Wavelet properties allow high- and low-frequency processes to be analysed jointly, and we show that benchmarking can be formulated and approached succinctly in the wavelet domain. Furthermore, the time and frequency localisation properties of wavelets are ideal for handling more complicated benchmarking problems. The versatility of the procedure is demonstrated in simulation studies, where we provide evidence that it substantially outperforms currently used methods. Finally, we apply this novel method of wavelet benchmarking to official Office for National Statistics (ONS) data.
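The accounting constraint that benchmarking enforces can be stated compactly in code. The naive pro-rata adjustment below is shown purely to illustrate the constraint (it is not the wavelet method, which distributes discrepancies far more smoothly rather than introducing jumps at year boundaries); the numbers are made up.

```python
import numpy as np

# Quarterly series and annual benchmarks that disagree: each year's four
# quarters should sum to that year's annual figure, but they do not.
quarterly = np.array([10.0, 12.0, 11.0, 13.0,   9.0, 10.0, 12.0, 11.0])
annual = np.array([50.0, 40.0])

q = quarterly.reshape(-1, 4)
annual_sums = q.sum(axis=1)          # [46, 42] -- constraint violated

# Pro-rata benchmarking: scale each year's quarters by a common factor
# so the annual constraint holds exactly.
adjusted = (q * (annual / annual_sums)[:, None]).ravel()

print(adjusted.reshape(-1, 4).sum(axis=1))
```

Pro-rata scaling satisfies the constraint but can create artificial steps between December and January of adjacent years; methods such as Denton adjustment, and the wavelet formulation proposed in the paper, are designed to avoid exactly that artefact.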
Probabilistic weather forecasts from ensemble systems require statistical postprocessing to yield calibrated and sharp predictive distributions. This paper presents an area-covering postprocessing method for ensemble precipitation predictions. We rely on the ensemble model output statistics (EMOS) approach, which generates probabilistic forecasts with a parametric distribution whose parameters depend on (statistics of) the ensemble prediction. A case study with daily precipitation predictions across Switzerland highlights that postprocessing at observation locations indeed improves high-resolution ensemble forecasts, with a 4.5% CRPS reduction on average for a lead time of 1 day. Our main aim is to achieve such an improvement without binding the model to stations, by leveraging topographical covariates. Specifically, regression coefficients are estimated by weighting the training data according to the topographical similarity between their station of origin and the prediction location. In our case study, this approach is found to reproduce the performance of the local model without using local historical data for calibration. We further identify one key difficulty: postprocessing often degrades the performance of the ensemble forecast during summer and early autumn. To mitigate this, we additionally estimate on the training set whether postprocessing at a specific location is expected to improve the prediction; if not, the direct model output is used. This extension reduces the CRPS of the topographical model by up to a further 1.7% on average, at the price of a slight degradation in calibration. In this case, the highest improvement is achieved for a lead time of 4 days.
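The similarity-weighting idea can be sketched as follows. Everything here is an illustrative assumption: a single topographic covariate (elevation), a Gaussian kernel, and a slope-only regression, whereas the paper uses several topographical covariates inside a full EMOS fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stations whose forecast-observation relation varies with
# elevation (km): obs = (0.5 + 0.4 * elev) * ensemble_mean + noise.
n_stations, n_days = 20, 100
elev = rng.uniform(0.2, 3.0, n_stations)
ens_mean = rng.gamma(2.0, 2.0, (n_stations, n_days))
obs = (0.5 + 0.4 * elev)[:, None] * ens_mean + rng.normal(0, 0.5, (n_stations, n_days))

def weighted_slope(target_elev, bandwidth=0.3):
    """Regression slope at a prediction location, estimated by weighting
    training stations with a Gaussian kernel on topographic distance."""
    w = np.exp(-0.5 * ((elev - target_elev) / bandwidth) ** 2)
    W = np.repeat(w, n_days)              # one weight per station, all days
    x, y = ens_mean.ravel(), obs.ravel()
    return np.sum(W * x * y) / np.sum(W * x * x)

# Low- and high-elevation prediction locations recover different slopes,
# without any data observed at those exact locations.
print(weighted_slope(0.5), weighted_slope(2.5))
```

The point of the construction is that the fitted coefficients vary smoothly over the covariate space, so the model can be evaluated anywhere in the domain, which is what makes the postprocessing "area-covering".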
The prediction of the weather at subseasonal-to-seasonal (S2S) timescales is dependent on both initial and boundary conditions. An open question is how to best initialize a relatively small-sized ensemble of numerical model integrations to produce reliable forecasts at these timescales. Reliability in this case means that the statistical properties of the ensemble forecast are consistent with the actual uncertainties about the future state of the geophysical system under investigation. In the present work, a method is introduced to construct initial conditions that produce reliable ensemble forecasts by projecting onto the eigenfunctions of the Koopman or the Perron-Frobenius operators, which describe the time-evolution of observables and probability distributions of the system dynamics, respectively. These eigenfunctions can be approximated from data by using the Dynamic Mode Decomposition (DMD) algorithm. The effectiveness of this approach is illustrated in the framework of a low-order ocean-atmosphere model exhibiting multiple characteristic timescales, and is compared to other ensemble initialization methods based on the Empirical Orthogonal Functions (EOFs) of the model trajectory and on the backward and covariant Lyapunov vectors of the model dynamics. Projecting initial conditions onto a subset of the Koopman or Perron-Frobenius eigenfunctions that are characterized by time scales with fast-decaying oscillations is found to produce highly reliable forecasts at all lead times investigated, ranging from one week to two months. Reliable forecasts are also obtained with the adjoint covariant Lyapunov vectors, which are the eigenfunctions of the Koopman operator in the tangent space. The advantages of these different methods are discussed.
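The DMD approximation of Koopman eigenvalues mentioned above can be illustrated on a known linear system, where the recovery is exact. The toy matrix and trajectory length are arbitrary choices for the sketch; this is plain "exact DMD" on a single trajectory, with the full-rank SVD standing in for the truncated subspace used in practice.

```python
import numpy as np

rng = np.random.default_rng(4)

# Snapshots of a known linear system x_{k+1} = A x_k; DMD should recover
# the eigenvalues of A from the data alone.
A = np.array([[0.9, -0.2],
              [0.1,  0.8]])
x = rng.normal(size=2)
snaps = [x]
for _ in range(50):
    x = A @ x
    snaps.append(x)
X = np.array(snaps).T                 # shape (2, 51)
X0, X1 = X[:, :-1], X[:, 1:]

# Exact DMD: project the least-squares propagator X1 X0^+ onto the POD
# basis of X0, then take its eigendecomposition.
U, s, Vt = np.linalg.svd(X0, full_matrices=False)
Atilde = U.conj().T @ X1 @ Vt.conj().T @ np.diag(1.0 / s)
eigvals, eigvecs = np.linalg.eig(Atilde)

print(np.sort_complex(eigvals))
print(np.sort_complex(np.linalg.eigvals(A)))
```

In the ensemble-initialization setting, one would project candidate initial perturbations onto a subset of the corresponding DMD modes (those with fast-decaying oscillatory eigenvalues, per the abstract) rather than merely inspecting the spectrum.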