
An Introduction to Applications of Wavelet Benchmarking with Seasonal Adjustment

Submitted by: John Aston
Publication date: 2014
Research field: Mathematical Statistics
Language: English





Prior to adjustment, accounting conditions between national accounts data sets are frequently violated. Benchmarking is the procedure used by economic agencies to make such data sets consistent. It typically involves adjusting a high-frequency time series (e.g. quarterly data) so it becomes consistent with a lower-frequency version (e.g. annual data). Various methods have been developed to approach this problem of inconsistency between data sets. This paper introduces a new statistical procedure, namely wavelet benchmarking. Wavelet properties allow high- and low-frequency processes to be analysed jointly, and we show that benchmarking can be formulated and approached succinctly in the wavelet domain. Furthermore, the time and frequency localisation properties of wavelets are ideal for handling more complicated benchmarking problems. The versatility of the procedure is demonstrated using simulation studies, where we provide evidence that it substantially outperforms currently used methods. Finally, we apply this novel method of wavelet benchmarking to official Office for National Statistics (ONS) data.
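As a rough illustration of why the wavelet domain is a natural place to impose the accounting constraint (a minimal sketch under simplifying assumptions, not the paper's full procedure): for a quarterly series, the level-2 Haar approximation coefficients carry the annual sums, so overwriting them with the annual benchmarks enforces consistency exactly while the detail coefficients retain the sub-annual movements of the preliminary series. The data and function names below are invented for the example.

```python
# Minimal sketch, assuming a quarterly series whose annual sums must match
# given benchmarks. With the Haar wavelet, the level-2 approximation
# coefficient for year j equals (sum of that year's four quarters) / 2, so
# replacing those coefficients with benchmark/2 enforces the constraint while
# the detail coefficients keep the preliminary sub-annual movements.
import numpy as np
import pywt  # PyWavelets

def haar_benchmark(quarterly, annual_benchmarks):
    """Adjust a quarterly series so its annual sums equal the benchmarks."""
    quarterly = np.asarray(quarterly, dtype=float)
    assert len(quarterly) == 4 * len(annual_benchmarks)
    cA2, cD2, cD1 = pywt.wavedec(quarterly, "haar", level=2)
    cA2 = np.asarray(annual_benchmarks, dtype=float) / 2.0  # fix annual sums
    return pywt.waverec([cA2, cD2, cD1], "haar")

# Toy data: a noisy quarterly indicator whose annual sums drift from the
# (hypothetical) annual benchmarks.
rng = np.random.default_rng(0)
true_q = 100 + np.cumsum(rng.normal(0, 1, 16))        # 4 years of quarters
benchmarks = true_q.reshape(4, 4).sum(axis=1)         # annual totals
preliminary = true_q + rng.normal(0, 2, 16)           # inconsistent indicator
adjusted = haar_benchmark(preliminary, benchmarks)
print(np.allclose(adjusted.reshape(4, 4).sum(axis=1), benchmarks))  # True
```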




Read also

Inflow forecasts play an essential role in the management of hydropower reservoirs. Forecasts help operators schedule power generation in advance to maximise economic value, mitigate downstream flood risk, and meet environmental requirements. The horizon of operational inflow forecasts is often limited to roughly two weeks ahead, marking the predictability barrier of deterministic weather forecasts. Reliable inflow forecasts in the sub-seasonal to seasonal (S2S) range would allow operators to take proactive action to mitigate the risks of adverse weather conditions, thereby improving water management and increasing revenue. This study outlines a method of deriving skilful S2S inflow forecasts using a case-study reservoir in the Scottish Highlands. We generate ensemble inflow forecasts by training a linear regression model for the observed inflow onto S2S ensemble precipitation predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF). Subsequently, post-processing techniques from Ensemble Model Output Statistics are applied to derive calibrated S2S probabilistic inflow forecasts, without the application of a separate hydrological model. We find that the S2S probabilistic inflow forecasts hold skill relative to climatological forecasts up to six weeks ahead. The inflow forecasts hold greater skill during winter than during summer. The forecasts, however, struggle to predict high summer inflows, even at short lead times. The potential for the S2S probabilistic inflow forecasts to improve water management and deliver increased economic value is confirmed using a stylised cost model. While applied to hydropower forecasting, the results and methods presented here are relevant to the broader fields of water management and S2S forecasting applications.
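The two-stage recipe described above can be sketched roughly as follows (a hedged illustration with synthetic data, not the study's actual hindcast pipeline): the regression of observed inflow onto the ensemble-mean precipitation and the calibration step collapse here into a single non-homogeneous Gaussian regression in the spirit of Ensemble Model Output Statistics, fitted by minimising the CRPS. All names and numbers are assumptions.

```python
# Hedged sketch: inflow | ensemble ~ N(a + b*ens_mean, softplus(c + d*ens_spread)),
# with the parameters chosen to minimise the closed-form CRPS of a Gaussian forecast.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def gaussian_crps(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_spread, obs):
    """Return (a, b, c, d) minimising the mean CRPS over the training set."""
    def objective(theta):
        a, b, c, d = theta
        mu = a + b * ens_mean
        sigma = np.log1p(np.exp(c + d * ens_spread))   # softplus keeps sigma > 0
        return gaussian_crps(obs, mu, sigma).mean()
    return minimize(objective, x0=[0.0, 1.0, 0.0, 0.1], method="Nelder-Mead").x

# Synthetic stand-ins for S2S precipitation hindcasts and observed inflow
rng = np.random.default_rng(1)
ens = rng.gamma(2.0, 3.0, size=(200, 11))               # 200 starts, 11 members
obs = 0.8 * ens.mean(axis=1) + rng.normal(0, 2, 200)    # "observed" inflow
a, b, c, d = fit_emos(ens.mean(axis=1), ens.std(axis=1), obs)
print("fitted EMOS coefficients:", round(a, 2), round(b, 2), round(c, 2), round(d, 2))
```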
Prediction rule ensembles (PREs) are a relatively new statistical learning method that aims to strike a balance between predictive accuracy and interpretability. Starting from a decision tree ensemble, such as a boosted tree ensemble or a random forest, PREs retain a small subset of tree nodes in the final predictive model. These nodes can be written as simple rules of the form: if [condition] then [prediction]. As a result, PREs are often much less complex than full decision tree ensembles, while they have been found to provide similar predictive accuracy in many situations. The current paper introduces the methodology and shows how PREs can be fitted using the R package pre through several real-data examples from psychological research. The examples also illustrate a number of features of the pre package that may be particularly useful for applications in psychology: support for categorical, multivariate and count responses, application of (non-)negativity constraints, inclusion of confirmatory rules, and standardized variable importance measures.
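The paper's software is the R package pre; purely to illustrate the general recipe it implements, the Python sketch below (with assumed settings and an off-the-shelf example dataset) grows a small boosted tree ensemble, turns each root-to-leaf path into an if-then rule, encodes the rules as 0/1 features, and lets an L1 penalty keep only a small subset, RuleFit-style.

```python
# Hedged Python analogue of a prediction rule ensemble (not the pre package):
# boosted trees generate candidate rules, an L1-penalised logistic regression
# keeps a small subset of them.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def extract_rules(tree):
    """Each root-to-leaf path of a fitted tree as a list of (feature, threshold, go_left)."""
    t, rules = tree.tree_, []
    def walk(node, conds):
        if t.children_left[node] == -1:                 # leaf: path complete
            if conds:
                rules.append(conds)
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], conds + [(f, thr, True)])
        walk(t.children_right[node], conds + [(f, thr, False)])
    walk(0, [])
    return rules

def rule_matrix(X, rules):
    """0/1 design matrix: entry (i, r) says whether sample i satisfies rule r."""
    cols = []
    for conds in rules:
        mask = np.ones(len(X), dtype=bool)
        for f, thr, left in conds:
            mask &= (X[:, f] <= thr) if left else (X[:, f] > thr)
        cols.append(mask)
    return np.column_stack(cols).astype(float)

X, y = load_breast_cancer(return_X_y=True)              # example data only
gbm = GradientBoostingClassifier(n_estimators=20, max_depth=2, random_state=0).fit(X, y)
rules = [r for est in gbm.estimators_.ravel() for r in extract_rules(est)]
R = rule_matrix(X, rules)
sparse_fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(R, y)
kept = [r for r, w in zip(rules, sparse_fit.coef_[0]) if w != 0]
print(f"{len(rules)} candidate rules -> {len(kept)} retained in the final model")
```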
L. Yu, Z. Lu, P. C. Nathan (2020)
Statistical and computational methods are widely used in today's scientific studies. Using female fertility potential in childhood cancer survivors as an example, we illustrate how these methods can be used to extract insight regarding biological processes from noisy observational data in order to inform decision making. We start by contextualizing the computational methods with the working example: modelling the risk of acute ovarian failure in female childhood cancer survivors to quantify the risk of permanent ovarian failure due to exposure to lifesaving but nonetheless toxic cancer treatments. This is followed by a description of the general framework of classification problems. We provide an overview of the modelling algorithms employed in our example, including one classic model (logistic regression) and two popular modern learning methods (random forest and support vector machines). Using the working example, we show the general steps of data preparation for modelling, variable selection steps for the classic model, and how model performance might be improved using visualization tools. We end with a note on the importance of model evaluation.
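A minimal sketch of the kind of three-model comparison described above, using synthetic data in place of the survivor cohort and cross-validated AUC as the performance measure (all settings are illustrative assumptions):

```python
# Illustrative sketch: compare the three classifiers mentioned above with
# 5-fold cross-validated AUC on synthetic, imbalanced data (a stand-in for
# the real treatment-exposure covariates and outcome).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "support vector machine": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>23s}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```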
Yan Wang, Xiaowei Yue, Rui Tuo (2019)
Estimation of the model parameters of computer simulators, also known as calibration, is an important topic in many engineering applications. In this paper, we consider the calibration of computer model parameters with the help of engineering design knowledge. We introduce the concept of sensible (calibration) variables. Sensible variables are model parameters which are sensitive in the engineering modeling and whose optimal values differ from the engineering design values. We propose an effective calibration method to identify and adjust the sensible variables with limited physical experimental data. The methodology is applied to a composite fuselage simulation problem.
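As a toy illustration only (the simulator, data and names are invented, and the paper's procedure for identifying sensible variables is more involved than this), the sketch below calibrates two simulator parameters to a handful of physical measurements by least squares and flags the parameter whose calibrated value moves away from its engineering design value:

```python
# Toy calibration sketch (invented simulator and data): fit the simulator's
# parameters to sparse physical measurements by least squares, then compare
# the calibrated values with the engineering design values.
import numpy as np
from scipy.optimize import least_squares

def simulator(x, theta):
    """Hypothetical computer model with two calibration parameters."""
    return theta[0] * np.sin(x) + theta[1] * x

design_theta = np.array([1.0, 0.5])                  # engineering design values
x_phys = np.linspace(0.0, 3.0, 8)                    # limited physical runs
rng = np.random.default_rng(2)
y_phys = 1.4 * np.sin(x_phys) + 0.5 * x_phys + rng.normal(0, 0.05, 8)

fit = least_squares(lambda th: simulator(x_phys, th) - y_phys, x0=design_theta)
shift = np.abs(fit.x - design_theta)
print("calibrated:", fit.x.round(3), "| shift from design values:", shift.round(3))
# The first parameter moves well away from its design value while the second
# stays put, so only the first would be flagged as a sensible variable.
```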
Existing methods for diagnosing predictability in climate indices often make a number of unjustified assumptions about the climate system that can lead to misleading conclusions. We present a flexible family of state-space models capable of separating the effects of external forcing on inter-annual time scales from long-term trends, decadal variability, short-term weather noise, observational errors and changes in autocorrelation. Standard potential predictability models only estimate the fraction of the total variance in the index attributable to external forcing. In addition, our methodology allows us to partition individual seasonal means into forced, slow, fast and error components. Changes in the predictable signal within the season can also be estimated. The model can also be used in forecast mode to assess both intra- and inter-seasonal predictability. We apply the proposed methodology to a North Atlantic Oscillation index for the years 1948-2017. Around 60% of the inter-annual variance in the December-January-February mean North Atlantic Oscillation is attributable to external forcing, and 8% to trends on longer time scales. In some years the external forcing remains relatively constant throughout the winter season; in others it changes during the season. Skillful statistical forecasts of the December-January-February mean North Atlantic Oscillation are possible from the end of November onward, and predictability extends into March. Statistical forecasts of the December-January-February mean achieve a correlation with the observations of 0.48.
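A heavily simplified stand-in for the state-space idea (the paper's model separates several more components; the series and settings below are synthetic assumptions): a local-level unobserved-components model splits a yearly index into a slowly varying component plus noise, and the variance share of the smoothed slow component plays the role of the potential-predictability fraction.

```python
# Simplified sketch: a local-level unobserved-components model separates a
# synthetic yearly index into a slowly varying component plus noise; the
# variance share of the smoothed level stands in for the fraction of variance
# attributable to slow/forced processes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_years = 70
slow = np.cumsum(rng.normal(0, 0.15, n_years))       # slow component stand-in
index = slow + rng.normal(0, 1.0, n_years)           # plus fast "weather" noise

model = sm.tsa.UnobservedComponents(index, level="local level")
res = model.fit(disp=False)
level = res.smoothed_state[0]                        # smoothed slow component
print(f"variance share of the slow component: {level.var() / index.var():.2f}")
```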