This paper sets out a forecasting method that employs a mixture of parametric functions to capture the pattern of fertility with respect to age. The overall level of cohort fertility is decomposed over the range of fertile ages using a mixture of parametric density functions. The level of fertility and the parameters describing the shape of the fertility curve are projected forward using time series methods. The model is estimated within a Bayesian framework, allowing predictive distributions of future fertility rates to be produced that naturally incorporate both time series and parametric uncertainty. A number of choices are possible for the precise form of the functions used in the two-component mixtures. The performance of several model variants is tested on data from four countries: England and Wales, the USA, Sweden, and France. The former two exhibit multi-modality in their fertility rate curves as a function of age, while the latter two are largely uni-modal. The models are estimated using Hamiltonian Monte Carlo and the `stan` software package on data covering the period up to 2006, with the period 2007-2016 held back for assessment purposes. Forecasting performance is found to be comparable to that of other models identified in the literature as producing accurate fertility forecasts.
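As a rough sketch of the decomposition described above (the abstract does not specify the component densities; the gamma components, the minimum fertile age, and every parameter value below are hypothetical illustrations):

```python
import numpy as np
from scipy import stats

def fertility_curve(ages, tfr, w, params1, params2):
    """Age-specific fertility rates modeled as the overall fertility level
    (TFR) times a two-component mixture density over the fertile ages.
    Illustrative sketch only; the component forms are assumptions."""
    a0 = 12.0                      # assumed minimum fertile age (hypothetical)
    x = ages - a0                  # shift so the density support starts at a0
    d1 = stats.gamma.pdf(x, *params1)   # "early" childbearing component
    d2 = stats.gamma.pdf(x, *params2)   # "late" childbearing component
    return tfr * (w * d1 + (1 - w) * d2)

ages = np.arange(13, 50)
# Hypothetical parameters producing two humps, mimicking the bimodal
# age schedules the abstract notes for England & Wales and the USA.
rates = fertility_curve(ages, tfr=1.8, w=0.4,
                        params1=(4.0, 0, 2.5),    # shape, loc, scale
                        params2=(12.0, 0, 1.5))
```

Because the mixture weights sum to one, the single-year rates integrate back to the TFR, which is what lets the level and the shape parameters be projected as separate time series.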
We consider the problem of probabilistic projection of the total fertility rate (TFR) for subnational regions. We seek a method that is consistent with the UN's recently adopted Bayesian method for probabilistic TFR projections for all countries, and that works well for all countries. We assess various possible methods using subnational TFR data for 47 countries. We find that the method that performs best, both in out-of-sample predictive performance and in reproducing the within-country correlation in TFR, is one that scales the national trajectory by a region-specific scale factor that is allowed to vary slowly over time. This supports the hypothesis of Watkins (1990, 1991) that within-country TFR converges over time in response to country-specific factors, and extends the Watkins hypothesis to the last 50 years and to a much wider range of countries around the world.
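The best-performing method can be caricatured as follows; the AR(1) dynamics for the scale factor and the persistence and shock parameters are illustrative placeholders, not the paper's fitted model:

```python
import numpy as np

def project_regional_tfr(national_traj, scale0, rho=0.95, sigma=0.01,
                         n_sims=1000, seed=None):
    """Probabilistic regional TFR paths: scale a national TFR trajectory
    by a region-specific factor that drifts slowly, here via a mean-one
    AR(1) process (a sketch; all dynamics and values are hypothetical)."""
    rng = np.random.default_rng(seed)
    T = len(national_traj)
    scales = np.empty((n_sims, T))
    scales[:, 0] = scale0
    for t in range(1, T):
        # "Varies slowly": strong persistence (rho near 1), small shocks.
        scales[:, t] = (1 + rho * (scales[:, t - 1] - 1)
                        + rng.normal(0, sigma, n_sims))
    return scales * national_traj   # each row is one simulated regional path

national = np.linspace(1.9, 1.7, 21)   # hypothetical national projection
regional = project_regional_tfr(national, scale0=1.1, seed=0)
low, high = np.percentile(regional[:, -1], [5, 95])   # 90% interval, final year
```

Pulling the scale factor toward one encodes the Watkins-style convergence of regions toward the national trajectory, while the shocks keep region-specific deviations alive in the predictive distribution.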
Massive amounts of information about individual (household, small and medium enterprise) consumption are now available thanks to new metering technologies and the smart grid. Two major uses of these data are load profiling and forecasting at different scales on the grid. Customer segmentation based on load classification is a natural approach for these purposes. We propose here a new methodology based on mixtures of high-dimensional regression models. The novelty of our approach is that we focus on uncovering classes or clusters corresponding to different regression models. These classes can then be exploited for profiling, for forecasting within each class, or for bottom-up forecasts, in a unified view. To demonstrate the feasibility of our approach, we consider a real dataset of 4,225 Irish individual consumers' meters, each with 48 half-hourly meter reads per day over one year, from 1 January 2010 to 31 December 2010.
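A toy version of the core idea, a mixture of regressions fitted by EM so that each cluster corresponds to its own regression model, might look like this (plain least squares stands in for the paper's high-dimensional estimators, and all data are simulated):

```python
import numpy as np

def em_mixture_regression(X, y, K=2, n_iter=50, seed=None):
    """EM for a Gaussian mixture of linear regressions: cluster k has its
    own coefficient vector beta_k, so clusters are regression models.
    A sketch of the idea, not the paper's high-dimensional method."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    resp = rng.dirichlet(np.ones(K), size=n)    # random soft assignments
    betas = np.zeros((K, p)); sigmas = np.ones(K); pis = np.full(K, 1 / K)
    for _ in range(n_iter):
        for k in range(K):                       # M-step: weighted least squares
            w = resp[:, k]
            Xw = X * w[:, None]
            betas[k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(p), Xw.T @ y)
            r = y - X @ betas[k]
            sigmas[k] = np.sqrt((w * r**2).sum() / w.sum())
            pis[k] = w.mean()
        # E-step: responsibilities from per-cluster Gaussian log-likelihoods.
        ll = np.stack([np.log(pis[k]) - np.log(sigmas[k])
                       - 0.5 * ((y - X @ betas[k]) / sigmas[k])**2
                       for k in range(K)], axis=1)
        ll -= ll.max(axis=1, keepdims=True)
        resp = np.exp(ll); resp /= resp.sum(axis=1, keepdims=True)
    return betas, resp

# Toy demo: two well-separated regression clusters (hypothetical data).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
X = np.column_stack([np.ones(200), x])
z = np.repeat([0, 1], 100)
y = np.where(z == 0, 2 + 3 * x, -2 - 3 * x) + rng.normal(0, 0.1, 200)
betas, resp = em_mixture_regression(X, y, K=2, seed=0)
```

The fitted responsibilities give the customer segmentation, and each cluster's own regression model can then be reused for per-class or bottom-up forecasts, which is the unified view the abstract describes.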
In mixture experiments with noise variables or process variables that cannot be controlled, investigating and controlling the variability of the response variable is very important for quality improvement in industrial processes. Modeling the variability in mixture experiments with noise variables therefore becomes necessary, and has been considered in the literature through approaches that require the choice of a quadratic loss function or that use the delta method. In this paper, we make use of the delta method and also propose an alternative approach, based on the Joint Modeling of Mean and Dispersion (JMMD). We consider a mixture experiment involving noise variables and use the techniques of JMMD and of the delta method to obtain models for both the mean and the variance of the response variable. Following Taguchi's ideas about robust parameter design, we build and solve an optimization problem that minimizes the variance while holding the mean on target. We conclude with a discussion of the two methodologies considered.
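A minimal sketch of the delta-method step, which propagates noise-variable variation through a fitted mean model via a first-order Taylor expansion (the mean model, its coefficients, and the noise-variable variance below are all made up for illustration):

```python
import numpy as np

def delta_method_variance(grad_z, Sigma_z, sigma2_eps=0.0):
    """First-order delta-method approximation to the response variance
    transmitted by the noise variables z:
        Var[y] ~= g' Sigma_z g + sigma2_eps,
    where g is the gradient of the fitted mean model wrt z at E[z]."""
    g = np.asarray(grad_z, float)
    return float(g @ Sigma_z @ g) + sigma2_eps

# Hypothetical fitted mean model for a mixture experiment with one noise
# variable z:  E[y] = b0 + b1*x1 + b2*x2 + c1*x1*z + c2*x2*z,
# so dE[y]/dz = c1*x1 + c2*x2 at the mixture point x.
def grad_wrt_z(x, c):
    return np.array([c[0] * x[0] + c[1] * x[1]])

Sigma_z = np.array([[0.04]])                      # assumed Var(z)
g = grad_wrt_z(x=[0.6, 0.4], c=[1.5, -0.8])       # hypothetical coefficients
var_y = delta_method_variance(g, Sigma_z, sigma2_eps=0.01)
```

Minimizing this approximate variance over the mixture proportions, subject to the fitted mean model hitting the target, is the Taguchi-style optimization the abstract refers to; JMMD instead fits a separate dispersion model rather than deriving the variance analytically.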
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
Multi-parametric magnetic resonance imaging (mpMRI) plays an increasingly important role in the diagnosis of prostate cancer. Various computer-aided detection algorithms have been proposed for automated prostate cancer detection by combining information from various mpMRI data components. However, there exist other features of mpMRI, including the spatial correlation between voxels and between-patient heterogeneity in the mpMRI parameters, that have not been fully explored in the literature but could potentially improve cancer detection if leveraged appropriately. This paper proposes novel voxel-wise Bayesian classifiers for prostate cancer that account for the spatial correlation and between-patient heterogeneity in mpMRI. Modeling the spatial correlation is challenging due to the extremely high dimensionality of the data, and we consider three computationally efficient approaches using a Nearest Neighbor Gaussian Process (NNGP), a knot-based reduced-rank approximation, and a conditional autoregressive (CAR) model, respectively. The between-patient heterogeneity is accounted for by adding a subject-specific random intercept to the mpMRI parameter model. Simulation results show that properly modeling the spatial correlation and between-patient heterogeneity improves classification accuracy. Application to in vivo data illustrates that classification is improved by spatial modeling using the NNGP and the reduced-rank approximation but not the CAR model, while modeling the between-patient heterogeneity does not further improve our classifier. Among our proposed models, the NNGP-based model is recommended, given its robust classification accuracy and high computational efficiency.
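The NNGP idea, conditioning each observation on only its m nearest previously ordered neighbors so that the joint Gaussian density factorizes cheaply, can be sketched as follows (the exponential covariance and all parameter values are illustrative assumptions; this is not the paper's classifier):

```python
import numpy as np

def nngp_log_density(y, coords, m=5, sigma2=1.0, phi=1.0, tau2=1e-6):
    """Log-density of y under an NNGP approximation to a zero-mean GP with
    exponential covariance sigma2*exp(-phi*d) plus nugget tau2: point i is
    conditioned only on its m nearest previously ordered neighbors.
    Illustrative sketch; covariance form and parameters are assumptions."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-phi * d) + tau2 * (d == 0)
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mu, var = 0.0, sigma2 + tau2           # marginal for the first point
        else:
            # Neighbor set: m nearest among the previously ordered points.
            d = np.linalg.norm(coords[:i] - coords[i], axis=1)
            nb = np.argsort(d)[:m]
            C_nn = cov(coords[nb], coords[nb])
            c_in = cov(coords[i:i + 1], coords[nb])[0]
            w = np.linalg.solve(C_nn, c_in)        # kriging weights
            mu = w @ y[nb]                         # conditional mean
            var = sigma2 + tau2 - w @ c_in         # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu)**2 / var)
    return ll

# Tiny example with simulated 2-D voxel locations (illustrative values).
rng = np.random.default_rng(1)
coords = rng.random((5, 2))
y = rng.normal(size=5)
ll = nngp_log_density(y, coords, m=3)
```

When m is at least i for every point i, the factorization reproduces the exact GP density; smaller m trades a little accuracy for cost that grows only linearly in the number of voxels, which is why the NNGP scales to whole-prostate mpMRI.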