
Forecasting: theory and practice

Added by Fotios Petropoulos
Publication date: 2020
Language: English





Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, together with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with extensive lists of free or open-source software implementations and publicly available databases.
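As a concrete illustration of the produce-then-evaluate workflow the review organises, the sketch below (our own minimal example, not taken from the article) fits simple exponential smoothing by hand and compares it against a naive benchmark on a hold-out set; the series and the smoothing parameter are assumed purely for illustration.

    import numpy as np

    def ses_forecast(y, alpha=0.3):
        """Simple exponential smoothing; returns the flat one-step-ahead forecast."""
        level = y[0]
        for obs in y[1:]:
            level = alpha * obs + (1 - alpha) * level
        return level

    # Illustrative series (assumed data), split into training and hold-out parts.
    y = np.array([112.0, 118.0, 132.0, 129.0, 121.0,
                  135.0, 148.0, 148.0, 136.0, 119.0])
    train, test = y[:8], y[8:]

    ses_fc = ses_forecast(train, alpha=0.3)   # SES forecast (flat over the horizon)
    naive_fc = train[-1]                      # naive benchmark: last observed value

    # Evaluate both forecasts with mean absolute error over the hold-out period.
    print("SES MAE:  ", np.mean(np.abs(test - ses_fc)))
    print("Naive MAE:", np.mean(np.abs(test - naive_fc)))

In practice one would reach for the open-source packages the article catalogues rather than hand-rolled smoothing; the point here is only the separation of forecast production from forecast evaluation.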



Related research

Diabetes is a major public health challenge worldwide. Abnormal physiology in diabetes, particularly hypoglycemia, can cause driver impairments that affect safe driving. While diabetes driver safety has been previously researched, few studies link real-time physiologic changes in drivers with diabetes to objective real-world driver safety, particularly at high-risk areas like intersections. To address this, we investigated the role of acute physiologic changes in drivers with type 1 diabetes mellitus (T1DM) on safe stopping at stop intersections. Eighteen T1DM drivers (21-52 years, mean = 31.2 years) and 14 controls (21-55 years, mean = 33.4 years) participated in a 4-week naturalistic driving study. At induction, each participant's vehicle was fitted with a camera and sensor system to collect driving data. Video was processed with computer vision algorithms detecting traffic elements. Stop intersections were geolocated with clustering methods, state intersection databases, and manual review. Videos showing driver stop intersection approaches were extracted and manually reviewed to classify stopping behavior (full, rolling, and no stop) and intersection traffic characteristics. Mixed-effects logistic regression models determined how diabetes driver stopping safety (safe vs. unsafe stop) was affected by 1) disease and 2) at-risk, acute physiology (hypo- and hyperglycemia). Diabetes drivers who were acutely hyperglycemic had 2.37 times greater odds of unsafe stopping (95% CI: 1.26-4.47, p = 0.008) compared to those with normal physiology. Acute hypoglycemia was not associated with unsafe stopping (p = 0.537); however, the lower frequency of hypoglycemia (vs. hyperglycemia) warrants a larger sample of drivers to investigate this effect. Critically, the presence of diabetes alone was not associated with unsafe stopping, underscoring the need to evaluate driver physiology in licensing guidelines.
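For readers less familiar with how such estimates are reported, the sketch below (our own illustration, not the authors' analysis code) shows how a logistic-regression coefficient maps to an odds ratio and how the standard error implied by the reported 95% confidence interval can be recovered from it.

    import math

    # Reported in the abstract: OR = 2.37 with 95% CI (1.26, 4.47).
    beta = math.log(2.37)  # logistic coefficient on the log-odds scale
    # Back out the standard error implied by the reported interval.
    se = (math.log(4.47) - math.log(1.26)) / (2 * 1.96)

    lo = math.exp(beta - 1.96 * se)
    hi = math.exp(beta + 1.96 * se)
    print(f"OR = {math.exp(beta):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    # Prints approximately (1.26, 4.47), matching the reported interval.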
We review methods for monitoring multivariate time-between-events (TBE) data and present some underlying complexities that have been overlooked in the literature. It is helpful to classify multivariate TBE monitoring applications into two fundamentally different scenarios. One scenario involves monitoring individual vectors of TBE data. The other involves monitoring several, possibly correlated, temporal point processes in which events could occur at different rates. We discuss performance measures and advise the use of time-between-signal based metrics for the design and comparison of methods. We re-evaluate an existing multivariate TBE monitoring method, offer some practical advice, and suggest directions for future research.
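As a univariate illustration of the TBE monitoring idea and of a time-to-signal performance metric, here is a minimal sketch (a generic textbook-style chart, not the specific method re-evaluated in the paper). It assumes exponentially distributed gaps and signals when an observed gap falls below the lower alpha-quantile of the in-control distribution, indicating a possible rate increase; all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    rate0, alpha = 1.0, 0.005  # assumed in-control rate; per-gap false-alarm probability
    # Lower control limit: alpha-quantile of the in-control Exp(rate0) distribution.
    lcl = -np.log(1 - alpha) / rate0

    def time_to_signal(true_rate, n_max=10000):
        """Clock time elapsed until the first gap falls below the control limit."""
        gaps = rng.exponential(1 / true_rate, size=n_max)
        below = np.nonzero(gaps < lcl)[0]
        k = below[0] + 1 if below.size else n_max
        return gaps[:k].sum()

    # Average time to signal: in control vs. after a threefold rate increase.
    for rate in (1.0, 3.0):
        ats = np.mean([time_to_signal(rate) for _ in range(200)])
        print(f"rate = {rate}: average time to signal ~ {ats:.1f}")

A long in-control time to signal combined with a short out-of-control one is exactly the trade-off that time-between-signal metrics, as advocated in the paper, are designed to capture.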
Shu Wang, Ji-Hyun Lee (2021)
Nowadays, more and more clinical trials choose combinations of agents as the intervention to achieve better therapeutic responses. However, dose-finding for combination agents is much more complicated than for a single agent, as the full ordering of combination dose toxicities is unknown. Therefore, regular phase I designs are not able to identify the maximum tolerated dose (MTD) of combination agents. Motivated by this need, many novel phase I clinical trial designs for combination agents have been proposed. With so many available designs, research that compares their performance, explores the impact of design parameters, and provides recommendations is very limited. We therefore conducted a simulation study to evaluate multiple phase I designs proposed to identify a single MTD for combination agents under various scenarios. We also explored the influence of different design parameters. Finally, we summarized the pros and cons of each design and provided a general guideline for design selection.
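The designs themselves are too involved for a short sketch, but the headline metric such simulation studies report, the percentage of correct selection (PCS), is easy to illustrate. In the sketch below every ingredient is invented for illustration: an assumed true toxicity matrix for a 3x3 combination grid, and a made-up distribution of the doses a hypothetical design selects across 1000 simulated trials.

    import numpy as np

    # Assumed true toxicity probabilities for a 3x3 dose-combination grid.
    true_tox = np.array([[0.05, 0.10, 0.20],
                         [0.10, 0.25, 0.40],
                         [0.20, 0.40, 0.55]])
    target = 0.25  # target toxicity rate
    # True MTD: the combination whose toxicity is closest to the target.
    mtd = np.unravel_index(np.argmin(np.abs(true_tox - target)), true_tox.shape)

    # Hypothetical selections made by some design over 1000 simulated trials.
    rng = np.random.default_rng(0)
    cells = [(i, j) for i in range(3) for j in range(3)]
    probs = [0.02, 0.05, 0.08, 0.05, 0.45, 0.12, 0.08, 0.12, 0.03]
    selections = rng.choice(len(cells), size=1000, p=probs)

    pcs = np.mean([cells[s] == mtd for s in selections])
    print(f"True MTD {mtd}; percentage of correct selection = {100 * pcs:.1f}%")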
Some years ago, Snapinn and Jiang [1] considered the interpretation and pitfalls of absolute versus relative treatment effect measures in analyses of time-to-event outcomes. Through specific examples and analytical considerations based solely on the exponential and the Weibull distributions, they reached two conclusions: 1) that the commonly used criteria for clinical effectiveness, the absolute risk reduction (ARR) and the median survival time difference (MD), directly contradict each other, and 2) that cost-effectiveness depends only on the hazard ratio (HR) and the shape parameter (in the Weibull case) but not on the overall baseline risk of the population. Though provocative, the first conclusion does not apply to either of the two special cases considered, or even more generally, while the second conclusion is strictly correct only in the exponential case. Therefore, the implication drawn by the authors, i.e. that all measures of absolute treatment effect are of little value compared with the relative measure of the hazard ratio, is not of general validity, and hence both absolute and relative measures should continue to be used when appraising clinical evidence.
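A small worked example under the exponential model shows why both kinds of measure carry information: with the hazard ratio held fixed, the absolute risk reduction and the median difference both change with the baseline hazard, and in opposite directions. The parameter values below are our own illustrative assumptions.

    import math

    HR, t = 0.7, 5.0            # assumed hazard ratio; evaluation horizon
    for lam in (0.05, 0.20):    # two illustrative baseline hazards
        s_ctl = math.exp(-lam * t)        # control-arm survival at t
        s_trt = math.exp(-HR * lam * t)   # treated-arm survival at t
        arr = s_trt - s_ctl               # absolute risk reduction at t
        md = (math.log(2) / lam) * (1 / HR - 1)  # median survival difference
        print(f"baseline hazard {lam}: ARR(t={t}) = {arr:.3f}, MD = {md:.2f}")

With HR = 0.7 throughout, the low-risk population (hazard 0.05) gives an ARR of about 0.06 but an MD of about 5.9 time units, while the high-risk population (hazard 0.20) gives an ARR of about 0.13 but an MD of only about 1.5, so the two absolute measures rank the scenarios in opposite orders even though the relative effect is identical.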
Xu Wu, Ziyu Xie, Farah Alsafadi (2021)
Uncertainty Quantification (UQ) is an essential step in computational model validation because assessment of model accuracy requires a concrete, quantifiable measure of the uncertainty in the model predictions. In the nuclear community, UQ generally means forward UQ (FUQ), in which the information flows from the inputs to the outputs. Inverse UQ (IUQ), in which the information flows from the model outputs and experimental data back to the inputs, is an equally important component of UQ but has been significantly underrated until recently. FUQ requires knowledge of the input uncertainties, which have typically been specified by expert opinion or user self-evaluation. IUQ is defined as the process of inversely quantifying the input uncertainties based on experimental data. This review paper aims to provide a comprehensive and comparative discussion of the major aspects of the IUQ methodologies that have been used on the physical models in system thermal-hydraulics codes. IUQ methods can be categorized into three main groups: frequentist (deterministic), Bayesian (probabilistic), and empirical (design-of-experiments). We used eight metrics to evaluate an IUQ method: solidity, complexity, accessibility, independence, flexibility, comprehensiveness, transparency, and tractability. Twelve IUQ methods are reviewed, compared, and evaluated based on these eight metrics. This comparative evaluation provides guidance for users selecting a proper IUQ method for the IUQ problem under investigation.
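To make the FUQ/IUQ distinction concrete, here is a minimal Bayesian IUQ sketch on a toy forward model (entirely our own assumption, far simpler than a thermal-hydraulics code): a uniform prior over an uncertain scalar input is updated with a Gaussian likelihood against synthetic measurements, yielding a posterior input distribution that could then drive a forward UQ study.

    import numpy as np

    # Toy forward model standing in for the physical model; assumed for illustration.
    def model(theta):
        return 2.0 * theta + 1.0

    # Synthetic "experimental" data generated at a true input of 0.8.
    rng = np.random.default_rng(0)
    sigma = 0.1
    data = model(0.8) + rng.normal(0.0, sigma, size=20)

    # Bayesian IUQ on a grid: uniform prior over plausible inputs,
    # Gaussian likelihood comparing model predictions with the data.
    grid = np.linspace(0.0, 2.0, 2001)
    log_lik = np.array([-0.5 * np.sum((data - model(th)) ** 2) / sigma**2
                        for th in grid])
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum()  # discrete posterior weights over the grid

    mean = np.sum(grid * post)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
    print(f"posterior input estimate: {mean:.3f} +/- {sd:.3f}")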
