
Estimating the Competitive Storage Model: A Simulated Likelihood Approach

Published by: Tore Selland Kleppe
Publication date: 2017
Research field: Mathematical statistics
Paper language: English





This paper develops a particle filter maximum likelihood estimator for the competitive storage model. The estimator is suited to inference problems in commodity markets where only price data are reliably available for estimation and where shocks are temporally dependent. The estimator efficiently exploits the information in the conditional distribution of prices when shocks are not iid. Compared to Deaton and Laroque's composite quasi-maximum likelihood estimator, simulation experiments and real-data estimation show substantial improvements in both bias and precision. Simulation experiments also show that the precision of the particle filter estimator improves faster than that of composite quasi-maximum likelihood as more price data become available. To demonstrate the estimator and its relevance to actual data, we fit the storage model to a data set of monthly natural gas prices. The storage model estimated with the particle filter beats, in terms of log-likelihood, commonly used reduced-form time-series models such as the linear AR(1), AR(1)-GARCH(1,1), and Markov-switching AR(1) models on this data set.
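
As a rough illustration of the estimator's core computation, the sketch below evaluates a bootstrap particle filter log-likelihood for a generic nonlinear state-space model. The `init_sample`, `transition` and `obs_logpdf` callables are hypothetical stand-ins for the storage model's ingredients (in the paper, the observation density comes from the numerically solved equilibrium price function); this is not the authors' implementation.

```python
import numpy as np

def particle_filter_loglik(prices, n_particles, init_sample, transition, obs_logpdf, rng=None):
    """Bootstrap particle filter estimate of the log-likelihood of a
    state-space model observed through `prices`.  In the storage model the
    latent state is availability and `obs_logpdf` would come from the
    numerically solved equilibrium price function; here all three callables
    are hypothetical placeholders."""
    rng = np.random.default_rng() if rng is None else rng
    x = init_sample(n_particles, rng)              # initial particle cloud
    loglik = 0.0
    for p_t in prices:
        logw = obs_logpdf(p_t, x)                  # score particles against the observation
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())             # log p(p_t | p_{1:t-1}) estimate
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = transition(x[idx], rng)                # propagate the resampled particles
    return loglik
```

Maximum likelihood estimation then maximizes this value over the structural parameters, which enter through the three callables; fixing the random seed across evaluations (common random numbers) keeps the simulated likelihood surface smooth enough for a numerical optimizer.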




Read also

Libo Sun, Chihoon Lee (2013)
We consider the problem of estimating parameters of stochastic differential equations (SDEs) from discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter for when the transition density is unknown. We embed the auxiliary importance sampler in a penalized maximum likelihood framework, which produces more accurate and computationally efficient parameter estimates. Simulation studies in three different models illustrate promising improvements from the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case where some state variables are unobserved and, moreover, the observed states are sparse over time, as commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.
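
The paper's auxiliary-parameter sampler and penalty are specific to its framework and are not reproduced here, but a standard way to simulate an unknown SDE transition density is the Durham-Gallant modified-bridge importance sampler, sketched below for a scalar SDE with drift `mu` and diffusion `sigma` assumed vectorized over a particle array. Penalized simulated maximum likelihood would then sum such log-densities over consecutive observation pairs and add a penalty term.

```python
import numpy as np
from scipy.stats import norm

def simulated_log_transition_density(x0, x1, dt, theta, mu, sigma,
                                     n_sub=10, n_paths=200, rng=None):
    """Importance-sampling estimate of log p(x1 | x0, dt) for a scalar SDE
    dX = mu(X, theta) dt + sigma(X, theta) dW, using the standard
    Durham-Gallant modified-bridge proposal (a sketch, not the paper's
    auxiliary-parameter sampler)."""
    rng = np.random.default_rng() if rng is None else rng
    h = dt / n_sub
    x = np.full(n_paths, float(x0))
    logw = np.zeros(n_paths)
    for k in range(n_sub - 1):
        tau = dt - k * h                       # time left until the fixed endpoint
        drift_b = (x1 - x) / tau               # bridge drift pulls paths toward x1
        sd_b = sigma(x, theta) * np.sqrt(h * (tau - h) / tau)
        x_new = x + drift_b * h + sd_b * rng.standard_normal(n_paths)
        # importance weight: Euler transition density over bridge proposal density
        logw += norm.logpdf(x_new, x + mu(x, theta) * h, sigma(x, theta) * np.sqrt(h))
        logw -= norm.logpdf(x_new, x + drift_b * h, sd_b)
        x = x_new
    # the last Euler step must land exactly on the observed endpoint x1
    logw += norm.logpdf(x1, x + mu(x, theta) * h, sigma(x, theta) * np.sqrt(h))
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))
```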
Efficient estimation of population size from a dependent dual-record system (DRS) remains a statistical challenge in capture-recapture type experiments. Owing to the nonidentifiability of the suitable time-behavioral response variation model (denoted $M_{tb}$) under DRS, only a few methods have been developed, in the Bayesian paradigm and based on informative priors. Our contribution in this article is to develop an integrated likelihood function from model $M_{tb}$ based on a novel approach developed by Severini (2007, Biometrika). A suitable weight function on the nuisance parameter is derived under the assumption that the direction of the behavioral dependency is known. The pseudo-likelihood function is constructed so that the resulting estimator possesses desirable properties, including invariance and negligible prior (or weight) sensitivity. Extensive simulations show that our proposed method performs better than the existing Bayesian methods in most situations. Moreover, being non-Bayesian, the estimator avoids heavy computational effort and time. Finally, illustrations based on two real-life data sets, on epidemiology and on an economic census, are presented.
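
A minimal sketch of the integrated-likelihood idea, with hypothetical `loglik` and `weight` callables standing in for the $M_{tb}$ likelihood and the Severini-style weight function (neither is specified in the abstract):

```python
import numpy as np
from scipy.integrate import quad

def integrated_likelihood(psi, loglik, weight, lam_bounds):
    """Integrate the nuisance parameter lam out of the likelihood against a
    weight function: L_bar(psi) = int L(psi, lam) w(lam | psi) d lam.
    `loglik` and `weight` are hypothetical; in the article the weight is
    derived via Severini's approach and encodes the assumed direction of
    behavioural dependency."""
    integrand = lambda lam: np.exp(loglik(psi, lam)) * weight(lam, psi)
    val, _ = quad(integrand, *lam_bounds)
    return val
```

A point estimate of the interest parameter would then be obtained by maximizing `integrated_likelihood` over `psi`, for example on a grid.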
This document is an invited chapter covering the specificities of ABC model choice, intended for the forthcoming Handbook of ABC by Sisson, Fan, and Beaumont (2017). Beyond exposing the potential pitfalls of ABC-based posterior probabilities, the review mostly emphasizes the solution proposed by Pudlo et al. (2016): using random forests to aggregate summary statistics and to estimate the posterior probability of the most likely model via a secondary random forest.
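
A minimal sketch of the two-forest construction described above, assuming a reference table of simulated summary statistics `S_sim` with model labels `m_sim` (all names hypothetical, using scikit-learn rather than the authors' abcrf package):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def abc_rf_model_choice(S_sim, m_sim, s_obs, n_trees=500, seed=0):
    """ABC model choice via random forests: a classifier trained on
    simulated summaries picks the model for the observed summaries `s_obs`;
    a secondary regression forest fitted to the classifier's out-of-bag
    error indicators estimates the posterior probability of that model."""
    m_sim = np.asarray(m_sim)
    clf = RandomForestClassifier(n_estimators=n_trees, oob_score=True, random_state=seed)
    clf.fit(S_sim, m_sim)
    m_hat = clf.predict(s_obs.reshape(1, -1))[0]
    # out-of-bag misclassification indicator per training simulation
    # (all-NaN rows for never-out-of-bag samples are unlikely with many trees)
    oob_pred = clf.classes_[np.nanargmax(clf.oob_decision_function_, axis=1)]
    reg = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    reg.fit(S_sim, (oob_pred != m_sim).astype(float))
    # posterior probability of the selected model ~ 1 - local error rate
    return m_hat, 1.0 - reg.predict(s_obs.reshape(1, -1))[0]
```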
Suppose an online platform wants to compare a treatment and control policy, e.g., two different matching algorithms in a ridesharing system, or two different inventory management algorithms in an online retail site. Standard randomized controlled trials are typically not feasible, since the goal is to estimate policy performance on the entire system. Instead, the typical current practice involves dynamically alternating between the two policies for fixed lengths of time, and comparing the average performance of each over the intervals in which they were run as an estimate of the treatment effect. However, this approach suffers from *temporal interference*: one algorithm alters the state of the system as seen by the second algorithm, biasing estimates of the treatment effect. Further, the simple non-adaptive nature of such designs implies they are not sample efficient. We develop a benchmark theoretical model in which to study optimal experimental design for this setting. We view testing the two policies as the problem of estimating the steady-state difference in reward between two unknown Markov chains (i.e., policies). We assume estimation of the steady-state reward for each chain proceeds via nonparametric maximum likelihood, and search for consistent (i.e., asymptotically unbiased) experimental designs that are efficient (i.e., asymptotically minimum variance). Characterizing such designs is equivalent to a Markov decision problem with a minimum variance objective; such problems generally do not admit tractable solutions. Remarkably, in our setting, using a novel application of classical martingale analysis of Markov chains via Poisson's equation, we characterize efficient designs via a succinct convex optimization problem. We use this characterization to propose a consistent, efficient online experimental design that adaptively samples the two Markov chains.
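
For concreteness, the sketch below implements the naive fixed-block alternation ("switchback") baseline described above, with hypothetical Markov kernels `step_a`/`step_b` for the two policies. This is the design the abstract argues is biased by temporal interference; the paper's efficient adaptive design, obtained from its convex characterization, is deliberately not reproduced.

```python
import numpy as np

def switchback_estimate(step_a, step_b, reward, x0, horizon, block, rng=None):
    """Naive switchback design: alternate policies A and B in fixed-length
    blocks on one shared system and compare average rewards.  `step_a` and
    `step_b` are hypothetical Markov kernels advancing the shared state
    under each policy; `reward` scores each transition."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    totals = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}
    for t in range(horizon):
        name = "A" if (t // block) % 2 == 0 else "B"
        x_next = (step_a if name == "A" else step_b)(x, rng)
        totals[name] += reward(x, x_next)   # reward accrues to the active policy
        counts[name] += 1
        x = x_next                          # state carries over: temporal interference
    return totals["A"] / counts["A"] - totals["B"] / counts["B"]
```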
Tamar Gadrich, Guy Katriel (2017)
We consider the problem of estimating the rate of defects (mean number of defects per item), given the counts of defects detected by two independent imperfect inspectors on one sample of items. In contrast with the setting for the well-known Capture-Recapture method, we do not have information regarding the number of defects jointly detected by both inspectors. We solve this problem by constructing two types of estimators: a simple moment-type estimator, and a more elaborate maximum-likelihood estimator. The performance of these estimators is studied analytically and by means of simulations. It is shown that the maximum-likelihood estimator is superior to the moment-type estimator. A systematic comparison with the Capture-Recapture method is also made.
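
Under one standard set of modeling assumptions (Poisson defect counts per item and independent Bernoulli detection by each inspector; this choice is an assumption of the sketch, not quoted from the paper), a moment-type estimator follows from matching the two means and the cross-covariance:

```python
import numpy as np

def moment_defect_rate(x1, x2):
    """Moment-type estimate of the defect rate from per-item counts x1, x2
    reported by two independent imperfect inspectors.  Assumed model: each
    item carries Poisson(lam) defects and inspector i detects each defect
    independently with probability p_i, so E[X_i] = lam * p_i and
    Cov(X_1, X_2) = lam * p_1 * p_2, giving
    lam ~= mean(X_1) * mean(X_2) / cov(X_1, X_2)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    c = np.cov(x1, x2)[0, 1]
    if c <= 0:
        raise ValueError("sample covariance must be positive for this estimator")
    return x1.mean() * x2.mean() / c
```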