
Adaptive Test of Conditional Moment Inequalities

Added by Denis Chetverikov
Publication date: 2011
Language: English





In this paper, I construct a new test of conditional moment inequalities, which is based on studentized kernel estimates of moment functions with many different values of the bandwidth parameter. The test automatically adapts to the unknown smoothness of moment functions and has uniformly correct asymptotic size. The test has high power in a large class of models with conditional moment inequalities. Some existing tests have nontrivial power against $n^{-1/2}$-local alternatives in a certain class of these models, whereas my method only allows for nontrivial testing against $(n/\log n)^{-1/2}$-local alternatives in this class. There exist, however, other classes of models with conditional moment inequalities where the mentioned tests have much lower power in comparison with the test developed in this paper.
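For intuition, here is a minimal Python sketch of the kind of statistic the abstract describes: studentized kernel averages of the moment function computed over a grid of evaluation points and bandwidths, combined by taking the maximum. This is an illustration of the general idea rather than the paper's exact construction; the Gaussian kernel, the grids, and the studentization below are placeholder assumptions, and critical values would in practice come from a bootstrap of the same statistic.

```python
import numpy as np

def adaptive_kernel_statistic(m, x, eval_points, bandwidths):
    """Max over evaluation points and bandwidths of studentized kernel averages.

    m : (n,) moment-function values m(W_i, theta); under H0, E[m | X = x] <= 0 for all x.
    x : (n,) conditioning covariates X_i.
    """
    stats = []
    for h in bandwidths:
        for x0 in eval_points:
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights (illustrative)
            num = np.sum(w * m)                      # unnormalized kernel estimate of the moment
            den = np.sqrt(np.sum((w * m) ** 2))      # self-normalizing (studentizing) scale
            if den > 0:
                stats.append(num / den)
    return max(stats)

# Illustrative use: large positive values of T point toward a violated inequality;
# a critical value would be simulated, e.g. by a multiplier bootstrap.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
m = rng.normal(-0.1, 1.0, 500)
T = adaptive_kernel_statistic(m, x, eval_points=np.linspace(0, 1, 20),
                              bandwidths=[0.05, 0.1, 0.2, 0.4])
```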



Related research

Conditional autoregressive (CAR) models are commonly used to capture spatial correlation in areal unit data, and are typically specified as a prior distribution for a set of random effects, as part of a hierarchical Bayesian model. The spatial correlation structure induced by these models is determined by geographical adjacency, so that two areas have correlated random effects if they share a common border. However, this correlation structure is too simplistic for real data, which are instead likely to include sub-regions of strong correlation as well as locations at which the response exhibits a step-change. Therefore this paper proposes an extension to CAR priors, which can capture such localised spatial correlation. The proposed approach takes the form of an iterative algorithm, which sequentially updates the spatial correlation structure in the data as well as estimating the remaining model parameters. The efficacy of the approach is assessed by simulation, and its utility is illustrated in a disease mapping context, using data on respiratory disease risk in Greater Glasgow, Scotland.
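As a rough illustration of the ingredients involved, the sketch below builds a Leroux-type CAR precision matrix from a binary adjacency matrix and shows one simplified way to sever borders where the fitted surface exhibits a step change. It is a hedged approximation of the idea only; the paper's actual iterative algorithm and update rule differ, and the function names, threshold rule, and hyperparameters here are assumptions.

```python
import numpy as np

def car_precision(W, tau2=1.0, rho=0.9):
    """Precision matrix of a Leroux-type CAR prior for a binary adjacency matrix W."""
    D = np.diag(W.sum(axis=1))
    return (rho * (D - W) + (1.0 - rho) * np.eye(W.shape[0])) / tau2

def sever_step_change_borders(W, fitted, threshold):
    """Drop adjacencies between areas whose fitted values differ sharply, so that the
    prior no longer smooths across an apparent step change (simplified rule)."""
    W_new = W.copy()
    for i, j in zip(*np.nonzero(np.triu(W, k=1))):
        if abs(fitted[i] - fitted[j]) > threshold:
            W_new[i, j] = W_new[j, i] = 0
    return W_new
```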
Yong Ren, Jialian Li, Yucen Luo (2016)
Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models for characterizing a joint distribution of variables via kernel mean embedding. In this paper, we present conditional generative moment-matching networks (CGMMN), which learn a conditional distribution given some input variables based on a conditional maximum mean discrepancy (CMMD) criterion. The learning is performed by stochastic gradient descent with the gradient calculated by back-propagation. We evaluate CGMMN on a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge, which distills knowledge from a Bayesian model by learning a relatively small CGMMN student network. Our results demonstrate competitive performance in all the tasks.
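The criterion can be pictured with a small NumPy sketch of a plug-in conditional MMD estimator between observed pairs (X_d, Y_d) and model samples (X_s, Y_s); the Gaussian kernel, its bandwidth, and the ridge regularizer `lam` below are assumptions for illustration, and CGMMN training would differentiate a criterion of this kind with respect to the network parameters rather than evaluate it on fixed samples.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gaussian kernel Gram matrix between rows of 2-D arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cmmd(Xd, Yd, Xs, Ys, lam=1e-3):
    """Squared conditional MMD based on regularized kernel conditional embeddings."""
    Kd, Ks = gaussian_gram(Xd, Xd), gaussian_gram(Xs, Xs)
    Kds = gaussian_gram(Xd, Xs)
    Ld, Ls = gaussian_gram(Yd, Yd), gaussian_gram(Ys, Ys)
    Lds = gaussian_gram(Yd, Ys)
    Cd = np.linalg.inv(Kd + lam * np.eye(len(Xd)))   # ridge-regularized inverse (data)
    Cs = np.linalg.inv(Ks + lam * np.eye(len(Xs)))   # ridge-regularized inverse (samples)
    return (np.trace(Kd @ Cd @ Ld @ Cd)
            + np.trace(Ks @ Cs @ Ls @ Cs)
            - 2.0 * np.trace(Kds @ Cs @ Lds.T @ Cd))
```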
During the rapid development cycle for Internet products (websites and mobile apps), new features are developed and rolled out to users constantly. Features with code defects or design flaws can cause outages and significant degradation of user experience. The traditional method of code review and change management can be time-consuming and error-prone. To make the feature rollout process safe and fast, this paper proposes a methodology for rolling out features in an automated way using an adaptive experimental design. Under this framework, a feature is gradually ramped up from a small proportion of users to a larger population based on real-time evaluation of the performance of important metrics. If any regression is detected during a ramp-up step, the ramp-up process stops and the feature developer is alerted. Two main algorithmic components power this framework: 1) a continuous monitoring algorithm, which uses a variant of the sequential probability ratio test (SPRT) to monitor the feature performance metrics and alert feature developers when a metric degradation is detected, and 2) an automated ramp-up algorithm, which decides when and how to ramp up to the next stage with a larger sample size. This paper presents one monitoring algorithm and three ramp-up algorithms, including time-based, power-based, and risk-based (a Bayesian approach) schedules. These algorithms are evaluated and compared on both simulated data and real data. The framework provides three benefits for feature rollout: 1) for defective features, it detects the regression early and reduces the negative effect; 2) for healthy features, it rolls out the feature quickly; 3) it reduces the need for manual intervention by automating the feature rollout process.
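For the monitoring component, a minimal sketch of an SPRT applied to a stream of metric differences is shown below; the Gaussian likelihood, the effect size `delta`, and the error rates `alpha`/`beta` are illustrative assumptions rather than the specific SPRT variant used in the paper.

```python
import numpy as np
from scipy.stats import norm

def sprt_monitor(diffs, delta, sigma, alpha=0.05, beta=0.05):
    """Sequentially test H0: mean diff = 0 vs H1: mean diff = -delta (a regression).

    diffs : stream of per-interval metric differences (treatment minus control).
    Returns 'regression', 'healthy', or 'continue' (need more data).
    """
    upper = np.log((1.0 - beta) / alpha)   # cross above: declare a regression
    lower = np.log(beta / (1.0 - alpha))   # cross below: declare the metric healthy
    llr = 0.0
    for x in diffs:
        llr += norm.logpdf(x, loc=-delta, scale=sigma) - norm.logpdf(x, loc=0.0, scale=sigma)
        if llr >= upper:
            return "regression"
        if llr <= lower:
            return "healthy"
    return "continue"
```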
Let $\gamma$ be the standard Gaussian measure on $\mathbb{R}^n$ and let $\mathcal{P}_{\gamma}$ be the space of probability measures that are absolutely continuous with respect to $\gamma$. We study lower bounds for the functional $\mathcal{F}_{\gamma}(\mu) = {\rm Ent}(\mu) - \frac{1}{2} W^2_2(\mu, \nu)$, where $\mu \in \mathcal{P}_{\gamma}$, $\nu \in \mathcal{P}_{\gamma}$, ${\rm Ent}(\mu) = \int \log\bigl( \frac{\mu}{\gamma}\bigr)\, d\mu$ is the relative Gaussian entropy, and $W_2$ is the quadratic Kantorovich distance. The minimizers of $\mathcal{F}_{\gamma}$ are solutions to a dimension-free Gaussian analog of the (real) Kahler-Einstein equation. We show that $\mathcal{F}_{\gamma}(\mu)$ is bounded from below under the assumption that the Gaussian Fisher information of $\nu$ is finite and prove a priori estimates for the minimizers. Our approach relies on certain stability estimates for the Gaussian log-Sobolev and Talagrand transportation inequalities.
Currently, the most prevalent way to evaluate an autonomous vehicle is to test it directly on public roads. However, recent accidents caused by autonomous vehicles have made it controversial whether on-road testing is the best approach. Alternatively, test tracks or simulation can be used to assess the safety of autonomous vehicles. These approaches are time-efficient and less costly, but their credibility varies. In this paper, we propose to use a co-Kriging model to synthesize the results from different evaluation approaches, which allows us to fully utilize the available information and provides an accurate, affordable, and safe way to assess the design of an autonomous vehicle.
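One way to picture such synthesis is the autoregressive (Kennedy-O'Hagan-style) co-Kriging sketch below, which fuses plentiful low-fidelity results (e.g. simulation runs) with scarce high-fidelity results (e.g. on-road tests); the fixed scaling factor `rho`, the kernel choices, and the use of scikit-learn are assumptions for illustration and need not match the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_cokriging(X_lo, y_lo, X_hi, y_hi, rho=1.0):
    """Return a fused predictor: rho * (low-fidelity GP) + (discrepancy GP)."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp_lo = GaussianProcessRegressor(kernel=kernel).fit(X_lo, y_lo)       # low-fidelity trend
    resid = y_hi - rho * gp_lo.predict(X_hi)                              # discrepancy at hi-fi points
    gp_delta = GaussianProcessRegressor(kernel=kernel).fit(X_hi, resid)   # discrepancy model
    return lambda X: rho * gp_lo.predict(X) + gp_delta.predict(X)

# Illustrative use with synthetic data: many cheap points, few expensive ones.
rng = np.random.default_rng(1)
X_lo = rng.uniform(0, 1, (200, 1)); y_lo = np.sin(6 * X_lo).ravel()
X_hi = rng.uniform(0, 1, (15, 1));  y_hi = np.sin(6 * X_hi).ravel() + 0.3 * X_hi.ravel()
predict = fit_cokriging(X_lo, y_lo, X_hi, y_hi)
```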
