
Density Estimation on a Network

Added by Yang Liu
Publication date: 2019
Language: English





This paper develops a novel approach to density estimation on a network. We formulate nonparametric density estimation on a network as a nonparametric regression problem by binning. Nonparametric regression using local polynomial kernel-weighted least squares has been studied rigorously, and its asymptotic properties make it superior to kernel estimators such as the Nadaraya-Watson estimator. When applied to a network, the best estimator near a vertex depends on the amount of smoothness at the vertex. Often, there are no compelling reasons to assume that a density will be continuous or discontinuous at a vertex; hence a data-driven approach is proposed. To estimate the density in a neighborhood of a vertex, we propose a two-step procedure. The first step of this pretest estimator fits a separate local polynomial regression on each edge using data only on that edge, and then tests for equality of the estimates at the vertex. If the null hypothesis is not rejected, then the second step re-estimates the regression function in a small neighborhood of the vertex, subject to a joint equality constraint. Since the derivative of the density may be discontinuous at the vertex, we propose a piecewise polynomial local regression estimate to model the change in slope. We study in detail the special case of local piecewise linear regression and derive the leading bias and variance terms using weighted least squares theory. We show that the proposed approach removes the bias near a vertex that has been noted for existing methods, which typically do not allow for discontinuity at vertices. For a fixed network, the proposed method scales sub-linearly with sample size, and it can be extended to regression and varying coefficient models on a network. We demonstrate the workings of the proposed method through simulation studies and apply it to a dendrite network data set.
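To make the binning step concrete, the sketch below (in Python, not the authors' code) bins data on a single edge, treated as the unit interval, and smooths the bin heights by local linear kernel-weighted least squares; the function name, Gaussian kernel, bandwidth, and bin count are illustrative assumptions.

```python
import numpy as np

def binned_local_linear_density(x, grid, n_bins=400, bandwidth=0.05, support=(0.0, 1.0)):
    """Density estimate on one edge (treated as an interval) via binning plus
    local linear kernel-weighted least squares on the bin heights."""
    a, b = support
    counts, edges = np.histogram(x, bins=n_bins, range=(a, b))
    centers = 0.5 * (edges[:-1] + edges[1:])
    bin_width = edges[1] - edges[0]
    y = counts / (len(x) * bin_width)              # raw (unsmoothed) density heights
    est = np.empty(len(grid))
    for j, g in enumerate(grid):
        u = (centers - g) / bandwidth
        w = np.exp(-0.5 * u ** 2)                  # Gaussian kernel weights
        X = np.column_stack([np.ones_like(centers), centers - g])
        WX = X * w[:, None]
        coef = np.linalg.solve(X.T @ WX, WX.T @ y) # weighted least squares fit
        est[j] = coef[0]                           # local intercept = density at g
    return est

# toy usage on a single edge of unit length
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=2000)                      # observations supported on [0, 1]
grid = np.linspace(0.0, 1.0, 101)
fhat = binned_local_linear_density(x, grid)
```

Near a vertex, the paper's two-step pretest would run such a fit separately on each incident edge, test whether the fitted values agree at the vertex, and, if equality is not rejected, refit in a neighborhood of the vertex under a joint equality constraint; that step is not shown here.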



Related research

In the multivariate setting, defining extremal risk measures is important in many contexts, such as finance, environmental planning and structural engineering. In this paper, we review the literature on extremal bivariate return curves, a risk measure that is the natural bivariate extension to a return level, and propose new estimation methods based on multivariate extreme value models that can account for both asymptotic dependence and asymptotic independence. We identify gaps in the existing literature and propose novel tools for testing and validating return curves and comparing estimates from a range of multivariate models. These tools are then used to compare a selection of models through simulation and case studies. We conclude with a discussion and list some of the challenges.
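As a point of reference for what a return curve is, the following naive empirical sketch (hypothetical code, not the estimators proposed in the paper) traces points (u, v) at which the empirical joint survival probability P(X > u, Y > v) equals a small p; unlike the multivariate extreme value models discussed above, it cannot extrapolate beyond the range of the data.

```python
import numpy as np

def empirical_return_curve(x, y, p, n_points=50):
    """Naive empirical estimate of the p-return curve {(u, v): P(X > u, Y > v) = p}.
    For each high quantile u of X, take v as the k-th largest Y among exceedances
    of u, so that roughly k = p * n observations exceed (u, v) jointly."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(int(np.floor(p * n)), 1)
    u_grid = np.quantile(x, np.linspace(max(0.5, 1.0 - 20.0 * p), 1.0 - p, n_points))
    curve = []
    for u in u_grid:
        ys = np.sort(y[x > u])[::-1]               # Y values among exceedances of u
        if k <= len(ys):
            curve.append((u, ys[k - 1]))
    return np.array(curve)

# toy usage: correlated Gaussian data, p = 0.01
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
curve = empirical_return_curve(z[:, 0], z[:, 1], p=0.01)
```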
We derive new estimators of an optimal joint testing and treatment regime under the no direct effect (NDE) assumption that a given laboratory, diagnostic, or screening test has no effect on a patient's clinical outcomes except through the effect of the test results on the choice of treatment. We model the optimal joint strategy using an optimal regime structural nested mean model (opt-SNMM). The proposed estimators are more efficient than previous estimators of the parameters of an opt-SNMM because they efficiently leverage the 'no direct effect of testing' (NDE) assumption. Our methods will be of importance to decision scientists who either perform cost-benefit analyses or are tasked with estimating the 'value of information' supplied by an expensive diagnostic test (such as an MRI to screen for lung cancer).
Libo Sun, Chihoon Lee, 2013
We consider the problem of estimating parameters of stochastic differential equations (SDEs) with discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter when the transition density is unknown. We embed the auxiliary importance sampler in a penalized maximum likelihood framework which produces more accurate and computationally efficient parameter estimates. Simulation studies in three different models illustrate promising improvements of the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case when some state variables are unobserved and moreover, observed states are sparse over time, which commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.
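For intuition about simulated likelihood with an importance sampler, here is a minimal sketch (illustrative only, using an Ornstein-Uhlenbeck model and the natural forward proposal rather than the auxiliary-parameter proposal and penalty of the paper): the unknown transition density between two observations is approximated by averaging, over simulated Euler-Maruyama sub-paths, the Gaussian density of the final sub-step evaluated at the observed value.

```python
import numpy as np
from scipy.stats import norm

def simulated_transition_density(x0, x1, dt, theta, n_sub=10, n_sim=500, rng=None):
    """Simulated (importance-sampled) transition density for an Ornstein-Uhlenbeck
    model dX = kappa * (mu - X) dt + sigma dW: simulate Euler-Maruyama sub-paths
    over the first n_sub - 1 sub-steps, then average the Gaussian density of the
    final sub-step evaluated at the next observation x1."""
    rng = np.random.default_rng() if rng is None else rng
    kappa, mu, sigma = theta
    delta = dt / n_sub
    x = np.full(n_sim, float(x0))
    for _ in range(n_sub - 1):
        x = x + kappa * (mu - x) * delta + sigma * np.sqrt(delta) * rng.standard_normal(n_sim)
    mean_last = x + kappa * (mu - x) * delta       # one final Euler step to the observation time
    return norm.pdf(x1, loc=mean_last, scale=sigma * np.sqrt(delta)).mean()

def neg_log_simulated_likelihood(theta, obs, dt, rng):
    """Negative log simulated likelihood over consecutive observation pairs."""
    dens = [simulated_transition_density(obs[i], obs[i + 1], dt, theta, rng=rng)
            for i in range(len(obs) - 1)]
    return -np.sum(np.log(np.maximum(dens, 1e-300)))
```

The resulting negative log-likelihood could be handed to a generic numerical optimizer; the paper's penalized simulated maximum likelihood adds an auxiliary importance-sampling parameter and a penalty term on top of this basic construction, and handles partially observed states, none of which is shown here.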
Yan-Cheng Chao, 2020
A small n, sequential, multiple assignment, randomized trial (snSMART) is a small-sample, two-stage design in which participants receive up to two treatments sequentially, with the second treatment depending on the response to the first. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first-stage outcomes from an snSMART are regarded as the primary data and second-stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. By comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest applying Fisher's exact test or the Bhattacharyya overlap measure to estimate the treatment effect in an snSMART; both have performance mostly as good as or better than the BJSM. We describe the situations where each of these suggested approaches is preferred.
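As a minimal illustration of the power prior idea (not the specific models of the paper), with a conjugate Beta prior and binomial stage outcomes the posterior for a response rate remains Beta when the second-stage data are discounted by a fixed power a0; how a0 is chosen, for example from Fisher's exact test or the Bhattacharyya overlap, is the part this sketch leaves out.

```python
from scipy.stats import beta

def power_prior_posterior(x1, n1, x2, n2, a0, a=1.0, b=1.0):
    """Posterior for a response rate under a power prior with fixed discounting a0:
    first-stage data (x1 responders out of n1) enter at full weight, second-stage
    data (x2 out of n2) are down-weighted by a0 in [0, 1]; with a conjugate
    Beta(a, b) prior the posterior remains a Beta distribution."""
    return beta(a + x1 + a0 * x2, b + (n1 - x1) + a0 * (n2 - x2))

# toy usage: 6/20 first-stage responders, 9/20 second-stage responders, a0 = 0.5
post = power_prior_posterior(6, 20, 9, 20, a0=0.5)
print(post.mean(), post.interval(0.95))
```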
Currently, the high-precision estimation of nonlinear parameters such as Gini indices, low-income proportions, or other measures of inequality is particularly crucial. In the present paper, we propose a general class of estimators for such parameters that take into account univariate auxiliary information assumed to be known for every unit in the population. Through a nonparametric model-assisted approach, we construct a unique system of survey weights that can be used to estimate any nonlinear parameter associated with any study variable of the survey, using a plug-in principle. Based on a rigorous functional approach and a linearization principle, the asymptotic variance of the proposed estimators is derived, and variance estimators are shown to be consistent under mild assumptions. The theory is fully detailed for penalized B-spline estimators, together with suggestions for practical implementation and guidelines for choosing the smoothing parameters. The validity of the method is demonstrated on data extracted from the French Labor Force Survey. Point and confidence interval estimates for the Gini index and the low-income proportion are derived. Theoretical and empirical results highlight the advantage of using a nonparametric approach over a parametric one when estimating nonlinear parameters in the presence of auxiliary information.
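The plug-in principle for the Gini index can be sketched as follows (hypothetical code; the survey weights are supplied as an argument, and in the paper these would be the single system of model-assisted weights): the index is one minus twice the area under the weighted Lorenz curve.

```python
import numpy as np

def weighted_gini(y, w):
    """Plug-in Gini index with survey weights w: one minus twice the trapezoidal
    area under the weighted Lorenz curve."""
    order = np.argsort(y)
    y = np.asarray(y, float)[order]
    w = np.asarray(w, float)[order]
    p = np.cumsum(w) / np.sum(w)                   # cumulative population share
    L = np.cumsum(w * y) / np.sum(w * y)           # cumulative income share (Lorenz curve)
    p0 = np.concatenate(([0.0], p))
    L0 = np.concatenate(([0.0], L))
    return 1.0 - np.sum((p0[1:] - p0[:-1]) * (L0[1:] + L0[:-1]))

# toy usage with equal weights
rng = np.random.default_rng(2)
income = rng.lognormal(mean=10.0, sigma=0.8, size=1000)
print(weighted_gini(income, np.ones_like(income)))
```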