The primary analysis of randomized cancer screening trials typically adheres to the intention-to-screen principle, measuring cancer-specific mortality reductions between the screening and control arms. These mortality reductions result from a combination of the screening regimen, the screening technology, and the effect of early, screening-induced treatment. This motivates addressing these different aspects separately. Here we are interested in the causal effect of early versus delayed treatment on cancer mortality among the screening-detectable subgroup, which under certain assumptions is estimable from a conventional randomized screening trial using instrumental variable type methods. To define the causal effect of interest, we formulate a simplified structural multi-state model for screening trials, based on a hypothetical intervention trial in which screening-detected individuals would be randomized into early versus delayed treatment. The cancer-specific mortality reduction after screening detection is quantified by a cause-specific hazard ratio. For this, we propose two estimators, based on an estimating equation and on a likelihood expression. The methods extend existing instrumental variable methods for time-to-event and competing-risks outcomes to time-dependent intermediate variables. Using the multi-state model as the basis of a data-generating mechanism, we investigate the performance of the new estimators through simulation studies. In addition, we illustrate the proposed method in the context of CT screening for lung cancer using data from the US National Lung Screening Trial (NLST).
Persistence diagrams are important tools in the field of topological data analysis that describe the presence and magnitude of features in a filtered topological space. However, current approaches for comparing a persistence diagram to a set of other persistence diagrams are either linear in the number of diagrams or offer no performance guarantees. In this paper, we apply concepts from locality-sensitive hashing to support approximate nearest neighbor search in the space of persistence diagrams. Given a set $\Gamma$ of $n$ $(M,m)$-bounded persistence diagrams, each with at most $m$ points, we snap-round the points of each diagram to points on a cubical lattice and produce a key for each possible snap-rounding. Specifically, we fix a grid over each diagram at several resolutions and consider the snap-roundings of each diagram to the four nearest lattice points. Then, we propose a data structure with $\tau$ levels, $\mathbb{D}_{\tau}$, that stores all snap-roundings of each persistence diagram in $\Gamma$ at each resolution. This data structure has size $O(n5^m\tau)$ to account for varying lattice resolutions as well as snap-roundings and the deletion of points with low persistence. To search for a persistence diagram, we compute a key for a query diagram by snapping each point to a lattice and deleting points of low persistence. Furthermore, as the lattice parameter decreases, searching our data structure yields a six-approximation of the nearest diagram in $\Gamma$ in $O((m\log{n}+m^2)\log\tau)$ time and a constant-factor approximation of the $k$th nearest diagram in $O((m\log{n}+m^2+k)\log\tau)$ time.
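As a toy illustration of the snap-rounding step described above (a sketch only, not the paper's implementation: the function names, the persistence threshold, and the key encoding are our own choices), one can enumerate the candidate keys of a small diagram like this:

```python
from itertools import product

def snap_roundings(diagram, alpha, min_persistence):
    """Enumerate snap-roundings of a persistence diagram onto a cubical
    lattice of side length alpha (illustrative sketch).

    Points (b, d) with persistence d - b below min_persistence are
    deleted; each surviving point is snapped to its four nearest lattice
    points, and every combination of choices yields one candidate key.
    """
    # Delete points of low persistence, as in the key construction.
    kept = [(b, d) for (b, d) in diagram if d - b >= min_persistence]

    # The four nearest lattice points of p: floor/ceil along each axis.
    def corners(p):
        lo = [int(c // alpha) for c in p]
        return {tuple((l + off) * alpha for l, off in zip(lo, offs))
                for offs in product((0, 1), repeat=2)}

    # One key per combination of corner choices across all kept points.
    keys = set()
    for choice in product(*(corners(p) for p in kept)):
        keys.add(tuple(sorted(choice)))
    return keys

# One high-persistence point; the second point is deleted by the threshold.
keys = snap_roundings([(0.1, 1.0), (0.4, 0.45)], alpha=0.5, min_persistence=0.2)
```

With a single surviving point this yields four keys, one per nearby lattice point; with $m$ points the number of combinations grows as $4^m$, which is consistent with the exponential factor in the stated data-structure size.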
Zhihui Liu, Zhonghua Qiao (2018)
We establish a general theory of optimal strong error estimation for numerical approximations of a second-order parabolic stochastic partial differential equation with monotone drift driven by a multiplicative infinite-dimensional Wiener process. The equation is spatially discretized by Galerkin methods and temporally discretized by drift-implicit Euler and Milstein schemes. Under the monotonicity and Lyapunov assumptions, we use both the variational and semigroup approaches to derive a spatial Sobolev regularity under the $L_\omega^p L_t^\infty \dot{H}^{1+\gamma}$-norm and a temporal Hölder regularity under the $L_\omega^p L_x^2$-norm for the solution of the proposed equation with an $\dot{H}^{1+\gamma}$-valued initial datum for $\gamma \in [0,1]$. Then we make full use of the monotonicity of the equation and tools from stochastic calculus to derive the sharp strong convergence rates $O(h^{1+\gamma}+\tau^{1/2})$ and $O(h^{1+\gamma}+\tau^{(1+\gamma)/2})$ for the Galerkin-based Euler and Milstein schemes, respectively.
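A minimal sketch of the spectral Galerkin / drift-implicit Euler combination may help fix ideas. The test equation here (a 1-D stochastic heat equation with mode-wise multiplicative noise), the function name, and all parameter values are our own choices, not the setting analyzed above:

```python
import numpy as np

def implicit_euler_spde(N, M, T=1.0, sigma=0.1, seed=0):
    """Spectral Galerkin in space, drift-implicit Euler in time, for the
    toy SPDE du = u_xx dt + sigma * u dW on (0, pi) with Dirichlet
    boundary conditions (illustrative sketch only).

    In the eigenbasis sin(kx) of -d^2/dx^2 the equation decouples into
    modes du_k = -k^2 u_k dt + sigma u_k dW_k; the scheme is implicit in
    the stiff linear drift and explicit in the noise:
        u^{n+1} = (u^n + sigma u^n dW) / (1 + tau k^2).
    """
    rng = np.random.default_rng(seed)
    tau = T / M
    k = np.arange(1, N + 1)            # retained spectral modes
    lam = k.astype(float) ** 2         # eigenvalues k^2
    u = 1.0 / k                        # smooth (decaying) initial datum
    for _ in range(M):
        dW = rng.normal(0.0, np.sqrt(tau), size=N)  # independent increments
        u = (u + sigma * u * dW) / (1.0 + tau * lam)
    return u

u = implicit_euler_spde(N=32, M=64)
```

Refining both $N$ (i.e., $h$) and $M$ (i.e., $\tau$) against a fine reference solution is the standard way to observe strong convergence rates of the kind stated above empirically.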
For semilinear stochastic evolution equations whose coefficients are more general than the classical globally Lipschitz ones, we present results on the strong convergence rates of numerical discretizations. Their proof provides a new approach to the strong convergence analysis of numerical discretizations for a large family of second-order parabolic stochastic partial differential equations driven by space-time white noise. We apply these results to the stochastic advection-diffusion-reaction equation with a gradient term and multiplicative white noise, and show that the strong convergence rate of a fully discrete scheme constructed by spectral Galerkin approximation and an explicit exponential integrator is exactly $\frac12$ in space and $\frac14$ in time. Compared with the optimal regularity of the mild solution, this indicates that the spectral Galerkin approximation is superconvergent and the convergence rate of the exponential integrator is optimal. Numerical experiments support our theoretical analysis.
Rule-based OWL reasoning computes the deductive closure of an ontology by applying RDF/RDFS and OWL entailment rules, and its performance is often sensitive to the rule execution order. In this paper, we present an approach to enhancing the performance of rule-based OWL reasoning on Spark based on a locally optimal executable strategy. Firstly, we divide all rules (27 in total) into four main classes, namely SPO rules (5 rules), type rules (7 rules), sameAs rules (7 rules), and schema rules (8 rules), since, as we investigated, the triples corresponding to the first three classes of rules are overwhelming in practice (e.g., over 99% in the LUBM dataset). Secondly, based on the interdependence among the entailment rules in each class, we pick out an optimal executable rule order for each class and then combine them into a new execution order over all rules. Finally, we implement the new rule execution order on Spark in a prototype called RORS. The experimental results show that the running time of RORS improves by about 30% compared to the algorithm of Kim & Park (2015) on LUBM200 (27.6 million triples).
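The core idea of applying entailment rules in a fixed order until a fixpoint can be sketched on a single machine as follows (a sketch only: the `saturate` helper and the single placeholder rule are our own illustrations, not the RORS rule set or its Spark implementation):

```python
def saturate(triples, rule_order):
    """Apply entailment rules in the given order until no rule derives
    a new triple (forward-chaining fixpoint). A good rule order reduces
    the number of passes needed before the fixpoint is reached.
    """
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for rule in rule_order:
            new = rule(triples) - triples
            if new:
                triples |= new
                changed = True
    return triples

def subclass_transitivity(triples):
    """Placeholder schema rule: rdfs:subClassOf is transitive."""
    sub = {(s, o) for (s, p, o) in triples if p == "rdfs:subClassOf"}
    return {(a, "rdfs:subClassOf", c)
            for (a, b1) in sub for (b2, c) in sub if b1 == b2}

inferred = saturate(
    {("A", "rdfs:subClassOf", "B"), ("B", "rdfs:subClassOf", "C")},
    [subclass_transitivity],
)
```

In the distributed setting the triple set would live in Spark RDDs or DataFrames rather than an in-memory set, but the class-wise ordering question is the same: rules whose outputs feed other rules should run first so that fewer global passes are required.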