Dynamic positron emission tomography (dPET) is a widely used medical imaging technique for the clinical diagnosis, staging, and therapy guidance of many human cancers. Higher temporal resolution for imaging the early stage of radiotracer metabolism is desirable; however, reconstructed images with short frame durations suffer from a limited signal-to-noise ratio (SNR), which results in unsatisfactory spatial resolution. In this work, we propose a dPET processing method that denoises images with short frame durations via pixel-level time-activity curve (TAC) correction based on third-order Hermite interpolation (Pitch-In). The proposed method was validated on total-body dynamic PET image data and compared with several state-of-the-art methods, demonstrating superior performance in noise reduction and imaging contrast for high-temporal-resolution dPET. Higher stability and feasibility of the proposed Pitch-In method can be expected for future clinical application of high temporal resolution (HTR) dPET imaging.
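The numerical core of the abstract, third-order Hermite interpolation of a pixel's time-activity curve, can be sketched in plain Python. This is a minimal illustration under our own assumptions (knot derivatives from central finite differences; all function names are hypothetical), not the paper's Pitch-In pipeline:

```python
def hermite_segment(t, t0, t1, y0, y1, m0, m1):
    """Evaluate a third-order (cubic) Hermite segment at time t."""
    h = t1 - t0
    u = (t - t0) / h
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return h00*y0 + h10*h*m0 + h01*y1 + h11*h*m1

def correct_tac(times, values, fine_times):
    """Interpolate one pixel's TAC on a finer time grid.
    Knot derivatives are estimated by central finite differences;
    each t in fine_times must lie within [times[0], times[-1]]."""
    n = len(times)
    m = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        m.append((values[hi] - values[lo]) / (times[hi] - times[lo]))
    out = []
    for t in fine_times:
        # locate the coarse-frame segment containing t
        j = max(0, min(n - 2, next(k for k in range(n - 1) if t <= times[k + 1])))
        out.append(hermite_segment(t, times[j], times[j + 1],
                                   values[j], values[j + 1], m[j], m[j + 1]))
    return out

frames = [0.0, 1.0, 2.0, 4.0]   # coarse frame mid-times (s), made up for the demo
tac = [0.0, 5.0, 6.0, 6.5]      # noisy pixel activity values, made up for the demo
dense = correct_tac(frames, tac, [0.0, 0.5, 1.0, 2.0, 4.0])
```

The interpolant passes through every coarse-frame sample while providing values on the finer grid, which is the per-pixel correction step the abstract alludes to.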
Yi Shen, Yizao Wang, Na Zhang (2021)
An aggregated model is proposed whose partial-sum process scales to the Karlin stable processes recently investigated in the literature. The limit extremes of the proposed model, when it has regularly varying tails, are characterized by the convergence of the corresponding point processes. The proposed model extends an aggregated model proposed by Enriquez (2004) to approximate fractional Brownian motions with Hurst index $H\in(0,1/2)$, and is of a different nature from the other recently investigated Karlin models, which are essentially based on infinite urn schemes.
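For context, the basic Karlin model driven by an infinite urn scheme, which the abstract contrasts with the proposed aggregated model, can be simulated directly: ball $i$ falls in urn $k$ with probability proportional to $k^{-1/\alpha}$, and the step is $+1$ or $-1$ according to whether that urn has been occupied an odd or even number of times. A minimal sketch with our own truncation of the urn set to a finite $K$ (names hypothetical):

```python
import random
from itertools import accumulate
from collections import Counter

def karlin_model_path(n, alpha=0.75, seed=0):
    """Partial-sum path of a basic Karlin model over n balls.
    The infinite urn set is truncated to K urns for simulation."""
    rng = random.Random(seed)
    K = 10_000
    weights = [k ** (-1.0 / alpha) for k in range(1, K + 1)]
    cum = list(accumulate(weights))
    counts = Counter()
    steps = []
    for _ in range(n):
        k = rng.choices(range(K), cum_weights=cum)[0]
        counts[k] += 1
        # +1 on odd occupation of the urn, -1 on even occupation
        steps.append(1 if counts[k] % 2 == 1 else -1)
    S = [0]
    for x in steps:
        S.append(S[-1] + x)
    return S
```

This illustrates only the odd/even occupancy mechanism underlying Karlin-type models; the paper's aggregated construction is different by design.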
mmWave radars offer excellent depth resolution owing to their high bandwidth at mmWave radio frequencies. Yet they suffer intrinsically from poor angular resolution, which is an order of magnitude worse than that of camera systems, and are therefore not a capable 3-D imaging solution in isolation. We propose Metamoran, a system that combines the complementary strengths of radar and camera systems to obtain depth images at high azimuthal resolutions at distances of several tens of meters with high accuracy, all from a single fixed vantage point. Metamoran enables rich long-range depth imaging outdoors, with applications to roadside safety infrastructure, surveillance, and wide-area mapping. Our key insight is to use the high azimuth resolution of cameras, via computer vision techniques including image segmentation and monocular depth estimation, to obtain object shapes, and to use these as priors for our novel specular beamforming algorithm. We also design this algorithm to work in cluttered environments with weak reflections and in partially occluded scenarios. We perform a detailed evaluation of Metamoran's depth imaging and sensing capabilities in 200 diverse scenes in a major U.S. city. Our evaluation shows that Metamoran estimates the depth of an object up to 60~m away with a median error of 28~cm, an improvement of 13$\times$ compared to a naive radar+camera baseline and 23$\times$ compared to monocular depth estimation.
We introduce an ensemble Markov chain Monte Carlo approach to sampling from a probability density with known likelihood. This method upgrades an underlying Markov chain by allowing an ensemble of such chains to interact via a process in which one chain's state is cloned as another's is deleted. This effective teleportation of states can overcome issues of metastability in the underlying chain, as the scheme enjoys rapid mixing once the modes of the target density have been populated. We derive a mean-field limit for the evolution of the ensemble. We analyze the global and local convergence of this mean-field limit, showing asymptotic convergence independent of the spectral gap of the underlying Markov chain, and moreover we interpret the limiting evolution as a gradient flow. We explain how interaction can be applied selectively to a subset of state variables in order to maintain an advantage on very high-dimensional problems. Finally, we present the application of our methodology to Bayesian hyperparameter estimation for Gaussian process regression.
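The teleportation idea can be conveyed with a stylized sketch: each walker performs random-walk Metropolis on a bimodal target, and occasionally proposes to jump to another walker's current state, so states in already-discovered modes get cloned. This is an illustration only; the acceptance rule below ignores the ensemble-dependent proposal density, so it is not the paper's exact, mean-field-justified clone/delete scheme, and all names are hypothetical:

```python
import math
import random

def log_target(x):
    # bimodal example: equal mixture of N(-4, 1) and N(4, 1)
    p = 0.5 * math.exp(-0.5 * (x - 4) ** 2) + 0.5 * math.exp(-0.5 * (x + 4) ** 2)
    return math.log(max(p, 1e-300))

def ensemble_mcmc(n_walkers=20, n_steps=2000, step=0.5, p_clone=0.1, seed=1):
    """Stylized interacting ensemble: random-walk Metropolis moves,
    interleaved with proposals to teleport to another walker's state."""
    rng = random.Random(seed)
    xs = [rng.uniform(-6.0, 6.0) for _ in range(n_walkers)]
    for _ in range(n_steps):
        for i in range(n_walkers):
            if rng.random() < p_clone:
                # clone proposal: adopt a randomly chosen walker's state
                prop = xs[rng.randrange(n_walkers)]
            else:
                # local random-walk proposal
                prop = xs[i] + rng.gauss(0.0, step)
            # Metropolis acceptance on the (log) target ratio
            if math.log(rng.random() + 1e-300) < log_target(prop) - log_target(xs[i]):
                xs[i] = prop
    return xs
```

Without the clone move, a single walker started in one mode of this target rarely crosses to the other; with it, mass found by any walker is shared across the ensemble.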
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. OPE is therefore a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tend to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimate, when applying OPE to make high-stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality for the KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimal assumptions on the data and on the function class of the Q-function, and works in behavior-agnostic settings where the data are collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.
In this paper, we derive a new capability for robots to measure relative direction, or angle-of-arrival (AOA), to other robots operating in non-line-of-sight and unmapped environments with occlusions, without requiring external infrastructure. We do so by capturing all of the paths that a WiFi signal traverses as it travels from a transmitting to a receiving robot, which we term an AOA profile. The key intuition is to emulate antenna arrays in the air as the robots move in 3D space, a method akin to Synthetic Aperture Radar (SAR). The main contributions include the development of i) a framework that accommodates arbitrary 3D trajectories, as well as continuous mobility of all robots, while computing AOA profiles, and ii) an accompanying analysis that provides a lower bound on the variance of AOA estimation as a function of robot trajectory geometry, based on the Cramér-Rao bound. This is a critical distinction from previous work on SAR, which restricts robot mobility to prescribed motion patterns, does not generalize to 3D space, and/or requires transmitting robots to be static during data acquisition periods. Our method results in more accurate AOA profiles and thus better AOA estimation, and formally characterizes this observation as the informativeness of the trajectory, a computable quantity for which we derive a closed form. All theoretical developments are substantiated by extensive simulation and hardware experiments. We also show that our formulation can be used with an off-the-shelf trajectory estimation sensor. Finally, we demonstrate the performance of our system on a multi-robot dynamic rendezvous task.
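The SAR-style idea of emulating an antenna array along a trajectory can be sketched for the simplest case: a 2-D trajectory, a single far-field plane wave, and a Bartlett-type matched filter over candidate azimuths. The constants and names below are our own illustrative assumptions (e.g. a roughly 2.4 GHz wavelength and a noiseless simulated signal), not the paper's full multipath 3-D formulation:

```python
import cmath
import math

WAVELEN = 0.125  # ~2.4 GHz WiFi wavelength in meters (illustrative)

def aoa_profile(positions, phases, n_angles=360):
    """Correlate phases measured along the receiver's trajectory with the
    phases a plane wave from each candidate azimuth would produce over the
    same virtual array (Bartlett/matched-filter beamforming)."""
    profile = []
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        ux, uy = math.cos(theta), math.sin(theta)  # candidate direction
        acc = 0j
        for (x, y), phi in zip(positions, phases):
            expected = 2 * math.pi * (x * ux + y * uy) / WAVELEN
            acc += cmath.exp(1j * (phi - expected))
        profile.append(abs(acc) / len(positions))
    return profile

# simulate a plane wave arriving from 60 degrees over a short 2-D trajectory
true = math.radians(60)
ux, uy = math.cos(true), math.sin(true)
pos = [(0.01 * k, 0.003 * k) for k in range(100)]  # arbitrary straight path
ph = [2 * math.pi * (x * ux + y * uy) / WAVELEN for x, y in pos]
prof = aoa_profile(pos, ph)
best = max(range(len(prof)), key=prof.__getitem__)  # peak azimuth index (deg)
```

A straight path still leaves a mirror ambiguity about the trajectory axis, which is one reason arbitrary 3D trajectories, as in the paper, are more informative than prescribed linear motion.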
We study the impact of weak identification in discrete choice models, and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. Furthermore, we demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of using conventional weak identification tests for linear models in the discrete choice model context. Furthermore, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflicts.
Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data. When applied to high-stakes scenarios such as medical diagnosis or financial decision-making, it is crucial to provide provably correct upper and lower bounds on the expected reward, not just a classical single point estimate, to the end-users, as executing a poor policy can be very costly. In this work, we propose a provably correct method for obtaining interval bounds for off-policy evaluation in a general continuous setting. The idea is to search for the maximum and minimum values of the expected reward among all the Lipschitz Q-functions that are consistent with the observations, which amounts to solving a constrained optimization problem on a Lipschitz function space. We go on to introduce a Lipschitz value iteration method to monotonically tighten the interval, which is simple yet efficient and provably convergent. We demonstrate the practical efficiency of our method on a range of benchmarks.
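The constraint set "all Lipschitz functions consistent with the observations" has a simple closed-form envelope in the static, one-dimensional case, which conveys the flavor of such interval bounds. The paper's method additionally iterates a Bellman-style tightening over a Q-function space; this sketch and its names are our own:

```python
def lipschitz_bounds(data, L, x):
    """Tightest upper and lower bounds at query point x over all
    L-Lipschitz functions f consistent with observed pairs (x_i, y_i),
    i.e. satisfying f(x_i) == y_i for every observation."""
    upper = min(y + L * abs(x - xi) for xi, y in data)
    lower = max(y - L * abs(x - xi) for xi, y in data)
    return lower, upper

# two observations of an unknown 1-Lipschitz function
obs = [(0.0, 0.0), (1.0, 1.0)]
lo_in, hi_in = lipschitz_bounds(obs, 1.0, 0.5)    # query between the data
lo_out, hi_out = lipschitz_bounds(obs, 1.0, 2.0)  # query beyond the data
```

Between the two observations the Lipschitz constraint pins the value down exactly; away from the data the interval widens linearly, mirroring how sparse offline data yield looser reward bounds.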
This paper studies the instrument identification power for the average treatment effect (ATE) in partially identified binary outcome models with an endogenous binary treatment. We propose a novel approach to measure instrument identification power by the instruments' ability to reduce the width of the ATE bounds. We show that instrument strength, as determined by the extreme values of the conditional propensity score, and its interplay with the degree of endogeneity and the exogenous covariates all play a role in bounding the ATE. We decompose the ATE identification gains into a sequence of measurable components, and construct a standardized quantitative measure of instrument identification power ($IIP$). The decomposition and the $IIP$ evaluation are illustrated with finite-sample simulation studies and an empirical example of childbearing and women's labor supply. Our simulations show that the $IIP$ is a useful tool for detecting irrelevant instruments.
Recently, we theoretically proposed and experimentally demonstrated an exact and efficient quantum simulation of photosynthetic light harvesting in nuclear magnetic resonance (NMR), cf. B. X. Wang \textit{et al.}, npj Quantum Inf. \textbf{4}, 52 (2018). In this paper, we apply this approach to simulate the open quantum dynamics in various photosynthetic systems with different Hamiltonians. By numerical simulations, we show that for a Drude-Lorentz spectral density, dimerized geometries with strong couplings within the donor and acceptor clusters, respectively, exhibit significantly improved efficiency. Based on the optimal geometry, we also demonstrate that the overall energy transfer can be further optimized when the energy gap between the donor and acceptor clusters matches the peak of the spectral density. Moreover, by exploring the quantum dynamics for different types of spectral densities, e.g. Ohmic, sub-Ohmic, and super-Ohmic spectral densities, we show that our approach can be generalized to effectively simulate open quantum dynamics for various Hamiltonians and spectral densities. Because only $\log_{2}N$ qubits are required for quantum simulation of an $N$-dimensional quantum system, this quantum simulation approach can greatly reduce the computational complexity compared with popular numerically exact methods.
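The role of the donor-acceptor energy gap can already be seen in a closed two-site toy model via the Rabi formula. The paper's simulations couple the excitonic system to structured baths (e.g. Drude-Lorentz), so this coherent sketch, with hypothetical names and units of $\hbar = 1$, is only illustrative of why detuning suppresses transfer:

```python
import math

def acceptor_population(t, J, gap):
    """Two-site exciton Hamiltonian H = [[gap, J], [J, 0]] (hbar = 1),
    initial state on the donor site. By the Rabi formula, the acceptor
    population oscillates with amplitude J^2 / Omega^2 at frequency Omega."""
    omega = math.sqrt(J ** 2 + (gap / 2) ** 2)
    return (J ** 2 / omega ** 2) * math.sin(omega * t) ** 2

# resonant transfer is complete; a donor-acceptor gap of 2J caps it at one half
resonant_peak = acceptor_population(math.pi / 2, 1.0, 0.0)
detuned_peak = max(acceptor_population(0.01 * k, 1.0, 2.0) for k in range(2000))
```

In the open-system setting studied in the paper, the bath supplies or absorbs the mismatched energy, which is why transfer peaks when the cluster gap matches the spectral-density peak rather than vanishing at zero gap.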