
ASMOOTH: A simple and efficient algorithm for adaptive kernel smoothing of two-dimensional imaging data

Published by: Harald Ebeling
Publication date: 2006
Research field: Physics
Paper language: English





An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel the algorithm increases the smoothing scale until the signal-to-noise ratio (s.n.r.) within the kernel reaches a preset value. Thus, noise is suppressed very efficiently, while at the same time real structure, i.e. signal that is locally significant at the selected s.n.r. level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The ASMOOTH algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same signal-to-noise ratio above the background. We apply ASMOOTH to both real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the ASMOOTHed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, i.e. they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
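As a concrete illustration of the per-pixel loop described in the abstract, the following minimal Python sketch implements top-hat adaptive kernel smoothing: the circular kernel around each pixel is grown until the Poisson signal-to-noise ratio of the enclosed counts reaches a preset threshold. This is not Ebeling et al.'s IDL implementation; the function name, parameters, and the simple C/sqrt(C) noise model (no background term) are assumptions for illustration.

    import numpy as np

    def asmooth_sketch(image, snr_target=3.0, max_radius=32):
        """Grow a circular top-hat kernel at each pixel until the Poisson
        s.n.r. of the enclosed counts reaches snr_target, then return the
        mean count inside the kernel (deliberately slow reference code)."""
        ny, nx = image.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        out = np.zeros((ny, nx), dtype=float)
        for y in range(ny):
            for x in range(nx):
                for r in range(1, max_radius + 1):
                    mask = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                    counts = image[mask].sum()
                    # Poisson s.n.r. of the summed counts: C / sqrt(C) = sqrt(C)
                    if counts > 0 and np.sqrt(counts) >= snr_target:
                        break
                out[y, x] = image[mask].mean()
        return out

Because every output pixel is a mean over a kernel tuned to the same s.n.r. threshold, all pixel values share (approximately) the same significance, which is the property the paper exploits for quantitative assessment of the local signal estimate.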


Read also

Data cube materialization is a classical database operator introduced in Gray et al. (Data Mining and Knowledge Discovery, Vol. 1), which is critical for many analysis tasks. Nandi et al. (Transactions on Knowledge and Data Engineering, Vol. 6) first studied cube materialization for large-scale datasets using the MapReduce framework, and proposed a sophisticated modification of a simple broadcast algorithm to handle a dataset with a 216GB cube size within 25 minutes with 2k machines in 2012. We take a different approach, and propose a simple MapReduce algorithm which (1) minimizes the total number of copy-add operations, (2) leverages locality of computation, and (3) balances work evenly across machines. As a result, the algorithm shows excellent performance, and materialized a real dataset with a cube size of 35.0G tuples and 1.75T bytes in 54 minutes, with 0.4k machines in 2014.
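For readers unfamiliar with the operator itself, here is a naive single-machine Python sketch of cube materialization: every subset of the grouping attributes becomes one GROUP BY, with '*' marking rolled-up dimensions. It illustrates the copy-add work the paper minimizes, not its MapReduce partitioning or load balancing; all names and the toy data are illustrative.

    from itertools import combinations
    from collections import defaultdict

    def materialize_cube(rows, dims, measure):
        """SUM the measure over every group-by in the power set of dims."""
        cube = defaultdict(float)
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                for row in rows:
                    # '*' marks a dimension rolled up in this group-by
                    key = tuple(row[d] if d in subset else '*' for d in dims)
                    cube[key] += row[measure]   # one copy-add operation
        return dict(cube)

    rows = [{'city': 'NY', 'year': 2014, 'sales': 3.0},
            {'city': 'SF', 'year': 2014, 'sales': 2.0}]
    print(materialize_cube(rows, ('city', 'year'), 'sales'))
    # e.g. ('*', '*') -> 5.0, ('NY', '*') -> 3.0, ('NY', 2014) -> 3.0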
We determine the expected error by smoothing the data locally. Then we optimize the shape of the kernel smoother to minimize the error. Because the optimal estimator depends on the unknown function, our scheme automatically adjusts to the unknown function. By self-consistently adjusting the kernel smoother, the total estimator adapts to the data. Goodness of fit estimators select a kernel halfwidth by minimizing a function of the halfwidth which is based on the average square residual fit error: $ASR(h)$. A penalty term is included to adjust for using the same data to estimate the function and to evaluate the mean square error. Goodness of fit estimators are relatively simple to implement, but the minimum (of the goodness of fit functional) tends to be sensitive to small perturbations. To remedy this sensitivity problem, we fit the mean square error to a two-parameter model prior to determining the optimal halfwidth. Plug-in derivative estimators estimate the second derivative of the unknown function in an initial step, and then substitute this estimate into the asymptotic formula.
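As one concrete instance of such a goodness-of-fit selector, the sketch below picks the halfwidth of a box-kernel local-mean smoother by minimizing a penalized $ASR(h)$, here the standard generalized cross-validation criterion $ASR(h)/(1 - \mathrm{tr}(S)/n)^2$. The kernel, the penalty, and the paper's two-parameter-model refinement are not reproduced; this is an illustrative baseline only.

    import numpy as np

    def gcv_halfwidth(x, y, halfwidths):
        """Select a box-kernel halfwidth by generalized cross-validation,
        a penalized average-square-residual criterion."""
        n = len(x)
        best_h, best_score = None, np.inf
        for h in halfwidths:
            W = (np.abs(x[:, None] - x[None, :]) <= h).astype(float)
            S = W / W.sum(axis=1, keepdims=True)       # smoother matrix
            tr = np.trace(S)
            if tr >= n:                                # interpolating fit; skip
                continue
            asr = np.mean((y - S @ y) ** 2)            # ASR(h)
            score = asr / (1.0 - tr / n) ** 2          # GCV penalty
            if score < best_score:
                best_h, best_score = h, score
        return best_h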
Understanding dissipation in 2D quantum many-body systems is a remarkably difficult open challenge. Here we show how numerical simulations for this problem are possible by means of a tensor network algorithm that approximates steady states of 2D quantum lattice dissipative systems in the thermodynamic limit. Our method is based on the intuition that strong dissipation kills quantum entanglement before it gets too large to handle. We test its validity by simulating a dissipative quantum Ising model, relevant for dissipative systems of interacting Rydberg atoms, and benchmark our simulations with a variational algorithm based on product and correlated states. Our results support the existence of a first-order transition in this model, with no bistable region. We also simulate a dissipative spin-1/2 XYZ model, showing that there is no re-entrance of the ferromagnetic phase. Our method enables the computation of steady states in 2D quantum lattice systems.
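The tensor-network machinery of the paper is beyond a short example, but the class of models it targets can be checked exactly on a handful of spins. The QuTiP sketch below computes the Lindblad steady state of a small 1D dissipative Ising chain by brute force; it is a reference point, not the paper's 2D algorithm, and all couplings are illustrative assumptions.

    import numpy as np
    from qutip import qeye, sigmax, sigmaz, sigmam, tensor, steadystate

    def dissipative_ising_steadystate(n=4, omega=1.0, delta=0.0,
                                      v=2.0, gamma=0.5):
        """Exact steady state of a short dissipative Ising chain."""
        def op(single, site):                 # single-site operator on `site`
            ops = [qeye(2)] * n
            ops[site] = single
            return tensor(ops)

        H = 0
        for i in range(n):                    # transverse field + detuning
            H += omega * op(sigmax(), i) + delta * op(sigmaz(), i)
        for i in range(n - 1):                # nearest-neighbour Ising term
            H += v * op(sigmaz(), i) * op(sigmaz(), i + 1)
        # local decay on every site supplies the dissipation
        c_ops = [np.sqrt(gamma) * op(sigmam(), i) for i in range(n)]
        return steadystate(H, c_ops)          # density matrix of the fixed point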
Recently, records on stereo matching benchmarks have been constantly broken by end-to-end disparity networks. However, the domain adaptation ability of these deep models is quite poor. To address this problem, we present a novel domain-adaptive pipeline called AdaStereo that aims to align multi-level representations for deep stereo matching networks. Compared to previous methods for adaptive stereo matching, our AdaStereo realizes a more standard, complete and effective domain adaptation pipeline. Firstly, we propose a non-adversarial progressive color transfer algorithm for input image-level alignment. Secondly, we design an efficient parameter-free cost normalization layer for internal feature-level alignment. Lastly, a highly related auxiliary task, self-supervised occlusion-aware reconstruction, is presented to narrow down the gaps in output space. Our AdaStereo models achieve state-of-the-art cross-domain performance on multiple stereo benchmarks, including KITTI, Middlebury, ETH3D, and DrivingStereo, even outperforming disparity networks finetuned with target-domain ground truths.
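Of the three alignment steps, the cost normalization layer is the easiest to sketch. One plausible parameter-free construction, shown below in NumPy, L2-normalizes each pixel's feature vector across channels so that the subsequent correlation cost volume becomes a cosine similarity; this reading is an assumption for illustration, not AdaStereo's exact layer.

    import numpy as np

    def normalize_features(feat, eps=1e-8):
        """L2-normalize an NCHW feature map along the channel axis."""
        norm = np.sqrt((feat ** 2).sum(axis=1, keepdims=True)) + eps
        return feat / norm

    def correlation_cost_volume(left, right, max_disp):
        """Correlation cost volume over max_disp disparity shifts."""
        n, c, h, w = left.shape
        cost = np.zeros((n, max_disp, h, w), dtype=left.dtype)
        for d in range(max_disp):
            # match left pixel at x against right pixel at x - d
            cost[:, d, :, d:] = (left[:, :, :, d:] *
                                 right[:, :, :, :w - d]).mean(axis=1)
        return cost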
We propose the adversarially robust kernel smoothing (ARKS) algorithm, combining kernel smoothing, robust optimization, and adversarial training for robust learning. Our methods are motivated by the convex analysis perspective of distributionally robust optimization based on probability metrics, such as the Wasserstein distance and the maximum mean discrepancy. We adapt the integral operator using supremal convolution in convex analysis to form a novel function majorant used for enforcing robustness. Our method is simple in form and applies to general loss functions and machine learning models. Furthermore, we report experiments with general machine learning models, such as deep neural networks, to demonstrate that ARKS performs competitively with the state-of-the-art methods based on the Wasserstein distance.
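To show the shape of the idea, the PyTorch sketch below approximates a supremal-convolution-style majorant, sup_z [loss(z) - lam * (1 - k(x, z))] with a Gaussian kernel k, by a few gradient-ascent steps on z, and returns the majorized loss for training. The penalty form, kernel choice, and all hyper-parameters are assumptions for illustration and differ in detail from the exact ARKS objective.

    import torch

    def kernel_majorant_loss(model, loss_fn, x, y,
                             lam=10.0, sigma=1.0, steps=5, lr=0.1):
        """Approximate sup_z [loss(z) - lam * (1 - k(x, z))] by ascent on z."""
        z = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            # Gaussian kernel between the adversarial point z and the datum x
            k = torch.exp(-((z - x) ** 2).flatten(1).sum(1) / (2 * sigma ** 2))
            obj = loss_fn(model(z), y) - lam * (1.0 - k).mean()
            grad, = torch.autograd.grad(obj, z)
            z = (z + lr * grad).detach().requires_grad_(True)
        return loss_fn(model(z), y)   # backpropagate this during training

During training one would call kernel_majorant_loss(...).backward() in place of the ordinary loss, so the model minimizes the robust surrogate rather than the empirical risk.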