
Drift Estimation in Sparse Sequential Dynamic Imaging: with Application to Nanoscale Fluorescence Microscopy

Added by Stephan Huckemann
Publication date: 2014
Language: English





A major challenge in many modern superresolution fluorescence microscopy techniques at the nanoscale lies in the correct alignment of long sequences of sparse but spatially and temporally highly resolved images. This is caused by temporal drift of the protein structure, e.g. due to temporal thermal inhomogeneity of the object of interest or its supporting area during the observation process. We develop a simple semiparametric model for drift correction in SMS microscopy. We then propose an M-estimator for the drift and show its asymptotic normality. This is used to correct the final image, and we show that this purely statistical method is competitive with state-of-the-art calibration techniques that require incorporating fiducial markers into the specimen. Moreover, a simple bootstrap algorithm allows us to quantify the precision of the drift estimate and its effect on the final image estimation. We argue that purely statistical drift correction is even more robust than fiducial tracking, rendering the latter superfluous in many applications. The practicability of our method is demonstrated by a simulation study and by an SMS application. This serves as a prototype for many other typical imaging techniques in which sparse observations with high temporal resolution are blurred by motion of the object to be reconstructed.
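The overall recipe lends itself to a compact numerical illustration: reconstruct the specimen from time bins of the sparse localizations, estimate the displacement of each bin against a reference bin, fit a smooth drift curve, and subtract it from the localizations. The Python sketch below follows that recipe with FFT cross-correlation for the per-bin shifts and a least-squares polynomial for the drift curve; it is a simplified stand-in for the paper's M-estimator, and all names (bin_image, estimate_drift, ...) are hypothetical.

# Simplified sketch of statistical drift correction for sparse
# localization data. NOT the paper's exact M-estimator: per-bin shifts
# come from FFT cross-correlation, smoothed by a polynomial fit.
import numpy as np

def bin_image(xy, extent, n_px):
    """Crude reconstruction: 2D histogram of localizations."""
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_px, range=extent)
    return H

def shift_by_xcorr(ref, img):
    """Integer-pixel displacement of img relative to ref."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.array(np.unravel_index(np.argmax(c), c.shape), float)
    shape = np.array(c.shape, float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # wrap to signed shift
    return peak

def estimate_drift(xy, t, n_bins=20, n_px=128, deg=3):
    """Return a callable t -> estimated (x, y) drift."""
    extent = [[xy[:, 0].min(), xy[:, 0].max()],
              [xy[:, 1].min(), xy[:, 1].max()]]
    px = (extent[0][1] - extent[0][0]) / n_px          # pixel size (x axis)
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    ref = bin_image(xy[t <= edges[1]], extent, n_px)   # first bin anchors drift at 0
    mids, shifts = [], []
    for k in range(n_bins):
        sel = (t >= edges[k]) & (t < edges[k + 1])
        if sel.sum() < 10:                             # skip nearly empty bins
            continue
        mids.append(0.5 * (edges[k] + edges[k + 1]))
        shifts.append(shift_by_xcorr(ref, bin_image(xy[sel], extent, n_px)) * px)
    mids, shifts = np.array(mids), np.array(shifts)
    coef = [np.polyfit(mids, shifts[:, d], deg) for d in range(2)]
    return lambda tt: np.stack([np.polyval(c, tt) for c in coef], axis=-1)

# Usage: corrected = xy - estimate_drift(xy, t)(t)
# A bootstrap in the spirit of the paper would resample the localizations,
# re-run estimate_drift, and read confidence bands off the replications.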



Related research

Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression the ideal operation mode is a single electron per pulse. Here, we demonstrate for the first time femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 120 fs, combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate that real-space imaging of reversible processes, such as tracking charge distributions, is feasible while maintaining femtosecond resolution. Our findings could find application as a characterization method which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues in ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications that will access sub-optical-cycle processes in nanoplasmonics.
Colocalization analysis aims to study complex spatial associations between bio-molecules via optical imaging techniques. However, existing colocalization analysis workflows only assess an average degree of colocalization within a certain region of interest and ignore the unique and valuable spatial information offered by microscopy. In the current work, we introduce a new framework for colocalization analysis that allows us to quantify colocalization levels at each individual location and automatically identify pixels or regions where colocalization occurs. The framework, referred to as spatially adaptive colocalization analysis (SACA), integrates a pixel-wise local kernel model for colocalization quantification and a multi-scale adaptive propagation-separation strategy for utilizing spatial information to detect colocalization in a spatially adaptive fashion. Applications to simulated and real biological datasets demonstrate the practical merits of SACA in what we hope to be an easily applicable and robust colocalization analysis method. In addition, theoretical properties of SACA are investigated to provide rigorous statistical justification.
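The pixel-wise half of this program is easy to picture: slide a small kernel over the two channels and compute a local association score at every pixel. The sketch below uses a moving-window Pearson correlation as that score; it illustrates only the local-kernel quantification idea, not SACA's multi-scale propagation-separation step, and the function name is hypothetical.

# Pixel-wise colocalization score: Pearson correlation of two channels
# inside a size x size moving window. Illustrates the local-kernel idea
# only; SACA's propagation-separation procedure is not reproduced.
import numpy as np
from scipy.ndimage import uniform_filter

def local_pearson(a, b, size=7, eps=1e-12):
    """Per-pixel correlation map of images a and b."""
    a, b = a.astype(float), b.astype(float)
    mean = lambda x: uniform_filter(x, size)
    ma, mb = mean(a), mean(b)
    cov = mean(a * b) - ma * mb
    va, vb = mean(a * a) - ma ** 2, mean(b * b) - mb ** 2
    return cov / np.sqrt(np.clip(va * vb, eps, None))

# Pixels (or connected regions) with a high local score are candidate
# colocalization sites; SACA instead aggregates such evidence adaptively
# across scales before flagging regions.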
The article considers the problem of estimating a high-dimensional sparse parameter in the presence of side information that encodes the sparsity structure. We develop a general framework that involves first using an auxiliary sequence to capture the side information, and then incorporating the auxiliary sequence in inference to reduce the estimation risk. The proposed method, which carries out adaptive SURE-thresholding using side information (ASUS), is shown to have robust performance and enjoy optimality properties. We develop new theories to characterize regimes in which ASUS far outperforms competitive shrinkage estimators, and establish precise conditions under which ASUS is asymptotically optimal. Simulation studies are conducted to show that ASUS substantially improves the performance of existing methods in many settings. The methodology is applied to data from single-cell virology studies and microarray time-course experiments.
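The mechanics can be illustrated in the classical normal-means setup: split the coordinates into groups by the auxiliary sequence and give each group its own SURE-tuned soft threshold. The sketch below is a deliberately crude rendering (quantile grouping plus per-group SURE minimization over a grid), not the ASUS construction itself; names and defaults are hypothetical.

# Crude illustration of side-information-assisted thresholding: group
# coordinates by an auxiliary sequence, then soft-threshold each group
# at its own SURE-minimizing level (unit noise variance assumed).
import numpy as np

def sure_soft(y, lam):
    """Stein unbiased risk estimate for soft thresholding at lam."""
    return (len(y) - 2 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(np.abs(y), lam) ** 2))

def soft(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def group_sure_threshold(y, side, n_groups=2):
    """Per-group SURE-optimal soft thresholding."""
    est = np.empty_like(y, dtype=float)
    edges = np.quantile(side, np.linspace(0, 1, n_groups + 1))
    edges[-1] += 1e-9                       # close the last interval
    for g in range(n_groups):
        sel = (side >= edges[g]) & (side < edges[g + 1])
        grid = np.linspace(0.0, np.sqrt(2 * np.log(max(sel.sum(), 2))), 50)
        lam = grid[np.argmin([sure_soft(y[sel], l) for l in grid])]
        est[sel] = soft(y[sel], lam)
    return est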
We present a geometrical method for analyzing sequential estimating procedures. It is based on the design principle of second-order efficient sequential estimation provided by Okamoto, Amari and Takeuchi (1991). By introducing a dual conformal curvature quantity, we clarify the conditions for the covariance minimization of sequential estimators. These conditions are further elaborated for the multidimensional curved exponential family. The theoretical results are then numerically examined using typical statistical models, the von Mises-Fisher and hyperboloid models.
Its conceptual appeal and effectiveness have made latent factor modeling an indispensable tool for multivariate analysis. Despite its popularity across many fields, there are outstanding methodological challenges that have hampered practical deployments. One major challenge is the selection of the number of factors, which is exacerbated for dynamic factor models, where factors can disappear, emerge, and/or reoccur over time. Existing tools that assume a fixed number of factors may provide a misguided representation of the data mechanism, especially when the number of factors is crudely misspecified. Another challenge is the interpretability of the factor structure, which is often regarded as an unattainable objective due to the lack of identifiability. Motivated by a topical macroeconomic application, we develop a flexible Bayesian method for dynamic factor analysis (DFA) that can simultaneously accommodate a time-varying number of factors and enhance interpretability without strict identifiability constraints. To this end, we turn to dynamic sparsity by employing Dynamic Spike-and-Slab (DSS) priors within DFA. Scalable Bayesian EM estimation is proposed for fast posterior mode identification via rotations to sparsity, enabling Bayesian data analysis at scales that would previously have been time-consuming. We study a large-scale balanced panel of macroeconomic variables covering multiple facets of the US economy, with a focus on the Great Recession, to highlight the efficacy and usefulness of our proposed method.
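The shrinkage mechanism at the heart of spike-and-slab priors is simple to state in isolation: an estimated loading is weighed between a concentrated "spike" and a diffuse "slab" component. The sketch below computes that posterior inclusion weight for a static two-Gaussian mixture with a fixed mixing weight; the paper's DSS priors let this weight evolve over time, which is not reproduced here, and all names and defaults are hypothetical.

# Posterior probability that a loading estimate b (with standard error
# se) comes from the diffuse "slab" rather than the concentrated "spike"
# of a two-Gaussian spike-and-slab prior. Static version for illustration;
# DSS priors make the mixing weight w time-varying.
import numpy as np
from scipy.stats import norm

def slab_probability(b, se, w=0.5, v_spike=0.01, v_slab=4.0):
    """P(active | b) when b | beta ~ N(beta, se^2) and beta is a mixture."""
    like_slab = norm.pdf(b, scale=np.sqrt(v_slab + se ** 2))
    like_spike = norm.pdf(b, scale=np.sqrt(v_spike + se ** 2))
    return w * like_slab / (w * like_slab + (1 - w) * like_spike)

# e.g. slab_probability(0.05, 0.1) is near 0 (shrink the loading to zero),
# while slab_probability(1.5, 0.1) is near 1 (keep the loading active).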
