221 - Jeremie Kellner 2015
We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy), which is usually used for two-sample tests such as homogeneity or independence testing. Our method makes use of a special kind of parametric bootstrap (typical of goodness-of-fit tests) which is computationally more efficient than the standard parametric bootstrap. Moreover, an upper bound for the Type-II error highlights the dependence on influential quantities. Experiments illustrate the practical improvement afforded by our test in high-dimensional settings where common normality tests are known to fail. We also consider an application to covariance rank selection through a sequential procedure.
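To illustrate the MMD principle underlying the test (this is only the basic two-sample statistic, not the authors' one-sample procedure or their parametric bootstrap), a minimal unbiased estimator of the squared MMD with a Gaussian kernel can be sketched as follows; the bandwidth, sample sizes, and dimension are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2.0 * sigma**2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Diagonal terms are excluded to make the estimator unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))    # data whose normality is being tested
ref = rng.normal(size=(200, 2))     # sample from the hypothesized Gaussian
stat_null = mmd2_unbiased(data, ref)         # near zero under the null
stat_shift = mmd2_unbiased(data + 3.0, ref)  # clearly positive under a mean shift
```

In a test, the null distribution of such a statistic would then be calibrated, e.g. by the bootstrap scheme the paper develops.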
In this paper, we study the numerical approximation of a system of partial differential equations describing the corrosion of an iron-based alloy in a nuclear waste repository. In particular, we are interested in the convergence of a numerical scheme consisting of an implicit Euler scheme in time and a Scharfetter-Gummel finite volume scheme in space.
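The time discretization mentioned above can be illustrated in isolation. Below is a minimal backward (implicit) Euler sketch for a scalar stiff ODE, with a Newton solve at each step; the test problem, step size, and tolerances are illustrative choices, not taken from the paper:

```python
import numpy as np

def implicit_euler(f, df, x0, dt, n_steps, newton_iters=20):
    """Backward (implicit) Euler for dx/dt = f(x): at each step solve
    x_new = x + dt * f(x_new) with Newton's method."""
    x = x0
    out = [x]
    for _ in range(n_steps):
        x_new = x  # Newton initial guess
        for _ in range(newton_iters):
            residual = x_new - x - dt * f(x_new)
            x_new -= residual / (1.0 - dt * df(x_new))
        x = x_new
        out.append(x)
    return np.array(out)

# Stiff linear test problem dx/dt = -50 x: backward Euler remains stable
# even though dt = 0.1 is far above the explicit stability limit 2/50.
traj = implicit_euler(lambda x: -50.0 * x, lambda x: -50.0, 1.0, dt=0.1, n_steps=50)
```

The unconditional stability shown here is the usual reason implicit Euler is paired with stiff parabolic systems such as corrosion models.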
74 - Max Fathi 2015
The discretization of overdamped Langevin dynamics, through schemes such as the Euler-Maruyama method, can be corrected by some acceptance/rejection rule, based for instance on a Metropolis-Hastings criterion. In this case, the invariant measure sampled by the Markov chain is exactly the Boltzmann-Gibbs measure. However, rejections perturb the dynamical consistency of the resulting numerical method with the reference dynamics. We present in this work some modifications of the standard correction of discretizations of overdamped Langevin dynamics on compact spaces by a Metropolis-Hastings procedure, which allow us either to improve the strong order of the numerical method, or to decrease the bias in the estimation of transport coefficients characterizing the effective behavior of the dynamics. For the latter approach, we rely on modified numerical schemes together with a Barker rule for the acceptance/rejection criterion.
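The standard Metropolis-Hastings correction that the paper's modifications build on can be sketched as follows for a one-dimensional Gaussian target (the target potential, step size, and chain length are illustrative; the paper's setting is compact spaces and modified schemes, not reproduced here):

```python
import numpy as np

def metropolized_euler_maruyama(grad_v, v, x0, dt, n_steps, rng):
    """Euler-Maruyama discretization of overdamped Langevin dynamics
    dX_t = -grad V(X_t) dt + sqrt(2) dW_t, corrected at every step by a
    Metropolis-Hastings accept/reject rule so that the chain samples the
    Boltzmann-Gibbs measure proportional to exp(-V) exactly."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Euler-Maruyama proposal.
        y = x - dt * grad_v(x) + np.sqrt(2.0 * dt) * rng.standard_normal()
        # Log-densities of the Gaussian proposals q(x -> y) and q(y -> x).
        log_q_xy = -(y - (x - dt * grad_v(x))) ** 2 / (4.0 * dt)
        log_q_yx = -(x - (y - dt * grad_v(y))) ** 2 / (4.0 * dt)
        log_alpha = v(x) - v(y) + log_q_yx - log_q_xy
        if np.log(rng.random()) < log_alpha:
            x = y  # accept; on rejection the chain stays at x
        samples[i] = x
    return samples

rng = np.random.default_rng(1)
# Standard Gaussian target: V(x) = x^2 / 2.
chain = metropolized_euler_maruyama(lambda x: x, lambda x: 0.5 * x**2,
                                    0.0, 0.1, 20000, rng)
```

The rejected moves that leave the chain in place are precisely the events that perturb dynamical consistency with the continuous dynamics.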
31 - Bugra Kabil 2015
In the present contribution we investigate some features of dynamical lattice systems near periodic traveling waves. First, following the formal averaging method of Whitham, we derive modulation systems expected to drive, at main order, the time evolution of slowly modulated wavetrains. Then, for waves whose period is commensurable with the lattice, we prove that the formally-derived first-order averaged system must be at least weakly hyperbolic if the background waves are to be spectrally stable, and, when weak hyperbolicity holds, the characteristic velocities of the modulation system provide group velocities of the original system. Historically, for dynamical evolutions obeying partial differential equations, this has been proved, in order of increasing algebraic complexity, first for systems of reaction-diffusion type, then for generic systems of balance laws, and at last for Hamiltonian systems. Here, for their semi-discrete counterparts, we give simultaneous proofs for all these cases at once. Our main analytical tool is the discrete Bloch transform, a discrete analogue of the continuous Bloch transform. Nevertheless, we needed to overcome the absence of genuine space-translation invariance, a key ingredient of continuous analyses.
We show that accelerated gradient descent, averaged gradient descent and the heavy-ball method for non-strongly-convex problems may be reformulated as constant parameter second-order difference equation algorithms, where stability of the system is equivalent to convergence at rate O(1/n^2), where n is the number of iterations. We provide a detailed analysis of the eigenvalues of the corresponding linear dynamical system, showing various oscillatory and non-oscillatory behaviors, together with a sharp stability result with explicit constants. We also consider the situation where noisy gradients are available, for which we extend our general convergence result; this suggests an alternative algorithm (i.e., with different step sizes) that exhibits the good aspects of both averaging and acceleration.
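To make the "second-order difference equation" viewpoint concrete, here is a minimal heavy-ball iteration on a quadratic; the step size, momentum, and test matrix are illustrative choices, not the paper's tuned constants, and stability corresponds to both roots of the characteristic polynomial lying inside the unit disk:

```python
import numpy as np

def heavy_ball(grad, x0, step, momentum, n_iters):
    """Heavy-ball method as a constant-coefficient second-order difference
    equation: x_{n+1} = x_n - step * grad(x_n) + momentum * (x_n - x_{n-1})."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iters):
        x_next = x - step * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Quadratic objective f(x) = 0.5 * x^T A x, minimized at the origin.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x = heavy_ball(grad, np.array([1.0, 1.0]), step=0.05, momentum=0.8, n_iters=500)
```

With these parameters the characteristic roots are complex with modulus sqrt(0.8) for both eigenvalues of A, so the iterates spiral toward the minimizer.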
Continuous-time random walks are generalisations of random walks frequently used to account for the consistent observations that many molecules in living cells undergo anomalous diffusion, i.e. subdiffusion. Here, we describe the subdiffusive continuous-time random walk using age-structured partial differential equations with age renewal upon each walker jump, where the age of a walker is the time elapsed since its last jump. In the spatially-homogeneous (zero-dimensional) case, we follow the evolution in time of the age distribution. An approach inspired by relative entropy techniques allows us to obtain quantitative explicit rates for the convergence of the age distribution to a self-similar profile, which corresponds to convergence to a stationary profile for the rescaled variables. An important difficulty arises from the fact that the equation in self-similar variables is not autonomous and we do not have a specific analytical solution. Therefore, in order to quantify the latter convergence, we estimate attraction to a time-dependent pseudo-equilibrium, which in turn converges to the stationary profile.
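A minimal particle-level simulation of a subdiffusive CTRW (heavy-tailed waiting times; this is the stochastic picture, not the paper's age-structured PDE formulation) illustrates the anomalous mean-squared displacement; the tail index, jump law, and time horizon are illustrative choices:

```python
import numpy as np

def ctrw_positions(t_final, alpha, n_walkers, rng):
    """Positions at time t_final of continuous-time random walkers with
    Pareto-type waiting times (P(tau > s) = s^(-alpha) for s >= 1) and
    unit-variance Gaussian jumps. For alpha in (0, 1) the waiting times
    have infinite mean, which produces subdiffusion."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        # Inverse-CDF sampling; 1 - U lies in (0, 1] so tau is finite.
        tau = (1.0 - rng.random(active.sum())) ** (-1.0 / alpha)
        t[active] += tau
        # A walker jumps only if its next jump time is before t_final.
        jump = np.zeros(n_walkers, dtype=bool)
        jump[active] = t[active] <= t_final
        pos[jump] += rng.standard_normal(jump.sum())
        active = t <= t_final
    return pos

rng = np.random.default_rng(2)
x = ctrw_positions(t_final=1000.0, alpha=0.5, n_walkers=2000, rng=rng)
msd = np.mean(x**2)  # grows like t^alpha instead of linearly in t
```

The "age" studied in the paper is, in this picture, the time elapsed since a walker's last jump, which is renewed each time the walker moves.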
In this paper we study the so-called spin-boson system, namely a two-level system in interaction with a distinguished mode of a quantized bosonic field. We give a brief description of the controlled Rabi and Jaynes--Cummings models and we discuss their appearance in the mathematics and physics literature. We then study the controllability of the Rabi model when the control is an external field acting on the bosonic part. Applying geometric control techniques to the Galerkin approximation and using perturbation theory to guarantee non-resonance of the spectrum of the drift operator, we prove approximate controllability of the system, for almost every value of the interaction parameter.
The availability of a large number of assembled genomes opens the way to studying the evolution of syntenic characters within a phylogenetic context. The DeCo algorithm, recently introduced by Bérard et al., allows the computation of parsimonious evolutionary scenarios for gene adjacencies from pairs of reconciled gene trees. Following the approach pioneered by Sturmfels and Pachter, we describe how to modify the DeCo dynamic programming algorithm to identify classes of cost schemes that generate similar parsimonious evolutionary scenarios for gene adjacencies, as well as to measure the robustness, to changes in the cost scheme of evolutionary events, of the presence or absence of specific ancestral gene adjacencies. We apply our method to six thousand mammalian gene families, and show that computing the robustness to changes in cost schemes provides new and interesting insights on the evolution of gene adjacencies and the DeCo model.
Energy minimization has been an intensely studied core problem in computer vision. With growing image sizes (2D and 3D), it is now highly desirable to run energy minimization algorithms in parallel. But many existing algorithms, in particular some efficient combinatorial algorithms, are difficult to parallelize. By exploiting results from convex and submodular theory, we reformulate the quadratic energy minimization problem as a total variation denoising problem, which, when viewed geometrically, enables the use of projection- and reflection-based convex methods. The resulting min-cut algorithm (and code) is conceptually very simple, and solves a sequence of TV denoising problems. We perform an extensive empirical evaluation comparing state-of-the-art combinatorial algorithms and convex optimization techniques. On small problems the iterative convex methods match the combinatorial max-flow algorithms, while on larger problems they offer greater flexibility and important gains: (a) their memory footprint is small; (b) their straightforward parallelizability fits multi-core platforms; (c) they can easily be warm-started; and (d) they quickly reach good approximate solutions, thereby enabling faster inexact solutions. A key consequence of our approach based on submodularity and convexity is that it allows one to combine arbitrary combinatorial or convex methods as subroutines, yielding hybrid combinatorial and convex optimization algorithms that benefit from the strengths of both.
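The TV denoising subproblem at the heart of this reformulation can be sketched in 1D with a simple Chambolle-style projected-gradient scheme on the dual (this is a generic solver, not the authors' projection/reflection algorithm; the signal, regularization weight, and iteration count are illustrative):

```python
import numpy as np

def dtp(p):
    """Adjoint D^T p of the forward-difference operator (D x)_i = x[i+1] - x[i]."""
    return np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))

def tv_denoise_1d(y, lam, n_iters=1000):
    """1D total-variation denoising
        min_x 0.5 * ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    by projected gradient on the dual variable p, constrained to |p| <= lam
    (step 0.25 <= 1 / ||D D^T|| guarantees convergence)."""
    p = np.zeros(len(y) - 1)
    for _ in range(n_iters):
        x = y - dtp(p)                                  # current primal iterate
        p = np.clip(p + 0.25 * np.diff(x), -lam, lam)   # ascent + projection
    return y - dtp(p)

# Noisy piecewise-constant signal: the jump survives, the noise is flattened.
rng = np.random.default_rng(3)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
x = tv_denoise_1d(y, lam=1.0)
```

Each dual step uses only local differences, clipping, and vector additions, which is what makes this family of methods easy to parallelize and warm-start.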
Performing k-space variable density sampling is a popular way of reducing scanning time in Magnetic Resonance Imaging (MRI). Unfortunately, given a sampling trajectory, it is not clear how to traverse it using gradient waveforms. In this paper, we actually show that existing methods [1, 2] can yield large traversal times if the trajectory contains high-curvature areas. Therefore, we consider here a new method for gradient waveform design which is based on the projection of an unrealistic initial trajectory onto the set of hardware constraints. Next, we show on realistic simulations that this algorithm makes it possible to implement variable density trajectories resulting from the piecewise linear solution of the Travelling Salesman Problem in a reasonable time. Finally, we demonstrate the application of this approach to 2D MRI reconstruction and 3D angiography in the mouse brain.