
Asymptotic Theory of $\ell_1$-Regularized PDE Identification from a Single Noisy Trajectory

Added by Namjoon Suh
Publication date: 2021
Language: English





We prove support recovery for a general class of linear and nonlinear evolutionary partial differential equation (PDE) identification problems from a single noisy trajectory using the $\ell_1$-regularized Pseudo-Least Squares model ($\ell_1$-PsLS). In any associative $\mathbb{R}$-algebra generated by finitely many differentiation operators that contain the unknown PDE operator, applying $\ell_1$-PsLS to a given data set yields a family of candidate models with coefficients $\mathbf{c}(\lambda)$ parameterized by the regularization weight $\lambda \geq 0$. The trace of $\{\mathbf{c}(\lambda)\}_{\lambda \geq 0}$ suffers from high variance due to data noise and finite-difference approximation errors. We provide a set of sufficient conditions which guarantee that, from a single trajectory denoised by a Local-Polynomial filter, the support of $\mathbf{c}(\lambda)$ asymptotically converges to the true signed support associated with the underlying PDE for sufficiently many data points and a suitable range of $\lambda$. We also present numerical experiments validating our theory.
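The pipeline sketched in the abstract (denoise a single trajectory with a local-polynomial filter, build a dictionary of differential operators via finite-difference/derivative estimates, then fit the time derivative by $\ell_1$-regularized least squares) can be illustrated on synthetic heat-equation data. This is a rough sketch, not the authors' exact estimator: the trajectory, the Savitzky-Golay filter (a local-polynomial smoother), and scikit-learn's Lasso as the $\ell_1$ solver are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.linear_model import Lasso

# Synthetic single trajectory of the heat equation u_t = u_xx, using the
# exact solution u(x, t) = exp(-t) sin(x) + exp(-4t) sin(2x) plus noise.
x = np.linspace(0.0, 2.0 * np.pi, 256)
t = np.linspace(0.0, 1.0, 101)
X, T = np.meshgrid(x, t, indexing="ij")
rng = np.random.default_rng(0)
U = np.exp(-T) * np.sin(X) + np.exp(-4.0 * T) * np.sin(2.0 * X)
U += 1e-3 * rng.standard_normal(U.shape)  # measurement noise

# Savitzky-Golay filtering (a local-polynomial smoother) both denoises the
# trajectory and supplies the derivative estimates.
dx, dt = x[1] - x[0], t[1] - t[0]
Us = savgol_filter(U, 31, 3, axis=0)                       # smoothed u
Ux = savgol_filter(U, 31, 3, deriv=1, delta=dx, axis=0)    # u_x
Uxx = savgol_filter(U, 31, 3, deriv=2, delta=dx, axis=0)   # u_xx
Ut = savgol_filter(U, 11, 3, deriv=1, delta=dt, axis=1)    # u_t

# Dictionary of candidate operators and the l1-regularized LS fit of u_t.
names = ["u", "u_x", "u_xx", "u*u_x"]
features = np.column_stack(
    [Us.ravel(), Ux.ravel(), Uxx.ravel(), (Us * Ux).ravel()]
)
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000)
model.fit(features, Ut.ravel())
support = [n for n, c in zip(names, model.coef_) if abs(c) > 1e-2]
print(support)  # the recovered support should contain "u_xx"
```

Sweeping `alpha` over a grid would trace out the regularization path $\mathbf{c}(\lambda)$ whose signed support the paper analyzes.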



Related research

A recent result of Freeman, Odell, Sari, and Zheng states that whenever a separable Banach space not containing $\ell_1$ has the property that all asymptotic models generated by weakly null sequences are equivalent to the unit vector basis of $c_0$, then the space is Asymptotic $c_0$. We show that if we replace $c_0$ with $\ell_1$, then this result is no longer true. Moreover, a stronger result of B. Maurey-H. P. Rosenthal type is presented: there exists a reflexive Banach space with an unconditional basis admitting $\ell_1$ as a unique asymptotic model, whereas any subsequence of the basis generates a non-Asymptotic $\ell_1$ subspace.
The simulation of long, nonlinear dispersive waves in bounded domains usually requires the use of slip-wall boundary conditions. Boussinesq systems appearing in the literature are generally not well-posed when such boundary conditions are imposed, or, if they are well-posed, it is very cumbersome to implement the boundary conditions in numerical approximations. In the present paper a new Boussinesq system is proposed for the study of long waves of small amplitude in a basin when slip-wall boundary conditions are required. The new system is derived using asymptotic techniques under the assumption of small bathymetric variations, and a mathematical proof of well-posedness for the new system is developed. The new system is also solved numerically using a Galerkin finite-element method, where the boundary conditions are imposed with the help of Nitsche's method. Convergence of the numerical method is analyzed, and precise error estimates are provided. The method is then implemented, and the convergence is verified using numerical experiments. Numerical simulations for solitary waves shoaling on a plane slope are also presented. The results are compared to experimental data, and excellent agreement is found.
The increasing availability of data presents an opportunity to calibrate unknown parameters which appear in complex models of phenomena in the biomedical, physical and social sciences. However, model complexity often leads to parameter-to-data maps which are expensive to evaluate and are only available through noisy approximations. This paper is concerned with the use of interacting particle systems for the solution of the resulting inverse problems for parameters. Of particular interest is the case where the available forward model evaluations are subject to rapid fluctuations, in parameter space, superimposed on the smoothly varying large scale parametric structure of interest. Multiscale analysis is used to study the behaviour of interacting particle system algorithms when such rapid fluctuations, which we refer to as noise, pollute the large scale parametric dependence of the parameter-to-data map. Ensemble Kalman methods (which are derivative-free) and Langevin-based methods (which use the derivative of the parameter-to-data map) are compared in this light. The ensemble Kalman methods are shown to behave favourably in the presence of noise in the parameter-to-data map, whereas Langevin methods are adversely affected. On the other hand, Langevin methods have the correct equilibrium distribution in the setting of noise-free forward models, whilst ensemble Kalman methods only provide an uncontrolled approximation, except in the linear case. Therefore a new class of algorithms, ensemble Gaussian process samplers, which combine the benefits of both ensemble Kalman and Langevin methods, are introduced and shown to perform favourably.
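The derivative-free ensemble Kalman update described above can be sketched on a toy problem. Everything here is an illustrative assumption, not from the paper: a small linear forward map with an artificial rapid fluctuation standing in for the "noise" in the parameter-to-data map, and a basic perturbed-observation ensemble Kalman inversion loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear forward model with a superimposed rapid fluctuation, mimicking a
# rough parameter-to-data map that would defeat derivative-based methods.
A = np.array([[1.0, 0.5], [0.0, 2.0]])

def G(theta):
    rough = 0.01 * np.sin(100.0 * theta)  # small-scale, derivative-hostile
    return A @ (theta + rough)

theta_true = np.array([1.0, -0.5])
gamma = 0.05                                       # observation noise std
y = A @ theta_true + gamma * rng.standard_normal(2)

# Ensemble Kalman inversion: statistical linearization via ensemble
# covariances, so no derivatives of G are ever needed.
J = 200
ensemble = 2.0 * rng.standard_normal((J, 2))       # prior ensemble
for _ in range(50):
    Gs = np.array([G(th) for th in ensemble])
    tm, gm = ensemble.mean(0), Gs.mean(0)
    Ctg = (ensemble - tm).T @ (Gs - gm) / J        # cross-covariance
    Cgg = (Gs - gm).T @ (Gs - gm) / J              # output covariance
    K = Ctg @ np.linalg.inv(Cgg + gamma**2 * np.eye(2))
    perturbed = y + gamma * rng.standard_normal((J, 2))
    ensemble = ensemble + (perturbed - Gs) @ K.T   # per-particle update

estimate = ensemble.mean(0)
print(estimate)  # close to theta_true despite the rough fluctuations
```

A Langevin-based sampler on the same map would have to differentiate through the `sin(100*theta)` term, which is exactly the failure mode the paragraph describes.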
Thin surfaces, such as the leaves of a plant, pose a significant challenge for implicit surface reconstruction techniques, which typically assume a closed, orientable surface. We show that by approximately interpolating a point cloud of the surface (augmented with off-surface points) and restricting the evaluation of the interpolant to a tight domain around the point cloud, we need only require an orientable surface for the reconstruction. We use polyharmonic smoothing splines to fit approximate interpolants to noisy data, and a partition of unity method with an octree-like strategy for choosing subdomains. This method enables us to interpolate an N-point dataset in O(N) operations. We present results for point clouds of capsicum and tomato plants, scanned with a handheld device. An important outcome of the work is that sufficiently smooth leaf surfaces are generated that are amenable to droplet spreading simulations.
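The core construction (augment the point cloud with off-surface points along the normals, then fit a polyharmonic smoothing spline whose zero level set is the surface) can be sketched in 2D. This is a simplified analogue using SciPy's `RBFInterpolator` with a thin-plate (polyharmonic) kernel; the curve, offsets, and smoothing value are illustrative assumptions, and the paper's partition-of-unity/octree machinery is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# 2D stand-in for the surface problem: noisy samples of the unit circle,
# augmented with off-curve points offset along the normals and assigned
# signed-distance values (the classic implicit-RBF construction).
n = 200
ang = rng.uniform(0.0, 2.0 * np.pi, n)
on = np.column_stack([np.cos(ang), np.sin(ang)])
on += 0.005 * rng.standard_normal(on.shape)        # measurement noise
normals = on / np.linalg.norm(on, axis=1, keepdims=True)
eps = 0.1
centers = np.vstack([on, on + eps * normals, on - eps * normals])
values = np.concatenate([np.zeros(n), np.full(n, eps), np.full(n, -eps)])

# Polyharmonic (thin-plate) smoothing spline: smoothing > 0 yields an
# approximate rather than exact interpolant, absorbing the noise.
s = RBFInterpolator(centers, values,
                    kernel="thin_plate_spline", smoothing=1e-4)

# The zero level set of s approximates the curve: s is ~0 on the circle,
# negative inside, positive outside.
probe = s(np.array([[1.0, 0.0], [0.0, 0.0], [1.5, 0.0]]))
print(probe)
```

For an open (non-closed) thin surface, the same fit is simply evaluated only in a tight band around the point cloud, as the abstract describes, so no global inside/outside labeling is ever needed.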
Tianyi Chen, Tianyu Ding, Bo Ji (2020)
Sparsity-inducing regularization problems are ubiquitous in machine learning applications, ranging from feature selection to model compression. In this paper, we present a novel stochastic method -- the Orthant Based Proximal Stochastic Gradient Method (OBProx-SG) -- to solve perhaps the most popular instance, i.e., the $\ell_1$-regularized problem. The OBProx-SG method contains two steps: (i) a proximal stochastic gradient step to predict a support cover of the solution; and (ii) an orthant step to aggressively enhance the sparsity level via orthant face projection. Compared to state-of-the-art methods, e.g., Prox-SG, RDA and Prox-SVRG, OBProx-SG not only converges to the global optimal solutions (in the convex scenario) or to stationary points (in the non-convex scenario), but also promotes the sparsity of the solutions substantially. Particularly, on a large number of convex problems, OBProx-SG outperforms the existing methods comprehensively in terms of sparsity exploration and objective values. Moreover, experiments on non-convex deep neural networks, e.g., MobileNetV1 and ResNet18, further demonstrate its superiority by achieving solutions of much higher sparsity without sacrificing generalization accuracy.
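The two-step structure of the method (a proximal stochastic gradient phase to predict a support cover, then an orthant phase that projects sign flips back onto the orthant face) can be sketched on a toy lasso problem. This is a simplified illustration of the idea, not the authors' exact algorithm: the fixed switching epoch, step size, and projection details are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy l1-regularized least squares: y = X w* + noise with a sparse w*.
n, d = 500, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n)
lam, lr = 0.1, 0.01

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

w = np.zeros(d)
for epoch in range(60):
    for i in rng.permutation(n):
        g = X[i] * (X[i] @ w - y[i])  # stochastic gradient of the loss
        if epoch < 30:
            # Step (i): proximal stochastic gradient step, predicting a
            # support cover of the solution.
            w = soft(w - lr * g, lr * lam)
        else:
            # Step (ii): orthant step -- subgradient step of the smooth
            # loss plus lam * sign(w) inside the current orthant, then
            # project onto the orthant face: coordinates that flip sign
            # (or already sit on the face) are pinned to exactly zero.
            sign = np.sign(w)
            w_new = w - lr * (g + lam * sign)
            w = np.where((sign != 0) & (w_new * sign > 0), w_new, 0.0)

sparsity = int(np.sum(w == 0))
print(sparsity, w[:3])  # most spurious coordinates end up exactly zero
```

The orthant phase is what produces exact zeros: plain proximal SGD leaves small coordinates oscillating around zero, while the face projection locks them at zero once their sign flips.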