
Stability and continuity of functions of least gradient

 Publication date 2014
Language: English





In this note we prove that on metric measure spaces, functions of least gradient, as well as local minimizers of the area functional (after modification on a set of measure zero), are continuous everywhere outside their jump sets. As a tool, we develop some stability properties of sequences of least gradient functions. We also apply these tools to prove a maximum principle for functions of least gradient that arise as solutions to a Dirichlet problem.
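The maximum principle has a concrete discrete analogue that is easy to check numerically: minimizing anisotropic total variation on a grid graph with prescribed boundary values is a linear program, and the truncation argument (clamping a candidate to the range of the boundary data never increases total variation) forces the minimizer to stay within that range. A minimal sketch, where the grid size, boundary data `g`, and LP encoding are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

n = 5
idx = lambda i, j: i * n + j

# Edges of the n x n grid graph (anisotropic discrete TV)
edges = []
for i in range(n):
    for j in range(n):
        if i + 1 < n: edges.append((idx(i, j), idx(i + 1, j)))
        if j + 1 < n: edges.append((idx(i, j), idx(i, j + 1)))

nu, ne = n * n, len(edges)
c = np.concatenate([np.zeros(nu), np.ones(ne)])  # minimize sum of edge slacks

# Encode |u_a - u_b| <= t_e as two linear inequalities per edge
A_ub = np.zeros((2 * ne, nu + ne))
for k, (a, b) in enumerate(edges):
    A_ub[2 * k, a], A_ub[2 * k, b], A_ub[2 * k, nu + k] = 1, -1, -1
    A_ub[2 * k + 1, a], A_ub[2 * k + 1, b], A_ub[2 * k + 1, nu + k] = -1, 1, -1
b_ub = np.zeros(2 * ne)

# Fix boundary nodes to (arbitrary, nonconstant) Dirichlet data g
boundary = [(i, j) for i in range(n) for j in range(n)
            if i in (0, n - 1) or j in (0, n - 1)]
g = {(i, j): np.sin(2 * np.pi * (i + 2 * j) / (2 * n)) for (i, j) in boundary}
A_eq = np.zeros((len(boundary), nu + ne))
b_eq = np.zeros(len(boundary))
for row, (i, j) in enumerate(boundary):
    A_eq[row, idx(i, j)] = 1
    b_eq[row] = g[(i, j)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * nu + [(0, None)] * ne)
u = res.x[:nu]
# Discrete maximum principle: min g <= u <= max g everywhere
print(u.min() >= min(g.values()) - 1e-8, u.max() <= max(g.values()) + 1e-8)
```

This only illustrates the finite-dimensional shadow of the result; the paper's point is that the principle survives in the generality of metric measure spaces.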



Related research

In this note we prove, in the nonlinear setting of $CD(K,\infty)$ spaces, the stability of the Krasnoselskii spectrum of the Laplace operator $-\Delta$ under measured Gromov-Hausdorff convergence, under an additional compactness assumption satisfied, for instance, by sequences of $CD^*(K,N)$ metric measure spaces with uniformly bounded diameter. Additionally, we show that every element $\lambda$ in the Krasnoselskii spectrum is indeed an eigenvalue, namely there exists a nontrivial $u$ satisfying the eigenvalue equation $-\Delta u = \lambda u$.
For a given domain $\Omega \subset \mathbb{R}^n$, we consider the variational problem of minimizing the $L^1$-norm of the gradient on $\Omega$ of a function $u$ with prescribed continuous boundary values, subject to a continuous lower obstacle condition $u \ge \Psi$ inside $\Omega$. Under the assumption of strictly positive mean curvature of the boundary $\partial\Omega$, we show existence of a continuous solution whose Hölder exponent is half that of the data and obstacle. This generalizes previous results obtained for the unconstrained and double-obstacle problems. The main new feature in the present analysis is the need to extend various maximum principles from the case of two area-minimizing sets to the case of one sub- and one superminimizing set. This we accomplish subject to a weak regularity assumption on one of the sets, sufficient to carry out the analysis. Interesting open questions include the uniqueness of solutions and a complete analysis of the regularity properties of area-superminimizing sets. We provide some preliminary results in the latter direction, namely a new monotonicity principle for superminimizing sets and the existence of ``foamy'' superminimizers in two dimensions.
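A one-dimensional discrete toy version of this obstacle problem makes the structure of the minimizer visible: among grid functions with zero endpoint values lying above a tent-shaped obstacle, the least total variation is exactly twice the obstacle's peak (one rise plus one fall), achieved by the concave envelope. A sketch as a linear program, with the grid size and obstacle shape chosen for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

N = 50
xs = np.linspace(0.0, 1.0, N)
psi = np.maximum(0.0, 1 - 8 * np.abs(xs - 0.4))  # tent obstacle (illustrative)

# Variables: u_0..u_{N-1} and slacks t_k >= |u_{k+1} - u_k|
nu, nt = N, N - 1
c = np.concatenate([np.zeros(nu), np.ones(nt)])  # objective = total variation
A_ub = np.zeros((2 * nt, nu + nt))
for k in range(nt):
    A_ub[2 * k, k + 1], A_ub[2 * k, k], A_ub[2 * k, nu + k] = 1, -1, -1
    A_ub[2 * k + 1, k + 1], A_ub[2 * k + 1, k], A_ub[2 * k + 1, nu + k] = -1, 1, -1
b_ub = np.zeros(2 * nt)

# Zero boundary values; obstacle enforced through lower bounds u_k >= psi_k
A_eq = np.zeros((2, nu + nt))
A_eq[0, 0], A_eq[1, N - 1] = 1, 1
b_eq = np.zeros(2)
bounds = [(psi[k], None) for k in range(N)] + [(0, None)] * nt

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(round(res.fun, 4), round(2 * psi.max(), 4))  # optimal TV vs. 2 * peak
```

The lower bound is forced (the solution must climb to the peak and come back down), and the concave envelope attains it; none of the paper's delicate higher-dimensional regularity questions arise in this caricature.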
Pisheng Ding, 2018
For a harmonic function u on Euclidean space, this note shows that its gradient is essentially determined by the geometry of its level hypersurfaces. Specifically, the factor by which |grad(u)| changes along a gradient flow is completely determined by the mean curvature of the level hypersurfaces intersecting the flow.
Alnur Ali, Edgar Dobriban, 2020
We study the implicit regularization of mini-batch stochastic gradient descent, when applied to the fundamental problem of least squares regression. We leverage a continuous-time stochastic differential equation having the same moments as stochastic gradient descent, which we call stochastic gradient flow. We give a bound on the excess risk of stochastic gradient flow at time $t$, over ridge regression with tuning parameter $\lambda = 1/t$. The bound may be computed from explicit constants (e.g., the mini-batch size, step size, number of iterations), revealing precisely how these quantities drive the excess risk. Numerical examples show the bound can be small, indicating a tight relationship between the two estimators. We give a similar result relating the coefficients of stochastic gradient flow and ridge. These results hold under no conditions on the data matrix $X$, and across the entire optimization path (not just at convergence).
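The $\lambda = 1/t$ calibration is already visible in the deterministic (full-batch) limit, where stochastic gradient flow reduces to ordinary gradient flow on the least squares objective and both estimators have closed forms. A small numerical sketch, dropping the stochastic part entirely; the dimensions, seed, and time points are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)

S = X.T @ X / n   # sample covariance
r = X.T @ y / n

def flow(t):
    # Gradient flow on (1/2n)||y - X b||^2 started at 0:
    # b(t) = S^{-1} (I - exp(-t S)) r
    return np.linalg.solve(S, (np.eye(p) - expm(-t * S)) @ r)

def ridge(lam):
    # Ridge estimator: (S + lam I)^{-1} r
    return np.linalg.solve(S + lam * np.eye(p), r)

# Compare the two paths under the calibration lambda = 1/t
for t in (1.0, 5.0, 25.0):
    d = np.linalg.norm(flow(t) - ridge(1 / t))
    print(t, round(d, 3))
```

Both paths run from 0 (heavy regularization, small $t$) to the least squares solution ($t \to \infty$, $\lambda \to 0$), and the printed distances stay small relative to the estimators themselves; the paper's contribution is a quantitative bound for the genuinely stochastic flow.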
We prove that the energy dissipation property of gradient flows extends to the semigroup maximal operators in various settings. In particular, we show that the vertical maximal function relative to the $p$-parabolic extension does not increase the $\dot{W}^{1,p}$ norm of $\dot{W}^{1,p}(\mathbb{R}^n) \cap L^{2}(\mathbb{R}^n)$ functions when $p > 2$. We also obtain analogous results in the setting of uniformly parabolic and elliptic equations with bounded, measurable, real and symmetric coefficients, where the solutions do not have a representation formula via a convolution.
