We study growth rates for strongly continuous semigroups. We prove that a growth rate for the resolvent on imaginary lines implies a corresponding growth rate for the semigroup if either the underlying space is a Hilbert space, the semigroup is asymptotically analytic, or the semigroup is positive and the underlying space is an $L^{p}$-space or a space of continuous functions. We also prove variations of the main results on fractional domains; these are valid on more general Banach spaces. In the second part of the article we apply our main theorem to prove optimality in a classical example, due to Renardy, of a perturbed wave equation which exhibits unusual spectral behavior.
We study polynomial and exponential stability for $C_{0}$-semigroups using the recently developed theory of operator-valued $(L^{p},L^{q})$ Fourier multipliers. We characterize polynomial decay of orbits of a $C_{0}$-semigroup in terms of the $(L^{p},L^{q})$ Fourier multiplier properties of its resolvent. Using this characterization we derive new polynomial decay rates which depend on the geometry of the underlying space. We do not assume that the semigroup is uniformly bounded; our results depend only on spectral properties of the generator. As a corollary of our work on polynomial stability we reprove and unify various existing results on exponential stability, and we also obtain a new theorem on exponential stability for positive semigroups.
We survey some known results about operator semigroups generated by operator matrices with diagonal or coupled domains. These abstract results are applied to the characterization of well-posedness or ill-posedness for a class of evolution equations with dynamic boundary conditions on domains or metric graphs. In particular, our ill-posedness results on the heat equation with general Wentzell-type boundary conditions complement those previously obtained by, among others, Bandle-von Below-Reichel and Vitillaro-Vazquez.
For each $n$, let $\text{RD}(n)$ denote the minimum $d$ for which there exists a formula for the general polynomial of degree $n$ in algebraic functions of at most $d$ variables. In 1945, Segre called for a better understanding of the large $n$ behavior of $\text{RD}(n)$. In this paper, we provide improved thresholds for upper bounds on $\text{RD}(n)$. Our techniques build upon classical algebraic geometry to provide new upper bounds for small $n$ and, in doing so, fix gaps in the proofs of A. Wiman and G.N. Chebotarev in [Wim1927] and [Che1954].
Often in the analysis of first-order methods, assuming the existence of a quadratic growth bound (a generalization of strong convexity) facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once for the growth-bounded case. We give a meta-theorem for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, the subgradient method, and the bundle method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Future works studying first-order methods can assume growth bounds for the sake of analysis without hampering the generality of the results. Our results can be applied to lift any rate based on a Hölder growth bound. As a consequence, guarantees for minimizing sharp functions imply guarantees for both general functions and those satisfying quadratic growth.
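For context, the growth conditions referenced in this abstract are commonly stated as follows (standard formulations; the modulus $\mu > 0$, the exponent $p \ge 1$, and the minimizer set $X^{*}$ are illustrative notation, not necessarily the paper's own):

```latex
% Quadratic growth (a generalization of strong convexity):
% f grows at least quadratically away from its set of minimizers X^*.
f(x) - \min_{y} f(y) \;\ge\; \frac{\mu}{2}\,\operatorname{dist}(x, X^{*})^{2}
\qquad \text{for all } x.

% Hölder growth with exponent p >= 1 generalizes this:
f(x) - \min_{y} f(y) \;\ge\; \frac{\mu}{p}\,\operatorname{dist}(x, X^{*})^{p}
\qquad \text{for all } x.

% p = 2 recovers quadratic growth; p = 1 is the "sharp" case
% mentioned in the abstract's final sentence.
```

Strongly convex functions satisfy quadratic growth, but the converse fails, which is why growth bounds are the weaker and more widely applicable assumption.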
Often in the analysis of first-order methods for both smooth and nonsmooth optimization, assuming the existence of a growth/error bound or a KL condition facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once for the growth-bounded case. We give meta-theorems for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, the subgradient method, the bundle method, gradient descent, and the universal accelerated method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Our results lift any rate based on Hölder continuity of the objective's gradient and Hölder growth bounds to apply to any problem with a weaker growth bound or with no growth bound assumed at all.