
Supremum-Norm Convergence for Step-Asynchronous Successive Overrelaxation on M-matrices

Published by Sebastiano Vigna
Publication date: 2014
Research field: Informatics Engineering
Paper language: English
Authored by Sebastiano Vigna





Step-asynchronous successive overrelaxation updates the values contained in a single vector using the usual Gauss-Seidel-like weighted rule, but arbitrarily mixing old and new values, the only constraint being temporal coherence: you cannot use a value before it has been computed. We show that given a nonnegative real matrix $A$, a scalar $\sigma\geq\rho(A)$ and a vector $\boldsymbol w>0$ such that $A\boldsymbol w\leq\sigma\boldsymbol w$, every iteration of step-asynchronous successive overrelaxation for the problem $(sI-A)\boldsymbol x=\boldsymbol b$, with $s>\sigma$, geometrically reduces the $\boldsymbol w$-norm of the current error by a factor that we can compute explicitly. Then, we show that given a $\sigma>\rho(A)$ it is in principle always possible to compute such a $\boldsymbol w$. This property makes it possible to estimate the supremum norm of the absolute error at each iteration without any additional hypothesis on $A$, even when $A$ is so large that computing the product $A\boldsymbol x$ is feasible but estimating the supremum norm of $(sI-A)^{-1}$ is not.
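To make the update rule concrete, here is a minimal Python sketch, with names of our own choosing, of one sequential sweep for $(sI-A)\boldsymbol x=\boldsymbol b$ together with the $\boldsymbol w$-norm $\|\boldsymbol v\|_{\boldsymbol w}=\max_i |v_i|/w_i$. The in-place sweep is only the simplest special case of a step-asynchronous schedule, and the stopping rule assumes a known per-sweep contraction factor `theta` of the kind the paper computes explicitly.

```python
import numpy as np

def w_norm(v, w):
    """Weighted supremum norm ||v||_w = max_i |v_i| / w_i, for w > 0."""
    return np.max(np.abs(v) / w)

def sor_sweep(A, b, s, x, omega):
    """One in-place SOR sweep for (sI - A) x = b.

    Because x is updated in place, later coordinates read a mix of old
    and new values: the sequential special case of a step-asynchronous
    schedule (temporal coherence holds, since only already-computed
    values are ever read). Requires s > rho(A) >= A[i, i] for A >= 0.
    """
    for i in range(len(b)):
        # Solve row i of (sI - A) x = b for x_i, then relax by omega.
        gs = (b[i] + A[i] @ x - A[i, i] * x[i]) / (s - A[i, i])
        x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

def solve(A, b, s, w, omega, theta, tol=1e-10):
    """Iterate sweeps until a contraction-based error bound meets tol.

    theta is assumed to be a per-sweep contraction factor in the w-norm
    (the paper shows how to compute one explicitly); the standard
    fixed-point bound gives ||x_k - x*||_w <= ||x_{k+1} - x_k||_w / (1 - theta).
    """
    x = np.zeros_like(b, dtype=float)
    while True:
        x_old = x.copy()
        x = sor_sweep(A, b, s, x, omega)
        if w_norm(x - x_old, w) / (1.0 - theta) < tol:
            return x
```

Since $\|\boldsymbol v\|_\infty\le(\max_i w_i)\,\|\boldsymbol v\|_{\boldsymbol w}$, any bound in the $\boldsymbol w$-norm immediately yields the supremum-norm error estimate discussed above.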


Read also

We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x} \|x\|_{p}$ such that $Ax=b$, for all $p \in (1,2) \cup (2,\infty)$. Our algorithms obtain a high-accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p+|p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space. By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving $\ell_{p}$-regression to $1/\text{poly}(n)$ accuracy that run in time $\tilde{O}_p(m^{\max\{\omega, 7/3\}})$, where $\omega$ is the matrix multiplication constant. For the current best value of $\omega > 2.37$, we can thus solve $\ell_{p}$-regression as fast as $\ell_{2}$-regression, for all constant $p$ bounded away from $1$. Our algorithms can be combined with fast graph Laplacian linear equation solvers to give minimum $\ell_{p}$-norm flow / voltage solutions to $1/\text{poly}(n)$ accuracy on an undirected graph with $m$ edges in $\tilde{O}_{p}(m^{1+\frac{|p-2|}{2p+|p-2|}}) \le \tilde{O}_{p}(m^{\frac{4}{3}})$ time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve on the $p$-norm regression algorithm of [Bubeck-Cohen-Lee-Li STOC`18] and on general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for $\ell_{p}$-norms, using the smoothed $\ell_{p}$-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically smoothed $\ell_{p}$-norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
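For intuition about the refinement step, the sketch below shows one common formulation of the quadratically smoothed $\ell_p$ penalty from this line of work: quadratic near the origin and a shifted $p$-th power in the tail, glued continuously at a threshold $t$. The exact constants vary between papers, so treat this as an illustration rather than the authors' definition.

```python
import numpy as np

def gamma_p(p, t, x):
    """Quadratically smoothed p-th power (one common formulation).

    Quadratic for |x| <= t and a p-th power shifted for continuity
    beyond it; both branches equal (p/2) * t**p at |x| = t.
    """
    x = np.abs(x)
    quad = (p / 2.0) * t ** (p - 2) * x ** 2      # near-origin branch
    tail = x ** p + (p / 2.0 - 1.0) * t ** p      # tail branch
    return np.where(x <= t, quad, tail)
```

Iterative refinement then repeatedly minimizes a residual problem measured coordinate-wise in such a smoothed norm over the constraint subspace; even a crude solution of that subproblem improves the current iterate by a constant factor.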
We present faster high-accuracy algorithms for computing $\ell_p$-norm minimizing flows. On a graph with $m$ edges, our algorithm can compute a $(1+1/\text{poly}(m))$-approximate unweighted $\ell_p$-norm minimizing flow with $pm^{1+\frac{1}{p-1}+o(1)}$ operations, for any $p \ge 2$, giving the best bound for all $p \gtrsim 5.24$. Combined with the algorithm from the work of Adil et al. (SODA `19), we can now compute such flows for any $2 \le p \le m^{o(1)}$ in time at most $O(m^{1.24})$. In comparison, the previous best running time was $\Omega(m^{1.33})$ for large constant $p$. For $p \sim \delta^{-1}\log m$, our algorithm computes a $(1+\delta)$-approximate maximum flow on undirected graphs using $m^{1+o(1)}\delta^{-1}$ operations, matching the current best bound, albeit only for unit-capacity graphs. We also give an algorithm for solving general $\ell_{p}$-norm regression problems for large $p$. Our algorithm makes $pm^{\frac{1}{3}+o(1)}\log^2(1/\varepsilon)$ calls to a linear solver. This gives the first high-accuracy algorithm for computing weighted $\ell_{p}$-norm minimizing flows that runs in time $o(m^{1.5})$ for some $p=m^{\Omega(1)}$. Our key technical contribution is to show that the smoothed $\ell_p$-norm problems introduced by Adil et al. are interreducible for different values of $p$. No such reduction is known for standard $\ell_p$-norm problems.
Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models. However, in a plethora of real-world applications, accuracy is only one of multiple, often conflicting, performance criteria, necessitating the adoption of a multi-objective (MO) perspective. While the literature on MO optimization is rich, few prior studies have focused on HPO. In this paper, we propose algorithms that extend asynchronous successive halving (ASHA) to the MO setting. Considering multiple evaluation metrics, we assess the performance of these methods on three real-world tasks: (i) neural architecture search, (ii) algorithmic fairness and (iii) language model optimization. Our empirical analysis shows that MO ASHA enables MO HPO at scale. Further, we observe that taking the entire Pareto front into account for candidate selection consistently outperforms multi-fidelity HPO based on MO scalarization in terms of wall-clock time. Our algorithms (to be open-sourced) establish new baselines for future research in the area.
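The Pareto-front candidate selection highlighted in the experiments can be illustrated with a small non-dominated filter (all objectives to be minimized); this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of an (n, k) cost matrix (lower is better).

    A point is dominated if another point is no worse in every objective
    and strictly better in at least one.
    """
    keep = []
    for i in range(costs.shape[0]):
        others = np.delete(costs, i, axis=0)
        dominated = np.any(
            np.all(others <= costs[i], axis=1)
            & np.any(others < costs[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep
```

In an MO extension of successive halving, the configurations promoted at each rung would be drawn from this non-dominated set rather than ranked by a single scalarized score.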
The paper proposes a novel event-triggered control scheme for nonlinear systems based on the input-delay method. Specifically, the closed-loop system is associated with a pair of auxiliary input and output signals. The auxiliary output is defined as the derivative of the continuous-time input function, while the auxiliary input is defined as the input disturbance caused by sampling, or equivalently the integral of the auxiliary output over the sampling period. As a result, a cyclic mapping is formed from the input to the output via the system dynamics, and back from the output to the input via the integral. The event-triggering law is constructed to make this mapping contractive, so that stabilization is achieved, and an easy-to-check Zeno-free condition is provided. With this idea, we develop a theorem for the event-triggered control of interconnected nonlinear systems, which is employed to solve event-triggered control for lower-triangular systems with dynamic uncertainties.
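As a rough illustration of the mechanism (not the paper's law), a relative-threshold trigger re-samples the input once the held control drifts too far from the current feedback value, which plays the role of the accumulated auxiliary input above.

```python
import numpy as np

def simulate(f, u_law, x0, T, dt=1e-3, sigma=0.1):
    """Toy event-triggered loop with a generic relative-threshold rule.

    Holds the last sampled input u and fires a new event when the ideal
    feedback u_law(x) has drifted from u by more than sigma * ||x||.
    The paper's triggering law and its Zeno-freeness analysis are more
    refined than this sketch.
    """
    x = np.asarray(x0, dtype=float)
    u = u_law(x)
    events, t = [0.0], 0.0
    while t < T:
        if np.linalg.norm(u_law(x) - u) > sigma * np.linalg.norm(x):
            u = u_law(x)           # event: re-sample the control input
            events.append(t)
        x = x + dt * f(x, u)       # forward-Euler plant step
        t += dt
    return x, events
```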
Although the operator (spectral) norm is one of the most widely used metrics for covariance estimation, comparatively little is known about the fluctuations of error in this norm. To be specific, let $\hat\Sigma$ denote the sample covariance matrix of $n$ observations in $\mathbb{R}^p$ that arise from a population matrix $\Sigma$, and let $T_n=\sqrt{n}\|\hat\Sigma-\Sigma\|_{\text{op}}$. In the setting where the eigenvalues of $\Sigma$ have a decay profile of the form $\lambda_j(\Sigma)\asymp j^{-2\beta}$, we analyze how well the bootstrap can approximate the distribution of $T_n$. Our main result shows that up to factors of $\log(n)$, the bootstrap can approximate the distribution of $T_n$ at the dimension-free rate of $n^{-\frac{\beta-1/2}{6\beta+4}}$, with respect to the Kolmogorov metric. Perhaps surprisingly, a result of this type appears to be new even in settings where $p < n$. More generally, we discuss the consequences of this result beyond covariance matrices and show how the bootstrap can be used to estimate the errors of sketching algorithms in randomized numerical linear algebra (RandNLA). An illustration of these ideas is also provided with a climate data example.
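Computing $T_n$ and a bootstrap counterpart is mechanical; the sketch below uses the generic empirical bootstrap, resampling rows and recentering at $\hat\Sigma$, with variable names of our own choosing rather than the paper's exact scheme.

```python
import numpy as np

def bootstrap_T(X, B=500, seed=0):
    """Empirical bootstrap draws approximating T_n = sqrt(n) * ||S_hat - Sigma||_op.

    Rows of X (shape (n, p)) are resampled with replacement; each
    resampled covariance is recentered at the full-sample covariance
    S_hat, so the draws mimic the operator-norm fluctuation of S_hat
    around the population matrix Sigma.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S_hat = Xc.T @ Xc / n
    draws = []
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n)]   # resample rows
        Xbc = Xb - Xb.mean(axis=0)
        S_b = Xbc.T @ Xbc / n
        draws.append(np.sqrt(n) * np.linalg.norm(S_b - S_hat, ord=2))
    return np.asarray(draws)
```

Quantiles of the returned draws then serve as approximate quantiles of $T_n$, yielding confidence statements for $\|\hat\Sigma-\Sigma\|_{\text{op}}$.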