
Optimal oracle inequalities for solving projected fixed-point equations

Added by Wenlong Mou
Publication date: 2020
Language: English





Linear fixed-point equations in Hilbert spaces arise in a variety of settings, including reinforcement learning and computational methods for solving differential and integral equations. We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space. First, we prove an instance-dependent upper bound on the mean-squared error for a linear stochastic approximation scheme that exploits Polyak--Ruppert averaging. This bound consists of two terms: an approximation error term with an instance-dependent approximation factor, and a statistical error term that captures the instance-specific complexity of the noise when projected onto the low-dimensional subspace. Using information-theoretic methods, we also establish lower bounds showing that neither of these terms can be improved, again in an instance-dependent sense. A concrete consequence of our characterization is that the optimal approximation factor in this problem can be much larger than a universal constant. We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation, establishing their optimality.
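As a concrete illustration of the scheme analyzed above, the following sketch implements TD(0) with linear function approximation and Polyak--Ruppert iterate averaging on a toy Markov reward process. It is a minimal sketch written for this page, not the paper's procedure; the feature matrix, constant step size, and the three-state chain are illustrative assumptions.

import numpy as np

def td0_polyak_ruppert(transitions, features, gamma=0.9, step=0.05):
    # transitions: iterable of (state, reward, next_state) samples.
    # features: (num_states, d) array spanning the low-dimensional subspace.
    d = features.shape[1]
    theta = np.zeros(d)          # current linear stochastic-approximation iterate
    theta_bar = np.zeros(d)      # Polyak--Ruppert average of the iterates
    for k, (s, r, s_next) in enumerate(transitions, start=1):
        phi, phi_next = features[s], features[s_next]
        td_error = r + gamma * phi_next @ theta - phi @ theta
        theta = theta + step * td_error * phi      # linear SA update
        theta_bar += (theta - theta_bar) / k       # running average
    return theta_bar

# Toy three-state chain (illustrative only): reward 1 on entering state 2.
rng = np.random.default_rng(0)
P = np.array([[0.5, 0.5, 0.0], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
feats = rng.standard_normal((3, 2))
s, samples = 0, []
for _ in range(5000):
    s_next = rng.choice(3, p=P[s])
    samples.append((s, float(s_next == 2), s_next))
    s = s_next
theta_bar = td0_polyak_ruppert(samples, feats)

The averaged iterate theta_bar is the quantity whose mean-squared error the instance-dependent bounds above control.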

Related research

Stefan Hetzl (2021)
We consider constrained Horn clause solving from the more general point of view of solving formula equations. Constrained Horn clauses correspond to the subclass of Horn formula equations. We state and prove a fixed-point theorem for Horn formula equations which is based on expressing the fixed-point computation of a minimal model of a set of Horn clauses on the object level as a formula in first-order logic with a least fixed point operator. We describe several corollaries of this fixed-point theorem, in particular concerning the logical foundations of program verification, and sketch how to generalise it to incorporate abstract interpretations.
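The fixed-point computation referenced here can be made concrete in the propositional case: the minimal model of a set of Horn clauses is the least fixed point of the immediate-consequence operator, reachable by forward chaining. The sketch below is an assumed illustration of that operator, not code from the paper.

def minimal_model(horn_clauses):
    # horn_clauses: list of (body, head) pairs; body is a set of atoms
    # (empty for a fact), head is a single atom.
    model = set()
    changed = True
    while changed:                       # iterate T_P until a fixed point
        changed = False
        for body, head in horn_clauses:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model                         # the minimal (Herbrand) model

# Facts p, q and rules p & q -> r, r -> s.
clauses = [(set(), "p"), (set(), "q"), ({"p", "q"}, "r"), ({"r"}, "s")]
print(minimal_model(clauses))            # {'p', 'q', 'r', 's'}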
In this paper we show how to recover spectral approximations to broad classes of structured matrices using only a polylogarithmic number of adaptive linear measurements of either the matrix or its inverse. Leveraging this result we obtain faster algorithms for a variety of linear-algebraic problems. Key results include:
$\bullet$ A nearly linear time algorithm for inverting symmetric $M$-matrices, a strict superset of Laplacians and SDD matrices.
$\bullet$ An $\tilde{O}(n^2)$ time algorithm for solving $n \times n$ linear systems that are constant spectral approximations of Laplacians, or more generally, SDD matrices.
$\bullet$ An $\tilde{O}(n^2)$ algorithm to recover a spectral approximation of an $n$-vertex graph using only $\tilde{O}(1)$ matrix-vector multiplies with its Laplacian matrix.
The previous best results for each problem either used a trivial number of queries to exactly recover the matrix or a trivial $O(n^\omega)$ running time, where $\omega$ is the matrix multiplication constant. We achieve these results by generalizing recent semidefinite programming based linear sized sparsifier results of Lee and Sun (2017) and providing iterative methods inspired by the semistreaming sparsification results of Kapralov, Lee, Musco, Musco and Sidford (2014) and input sparsity time linear system solving results of Li, Miller, and Peng (2013). We hope that by initiating study of these natural problems, expanding the robustness and scope of recent nearly linear time linear system solving research, and providing general matrix recovery machinery this work may serve as a stepping stone for faster algorithms.
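For a flavor of the matrix-vector query model, the sketch below recovers a symmetric tridiagonal matrix exactly from three matvec queries using standard probing (coloring) vectors. This is a far simpler setting than the spectral-approximation results above and is included only as an assumed illustration of recovery from few adaptive measurements.

import numpy as np

def recover_tridiagonal(matvec, n):
    # matvec: black-box v -> A @ v for an unknown tridiagonal A.
    # Probing vectors indicate residue classes mod 3, so the (at most)
    # three nonzeros in each row land in distinct queries.
    A = np.zeros((n, n))
    for c in range(3):
        v = (np.arange(n) % 3 == c).astype(float)
        Av = matvec(v)
        for i in range(n):
            for j in (i - 1, i, i + 1):          # the band of row i
                if 0 <= j < n and j % 3 == c:
                    A[i, j] = Av[i]
    return A

# Check on a random symmetric tridiagonal matrix.
rng = np.random.default_rng(1)
n = 8
T = np.diag(rng.standard_normal(n))
off = rng.standard_normal(n - 1)
T += np.diag(off, 1) + np.diag(off, -1)
assert np.allclose(recover_tridiagonal(lambda v: T @ v, n), T)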
We note that known methods achieving the optimal oracle complexity for first-order convex optimization require quadratic memory, and we ask whether this is necessary. More broadly, we seek to characterize the minimax number of first-order queries required to optimize a convex Lipschitz function subject to a memory constraint.
We propose a semismooth Newton algorithm for pathwise optimization (SNAP) for the LASSO and Enet in sparse, high-dimensional linear regression. SNAP is derived from a suitable formulation of the KKT conditions based on Newton derivatives. It solves the semismooth KKT equations efficiently by actively and continuously seeking the support of the regression coefficients along the solution path with warm starts. At each knot in the path, SNAP converges locally superlinearly for the Enet criterion and achieves an optimal local convergence rate for the LASSO criterion, i.e., SNAP converges in one step at the cost of two matrix-vector multiplications per iteration. Under certain regularity conditions on the design matrix and the minimum magnitude of the nonzero elements of the target regression coefficients, we show that SNAP hits a solution with the same signs as the regression coefficients and achieves a sharp estimation error bound in finite steps with high probability. The computational complexity of SNAP per iteration is shown to be the same as that of LARS and coordinate descent algorithms. Simulation studies and real data analysis support our theoretical results and demonstrate that SNAP is faster and more accurate than LARS and coordinate descent algorithms.
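The active-set structure that a semismooth Newton step exploits can be shown in a few lines. The sketch below is a minimal primal-dual active-set variant for a single LASSO problem, written as an assumed illustration: it is not the authors' SNAP code, and it omits the pathwise continuation, warm starts, and Enet extension.

import numpy as np

def lasso_semismooth_newton(X, y, lam, max_iter=50):
    # Solve min 0.5*||X b - y||^2 + lam*||b||_1 via primal-dual
    # active-set iterations; assumes the active submatrix X_A has
    # full column rank.
    p = X.shape[1]
    b = np.zeros(p)
    d = X.T @ (y - X @ b)                 # dual variable; KKT: d = lam * sign(b)
    for _ in range(max_iter):
        u = b + d
        active = np.abs(u) > lam          # support estimate from the KKT system
        if not active.any():
            return np.zeros(p)            # lam at or above lam_max
        s = np.sign(u[active])
        XA = X[:, active]
        b_new = np.zeros(p)
        # Newton step: the KKT equations restricted to the active set are linear.
        b_new[active] = np.linalg.solve(XA.T @ XA, XA.T @ y - lam * s)
        if np.allclose(b_new, b):
            break                         # active set and solution have stabilized
        b = b_new
        d = X.T @ (y - X @ b)
    return b

Once the correct support is identified, the restricted linear solve reproduces the same solution on the next pass, which mirrors the one-step local convergence described above.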
We study the problem of bounding path-dependent expectations (within any finite time horizon $d$) over the class of discrete-time martingales whose marginal distributions lie within a prescribed tolerance of a given collection of benchmark marginal distributions. This problem is a relaxation of the martingale optimal transport (MOT) problem and is motivated by applications to super-hedging in financial markets. We show that the empirical version of our relaxed MOT problem can be approximated within $O\left(n^{-1/2}\right)$ error, where $n$ is the number of samples of each of the individual marginal distributions (generated independently), using a suitably constructed finite-dimensional linear programming problem.
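Since the bound is computed by a finite-dimensional linear program, a small worked instance may help. The sketch below solves an exact two-period martingale optimal transport LP over discrete marginals with scipy.optimize.linprog; the grids, marginals, and payoff are assumptions for illustration, and the marginal-tolerance relaxation of the abstract is omitted.

import numpy as np
from scipy.optimize import linprog

# Discrete marginals in convex order: X ~ mu on x_grid, Y ~ nu on y_grid.
x_grid, mu = np.array([-0.5, 0.5]), np.array([0.5, 0.5])
y_grid, nu = np.array([-1.0, 0.0, 1.0]), np.array([0.25, 0.5, 0.25])
m, k = len(x_grid), len(y_grid)
payoff = np.abs(y_grid[None, :] - x_grid[:, None])   # example payoff |Y - X|

rows, rhs = [], []
for i in range(m):                     # first marginal: sum_j p[i, j] = mu[i]
    a = np.zeros((m, k)); a[i, :] = 1.0
    rows.append(a.ravel()); rhs.append(mu[i])
for j in range(k):                     # second marginal: sum_i p[i, j] = nu[j]
    a = np.zeros((m, k)); a[:, j] = 1.0
    rows.append(a.ravel()); rhs.append(nu[j])
for i in range(m):                     # martingale: E[Y - X | X = x_i] = 0
    a = np.zeros((m, k)); a[i, :] = y_grid - x_grid[i]
    rows.append(a.ravel()); rhs.append(0.0)

# Maximize E[payoff] over martingale couplings: negate the objective for linprog.
res = linprog(-payoff.ravel(), A_eq=np.array(rows), b_eq=np.array(rhs),
              bounds=(0, None))
print("super-hedging upper bound:", -res.fun)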
