
Autoregressive exogenous (ARX) systems are a general class of input-output dynamical systems used for modeling stochastic linear dynamical systems (LDS), including partially observable LDS such as linear quadratic Gaussian (LQG) systems. In this work, we study the problem of system identification and adaptive control of unknown ARX systems. We provide finite-time learning guarantees for ARX systems under both open-loop and closed-loop data collection. Using these guarantees, we design adaptive control algorithms for unknown ARX systems with arbitrary strongly convex or convex quadratic regulating costs. Under strongly convex cost functions, we design an adaptive control algorithm based on online gradient descent to design and update the controllers, which are constructed via a convex controller reparametrization. We show that our algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret via an explore-and-commit approach, and that if the model estimates are updated in epochs using closed-loop data collection, it attains the optimal regret of $\text{polylog}(T)$ after $T$ time-steps of interaction. For the case of convex quadratic cost functions, we propose an adaptive control algorithm that deploys the optimism in the face of uncertainty principle to design the controller. In this setting, we show that the explore-and-commit approach has a regret upper bound of $\tilde{\mathcal{O}}(T^{2/3})$, and that adaptive control with continuous model estimate updates attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time-steps.
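To make the identification step concrete, the sketch below regresses ARX parameters from open-loop input-output data with ordinary least squares. It is not the paper's algorithm (which comes with finite-time guarantees, closed-loop data collection, and the adaptive control layer); the system order, coefficients, and noise level are made-up values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar ARX(2, 2) system:
#   y_t = a1*y_{t-1} + a2*y_{t-2} + b1*u_{t-1} + b2*u_{t-2} + e_t
a_true = np.array([0.6, -0.2])
b_true = np.array([1.0, 0.5])
T = 2000

u = rng.normal(size=T)            # open-loop exploratory input
e = 0.1 * rng.normal(size=T)      # process noise
y = np.zeros(T)
for t in range(2, T):
    y[t] = a_true @ y[t-2:t][::-1] + b_true @ u[t-2:t][::-1] + e[t]

# Each regression row collects the lagged outputs and inputs
# [y_{t-1}, y_{t-2}, u_{t-1}, u_{t-2}] that predict y_t.
Phi = np.column_stack([y[1:T-1], y[0:T-2], u[1:T-1], u[0:T-2]])
target = y[2:T]

# Ordinary least squares gives the ARX parameter estimates.
theta_hat, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta_hat, 3))
print("true      [a1, a2, b1, b2]:", np.concatenate([a_true, b_true]))
```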
In this paper, a general algorithm is proposed for rate analysis and code design of linear index coding problems. Specifically, a solution to the minimum-rank matrix completion problem over finite fields that represents the linear index coding problem is devised in order to find the optimal transmission rate for a given vector length and field size. The new approach can be applied to both scalar and vector linear index coding.
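To illustrate the min-rank formulation this abstract refers to, the toy script below exhaustively completes a small partially specified matrix over GF(2) and reports the smallest achievable rank. It is a brute-force illustration of the problem, not the algorithm proposed in the paper; the helper `gf2_rank` and the example side-information pattern (a 3-cycle) are my own choices.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    A = M.copy() % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]          # move pivot row up
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] = (A[r] + A[rank]) % 2          # eliminate the column
        rank += 1
    return rank

# Partially specified fitting matrix: 1 on the diagonal (each receiver
# decodes its own message), 0 where the receiver lacks that message as
# side information, None where the entry is free (side information held).
partial = [[1,    None, 0   ],
           [0,    1,    None],
           [None, 0,    1   ]]

free = [(i, j) for i, row in enumerate(partial)
        for j, v in enumerate(row) if v is None]

best_rank, best_M = None, None
for values in itertools.product([0, 1], repeat=len(free)):
    M = np.array([[v if v is not None else 0 for v in row] for row in partial])
    for (i, j), v in zip(free, values):
        M[i, j] = v
    r = gf2_rank(M)
    if best_rank is None or r < best_rank:
        best_rank, best_M = r, M

# The minimum rank equals the length of an optimal scalar linear code over GF(2).
print("minimum GF(2) rank =", best_rank)
print(best_M)
```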
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in control theory, machine learning, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic algorithm replaces the rank function with the nuclear norm (the sum of the singular values) of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic successfully finds the minimum-rank solution of a linear constraint set. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our conditions for success with overwhelming probability, provided the number of constraints is appropriately large. Finally, we give empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic's performance in non-asymptotic scenarios.
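The heuristic itself is easy to state as a convex program. Below is a minimal sketch, assuming cvxpy with its default conic solver is available: it minimizes the nuclear norm subject to random Gaussian affine measurements of a low-rank matrix. The problem sizes are arbitrary, and exact recovery should only be expected when the number of measurements is sufficiently large relative to the rank, in line with the probabilistic analysis the abstract describes.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Random rank-2 ground truth and m random affine measurements b_i = <A_i, X0>.
n, r, m = 8, 2, 50
X0 = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
A_ops = rng.normal(size=(m, n, n))
b = np.array([np.sum(A_ops[i] * X0) for i in range(m)])

# Nuclear-norm heuristic: minimize the sum of singular values of X subject
# to the affine constraints <A_i, X> = b_i.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A_ops[i], X)) == b[i] for i in range(m)]
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```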
Expander graphs have recently been proposed for constructing efficient compressed sensing algorithms. In particular, it has been shown that any $n$-dimensional vector that is $k$-sparse (with $k \ll n$) can be fully recovered using $O(k\log\frac{n}{k})$ measurements and only $O(k\log n)$ simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond $3/4$ and show that, with the same number of measurements, only $O(k)$ recovery iterations are required, which is a significant improvement when $n$ is large. In fact, full recovery can be accomplished by at most $2k$ very simple iterations. The number of iterations can be made arbitrarily close to $k$, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to obtain explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the $k$ significant elements of an almost $k$-sparse signal and then, using very simple optimization techniques, finds in sublinear time a $k$-sparse signal that approximates the original signal with very high precision.
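The flavor of this style of recovery can be conveyed with a simplified gap-driven sketch: each iteration looks for a coordinate whose neighboring measurements mostly agree on the same nonzero residual (gap) and applies that value as an update. This is not the paper's exact algorithm, threshold, or expander construction; the measurement matrix here is just a random sparse binary matrix (a stand-in for a true expander adjacency matrix), the signal is integer-valued to keep the gap comparisons exact, and recovery on a given random instance is not guaranteed.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse binary measurement matrix: each of the n signal coordinates
# connects to d randomly chosen measurements.
n, m, d, k = 200, 100, 9, 4
A = np.zeros((m, n), dtype=int)
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1

# k-sparse integer-valued signal and its measurements.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.integers(1, 10, size=k)
y = A @ x

# Gap-driven recovery: repeatedly find a coordinate whose neighboring
# measurements mostly share the same nonzero gap, and apply that update.
x_hat = np.zeros(n)
for _ in range(2 * k + 10):            # cap; the paper shows ~2k iterations suffice
    gap = y - A @ x_hat
    if np.allclose(gap, 0):
        break
    updated = False
    for j in range(n):
        neigh = np.flatnonzero(A[:, j])
        g, votes = Counter(gap[neigh]).most_common(1)[0]
        if g != 0 and votes > d // 2:  # a clear majority of neighbors agree
            x_hat[j] += g
            updated = True
            break
    if not updated:
        break

print("exact recovery:", np.allclose(x_hat, x))
```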