We consider the problem of controlling an unknown linear time-invariant dynamical system from a single chain of black-box interactions, with no access to resets or offline simulation. Under the assumption that the system is controllable, we give the first efficient algorithm that attains sublinear regret over a single trajectory in the setting of online nonstochastic control. This resolves an open problem on the stochastic LQR problem, and does so in a more challenging setting that allows for adversarial perturbations and adversarially chosen, changing convex loss functions. We give finite-time regret bounds for our algorithm on the order of $2^{\tilde{O}(\mathcal{L})} + \tilde{O}(\text{poly}(\mathcal{L})\, T^{2/3})$ for general nonstochastic control, and $2^{\tilde{O}(\mathcal{L})} + \tilde{O}(\text{poly}(\mathcal{L})\, \sqrt{T})$ for black-box LQR, where $\mathcal{L}$ is the system size, an upper bound on the dimension. The crucial step is a new system identification method that is robust to adversarial noise, but incurs an exponential cost. To complete the picture, we investigate the complexity of the online black-box control problem and give a matching lower bound of $2^{\Omega(\mathcal{L})}$ on the regret, showing that the additional exponential cost is inevitable. This lower bound holds even in the noiseless setting, and applies to any black-box control method, randomized or deterministic.
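The following is a minimal sketch of the single-trajectory, black-box interaction protocol the abstract describes. The system matrices, perturbation sequence, loss function, and the zero-input controller are illustrative placeholders, not the paper's algorithm; the sketch only shows where a learner's control choice, the adversarial noise, and the cumulative loss used for regret accounting would sit.

```python
import numpy as np

# Sketch of one trajectory of online nonstochastic control.
# A, B and the perturbations w_t are hidden from the learner,
# which only observes states and incurred losses (no resets).

rng = np.random.default_rng(0)
T, n, m = 1000, 3, 2                      # horizon, state dim, input dim
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # unknown to learner
B = rng.standard_normal((n, m))                             # unknown to learner

def loss(x, u):
    # A convex per-step loss; in the general setting it may be
    # adversarially chosen and change every round.
    return float(x @ x + u @ u)

x = np.zeros(n)
learner_cost = 0.0
for t in range(T):
    u = np.zeros(m)                       # placeholder policy; the actual
                                          # algorithm would choose u_t online
    w = 0.01 * rng.standard_normal(n)     # bounded (possibly adversarial) noise
    learner_cost += loss(x, u)
    x = A @ x + B @ u + w                 # black-box state transition

# Regret compares learner_cost against the best policy in a comparator
# class run on the same perturbation and loss sequence.
print(f"cumulative loss over one trajectory of length {T}: {learner_cost:.2f}")
```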