
Energy-aware adaptive bi-Lipschitz embeddings

Added by Bubacarr Bah
Publication date: 2013
Language: English





We propose a dimensionality-reducing matrix design based on training data, with constraints on the matrix's Frobenius norm and number of rows. Our design criterion aims to preserve the distances between the data points in the dimensionality-reduced space as much as possible relative to their distances in the original data space. This approach can be considered a deterministic bi-Lipschitz embedding of the data points. We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game-theoretic tools. We also provide a generalization characterization of our matrix based on our sample data. We use compressive sensing problems as an example application, where the Frobenius norm design constraint translates into the sensing energy.
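
As a concrete illustration, the following is a minimal NumPy sketch of the constrained design problem described above: learn a matrix A with m rows and a Frobenius-norm budget so that ||A(x_i - x_j)|| stays close to ||x_i - x_j|| over training pairs. This is a plain projected-gradient illustration, not the AMUSE algorithm itself (which relies on game-theoretic tools); the function name and hyperparameters are assumptions made for this sketch.

import numpy as np

def learn_embedding(X, m, frob_budget=1.0, lr=1e-2, iters=500, seed=0):
    """X: (N, n) training data; returns A of shape (m, n) with ||A||_F <= frob_budget."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    A = rng.standard_normal((m, n)) * 0.1
    for _ in range(iters):
        # Sample a mini-batch of point pairs and penalize relative squared-distance distortion.
        i, j = rng.integers(0, N, size=32), rng.integers(0, N, size=32)
        D = X[i] - X[j]                              # (32, n) pairwise differences
        orig = np.sum(D**2, axis=1) + 1e-12          # ||x_i - x_j||^2
        proj = D @ A.T                               # (32, m) embedded differences
        resid = (np.sum(proj**2, axis=1) - orig) / orig
        # Gradient of mean((||A d||^2 - ||d||^2)^2 / ||d||^4) w.r.t. A.
        grad = 4 * (resid / orig)[:, None, None] * (proj[:, :, None] * D[:, None, :])
        A -= lr * grad.mean(axis=0)
        # Project onto the Frobenius-norm ball; this is where the
        # "sensing energy" constraint of the compressive sensing application enters.
        fn = np.linalg.norm(A)
        if fn > frob_budget:
            A *= frob_budget / fn
    return A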




Related research

A quasiconformal tree is a doubling metric tree in which the diameter of each arc is bounded above by a fixed multiple of the distance between its endpoints. In this paper we show that every quasiconformal tree bi-Lipschitz embeds in some Euclidean space, with the ambient dimension and the bi-Lipschitz constant depending only on the doubling and bounded turning constants of the tree. This answers Question 1.6 in [DV] (arXiv:2007.12297).
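
For reference, the two conditions at play can be written out explicitly (these are the standard definitions, not notation taken from the paper):

\[
  \frac{1}{L}\, d(x,y) \;\le\; |f(x) - f(y)| \;\le\; L\, d(x,y)
  \qquad \text{for all } x, y \in T
\]
says that $f \colon T \to \mathbb{R}^N$ is an $L$-bi-Lipschitz embedding, while
\[
  \operatorname{diam}\big([x,y]\big) \;\le\; C\, d(x,y)
  \qquad \text{for the arc } [x,y] \text{ joining } x, y \in T
\]
is the bounded-turning condition defining a quasiconformal tree. The result above bounds $L$ and the ambient dimension $N$ in terms of $C$ and the doubling constant alone.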
The problem of computing a bi-Lipschitz embedding of a graphical metric into the line with minimum distortion has received a lot of attention. The best-known approximation algorithm computes an embedding with distortion $O(c^2)$, where $c$ denotes the optimal distortion [Bădoiu et al. 2005]. We present a bi-criteria approximation algorithm that extends the above results to the setting of outliers. Specifically, we say that a metric space $(X,\rho)$ admits a $(k,c)$-embedding if there exists $K \subset X$, with $|K| = k$, such that $(X \setminus K, \rho)$ admits an embedding into the line with distortion at most $c$. Given $k \geq 0$ and a metric space that admits a $(k,c)$-embedding for some $c \geq 1$, our algorithm computes a $(\mathrm{poly}(k, c, \log n), \mathrm{poly}(c))$-embedding in polynomial time. This is the first algorithmic result for outlier bi-Lipschitz embeddings. Prior to our work, comparable outlier embeddings were known only for the case of additive distortion.
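
To make the $(k,c)$-embedding notion concrete, here is a small hypothetical Python helper that measures the distortion of a given line embedding, together with a brute-force search over outlier sets $K$; it is exponential in $k$ and meant only to illustrate the definition, not the paper's polynomial-time algorithm:

import numpy as np
from itertools import combinations

def line_distortion(D, f, keep):
    """Distortion of the line embedding f restricted to indices `keep`.
    D: (N, N) metric; f: (N,) positions on the line (assumed pairwise distinct)."""
    if len(keep) < 2:
        return 1.0
    ratios = [abs(f[i] - f[j]) / D[i, j] for i, j in combinations(keep, 2)]
    return max(ratios) / min(ratios)   # expansion times contraction

def best_outlier_set(D, f, k):
    """Try every size-k outlier set K: exponential in k, illustration only."""
    N = len(f)
    best = (np.inf, ())
    for K in combinations(range(N), k):
        keep = [i for i in range(N) if i not in K]
        best = min(best, (line_distortion(D, f, keep), K))
    return best   # (distortion of X \ K, the removed set K)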
Ruida Zhou, Chao Gan, Jing Yan (2018)
In this paper, we propose a cost-aware cascading bandits model, a new variant of multi-armed bandits with cascading feedback, by considering the random cost of pulling arms. In each step, the learning agent chooses an ordered list of items and examines them sequentially, until a certain stopping condition is satisfied. Our objective is then to maximize the expected net reward in each step, i.e., the reward obtained in each step minus the total cost incurred in examining the items, by deciding the ordered list of items as well as when to stop examination. We study both the offline and online settings, depending on whether the state and cost statistics of the items are known beforehand. For the offline setting, we show that the Unit Cost Ranking with Threshold 1 (UCR-T1) policy is optimal. For the online setting, we propose a Cost-aware Cascading Upper Confidence Bound (CC-UCB) algorithm, and show that the cumulative regret scales in $O(\log T)$. We also provide a lower bound for all $\alpha$-consistent policies, which scales in $\Omega(\log T)$ and matches our upper bound. The performance of the CC-UCB algorithm is evaluated with both synthetic and real-world data.
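
The following is a schematic sketch of a cost-aware cascading UCB rule in the spirit of CC-UCB. The model details assumed here (Bernoulli item states, known per-item examination costs, reward 1 on the first success) are simplifications for illustration and need not match the paper's exact formulation:

import numpy as np

def cc_ucb(probs, costs, horizon=10000, seed=0):
    """probs: true success probabilities (unknown to the agent); costs: known."""
    rng = np.random.default_rng(seed)
    probs, costs = np.asarray(probs, float), np.asarray(costs, float)
    n = len(probs)
    pulls, wins = np.ones(n), np.zeros(n)    # one phantom pull each to avoid /0
    total_reward = 0.0
    for t in range(1, horizon + 1):
        ucb = np.minimum(wins / pulls + np.sqrt(1.5 * np.log(t) / pulls), 1.0)
        order = np.argsort(-(ucb - costs))   # ordered list: rank by optimistic net gain
        for i in order:
            if ucb[i] <= costs[i]:
                break                        # stop: optimistic marginal net reward <= 0
            total_reward -= costs[i]         # pay to examine item i
            outcome = rng.random() < probs[i]
            pulls[i] += 1
            wins[i] += outcome
            if outcome:                      # cascade stops at the first success
                total_reward += 1.0
                break
    return total_reward

The inner loop captures the two decisions in the model: the ordered list (the optimistic ranking) and when to stop (once the optimistic success probability no longer covers the examination cost).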
In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and action sequences. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.
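
A minimal sketch of such a forward-prediction objective, assuming low-dimensional states, might look as follows in PyTorch; the module sizes, the stop-gradient on the target embedding, and all names are assumptions of this sketch rather than details from the paper:

import torch
import torch.nn as nn

class ForwardPredictor(nn.Module):
    def __init__(self, state_dim, action_dim, seq_len, emb_dim=64):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                       nn.Linear(128, emb_dim))
        self.action_enc = nn.Sequential(nn.Linear(action_dim * seq_len, 128), nn.ReLU(),
                                        nn.Linear(128, emb_dim))
        # Predict the embedding of the future state from (state, action-sequence) embeddings.
        self.predict = nn.Linear(2 * emb_dim, emb_dim)

    def loss(self, state, action_seq, future_state):
        zs = self.state_enc(state)                    # state embedding
        za = self.action_enc(action_seq.flatten(1))   # action-sequence embedding
        z_pred = self.predict(torch.cat([zs, za], dim=-1))
        with torch.no_grad():                         # stop-gradient target: an assumption here
            z_tgt = self.state_enc(future_state)
        return ((z_pred - z_tgt) ** 2).mean()         # forward-prediction error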
Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces such as the [0,1] interval, where similar actions are guaranteed to have similar rewards. A central theme here is the adaptive discretization of the action space, which gradually "zooms in" on its more promising regions. The goal is to take advantage of "nicer" problem instances while retaining near-optimal worst-case performance. While the stochastic version of the problem is well-understood, the general version with adversarial rewards is not. We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds. In particular, we recover the worst-case optimal regret bound for the adversarial version, and the instance-dependent regret bound for the stochastic version. Further, an application of our algorithm to dynamic pricing (where a seller repeatedly adjusts prices for a product) enjoys these regret bounds without any smoothness assumptions.
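
For intuition, here is a compact sketch of adaptive discretization ("zooming") on the [0,1] arm space in the stochastic setting; the adversarial-rewards algorithm contributed by the paper is more involved, and the activation rule and confidence radii below follow the standard stochastic zooming recipe rather than the paper's method:

import numpy as np

def zooming(reward_fn, horizon=5000, seed=0):
    """reward_fn: mean reward as a function of the arm x in [0, 1]."""
    rng = np.random.default_rng(seed)
    active = [0.5]                  # active arm centers in [0, 1]
    pulls, sums = [0], [0.0]
    for _ in range(horizon):
        # Confidence radius shrinks as an arm accumulates pulls.
        rad = [np.sqrt(2 * np.log(horizon) / max(n, 1)) for n in pulls]
        idx = int(np.argmax([s / max(n, 1) + 2 * r
                             for s, n, r in zip(sums, pulls, rad)]))
        x = active[idx]
        sums[idx] += reward_fn(x) + 0.05 * rng.standard_normal()   # noisy reward
        pulls[idx] += 1
        # Zoom in: if some point is uncovered by all confidence balls, activate it.
        y = rng.random()
        if all(abs(y - c) > r for c, r in zip(active, rad)):
            active.append(y); pulls.append(0); sums.append(0.0)
    best = int(np.argmax([s / max(n, 1) for s, n in zip(sums, pulls)]))
    return active[best]

# Example usage: zooming(lambda x: 1 - abs(x - 0.7)) should return a point near 0.7.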
