We propose a new geometric method for measuring the quality of representations obtained from deep learning. Our approach, called Random Polytope Descriptor, provides an efficient description of data points based on the construction of random convex polytopes. We demonstrate the use of our technique by qualitatively comparing the behavior of classic and regularized autoencoders. This reveals that regularizing autoencoder networks may degrade out-of-distribution detection performance in latent space. While our technique is similar in spirit to $k$-means clustering, it achieves a significantly better false-positive/false-negative balance in clustering tasks on autoencoded datasets.
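A minimal sketch of one way such a descriptor could be realized, assuming the polytope facets come from hyperplanes tangent to the unit sphere (as in the construction described below) and using signed facet slacks as the per-point description; the function names, sizes, and the out-of-distribution rule are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def random_tangent_facets(m, d, rng):
    """Unit normals u_1..u_m of m hyperplanes {x : <u, x> = 1}
    tangent to the unit sphere in R^d (uniform random directions)."""
    U = rng.standard_normal((m, d))
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def polytope_descriptor(X, U):
    """Signed slack of every point to every facet:
    slack[i, j] = 1 - <u_j, x_i> (> 0 means inside that halfspace).
    Hypothetical descriptor choice for illustration."""
    return 1.0 - X @ U.T

rng = np.random.default_rng(0)
U = random_tangent_facets(m=64, d=8, rng=rng)   # assumed sizes
Z = 0.5 * rng.standard_normal((100, 8))         # toy latent codes
D = polytope_descriptor(Z, U)
flagged = (D < 0).any(axis=1)  # e.g. flag points that exit the polytope
```

Unlike a $k$-means assignment, which reduces a point to its nearest centroid, a slack vector of this kind retains how the point sits relative to every facet, which is one plausible source of the finer in/out distinction mentioned above.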
A two-step model for generating random polytopes is considered. For parameters $d$, $m$, and $p$, the first step is to generate a simple polytope $P$ whose facets are given by $m$ uniform random hyperplanes tangent to the unit sphere in $\mathbb{R}^d$ …
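In symbols, one standard way to write this first step (with $u_i$ the unit normal of the $i$-th tangent hyperplane) is:

```latex
P \;=\; \bigcap_{i=1}^{m} \bigl\{\, x \in \mathbb{R}^{d} :
        \langle u_i, x \rangle \le 1 \,\bigr\},
\qquad u_1, \dots, u_m \ \text{i.i.d. uniform on } S^{d-1}.
```

The parameter $p$ presumably enters in the second step, which is cut off in this snippet.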
Let $K$ be a convex body in $\mathbb{R}^n$ and $f : \partial K \rightarrow \mathbb{R}_+$ a continuous, strictly positive function with $\int\limits_{\partial K} f(x)\, d\mu_{\partial K}(x) = 1$. We give an upper bound for the approximation of $K$ in the symmet…
Suppose we choose $N$ points uniformly at random from a convex body in $d$ dimensions. How large must $N$ be, asymptotically with respect to $d$, so that the convex hull of the points is nearly as large as the convex body itself? It was shown by Dyer-…
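The question is easy to probe numerically in low dimension. A small experiment, assuming the convex body is the unit ball and measuring the hull's volume fraction with SciPy's `ConvexHull` (an illustration of the setup only, not the paper's asymptotic analysis):

```python
import math
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume_fraction(n, d, rng):
    """Volume of the convex hull of n uniform points in the unit
    ball in R^d, as a fraction of the ball's volume."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # uniform directions
    pts = g * rng.random((n, 1)) ** (1.0 / d)       # radii ~ U^(1/d)
    ball_vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return ConvexHull(pts).volume / ball_vol

rng = np.random.default_rng(1)
for n in (50, 500, 5000):
    print(n, round(hull_volume_fraction(n, d=3, rng=rng), 3))
```

Already at $d=3$ the fraction climbs slowly with $n$; the abstract's question is how this sample-size requirement scales as $d$ grows.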
We present an improved algorithm for properly learning convex polytopes in the realizable PAC setting from data with a margin. Our learning algorithm constructs a consistent polytope as an intersection of about $t \log t$ halfspaces with margins in ti…
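A greedy heuristic in the spirit of "a consistent polytope as an intersection of halfspaces with margin" can be sketched as follows: repeatedly fit a max-margin halfspace that keeps all positive examples, and discard the negatives it cuts away. This is an assumption-laden sketch for intuition, not the paper's algorithm or its $t \log t$ guarantee:

```python
import numpy as np
from sklearn.svm import LinearSVC

def greedy_polytope(X_pos, X_neg, max_facets=20):
    """Accumulate halfspaces {x : <w, x> + b >= 0} that all contain
    X_pos; stop once every negative violates some facet (sketch)."""
    facets, neg = [], X_neg
    for _ in range(max_facets):
        if len(neg) == 0:
            break
        X = np.vstack([X_pos, neg])
        y = np.r_[np.ones(len(X_pos)), -np.ones(len(neg))]
        clf = LinearSVC(C=10.0).fit(X, y)    # max-margin separator
        w, b = clf.coef_[0], clf.intercept_[0]
        if (X_pos @ w + b < 0).any():        # must keep all positives
            break
        facets.append((w, b))
        neg = neg[neg @ w + b >= 0]          # negatives not yet cut away
    return facets
```

Each accepted facet shrinks the candidate polytope while preserving consistency on the positives; the interesting question, addressed by results like the one abstracted here, is how few such facets suffice when the data has a margin.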
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learn…