Rate distortion theory is concerned with optimally encoding a given signal class $\mathcal{S}$ using a budget of $R$ bits, as $R\to\infty$. We say that $\mathcal{S}$ can be compressed at rate $s$ if we can achieve an error of $\mathcal{O}(R^{-s})$ for encoding $\mathcal{S}$; the supremal compression rate is denoted $s^\ast(\mathcal{S})$. Given a fixed coding scheme, there usually are elements of $\mathcal{S}$ that are compressed at a higher rate than $s^\ast(\mathcal{S})$ by the given coding scheme; we study the size of this set of signals. We show that for certain nice signal classes $\mathcal{S}$, a phase transition occurs: We construct a probability measure $\mathbb{P}$ on $\mathcal{S}$ such that for every coding scheme $\mathcal{C}$ and any $s > s^\ast(\mathcal{S})$, the set of signals encoded with error $\mathcal{O}(R^{-s})$ by $\mathcal{C}$ forms a $\mathbb{P}$-null-set. In particular, our results apply to balls in Besov and Sobolev spaces that embed compactly into $L^2(\Omega)$ for a bounded Lipschitz domain $\Omega$. As an application, we show that several existing sharpness results concerning function approximation using deep neural networks are generically sharp. We also provide quantitative and non-asymptotic bounds on the probability that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits. This result is applied to the problem of approximately representing $f\in\mathcal{S}$ to within accuracy $\varepsilon$ by a (quantized) neural network that is constrained to have at most $W$ nonzero weights and is generated by an arbitrary learning procedure. We show that for any $s > s^\ast(\mathcal{S})$ there are constants $c,C$ such that, no matter how we choose the learning procedure, the probability of success is bounded from above by $\min\big\{1,\, 2^{C\cdot W\lceil\log_2(1+W)\rceil^2 - c\cdot\varepsilon^{-1/s}}\big\}$.
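To make the final bound concrete, the following minimal Python sketch (not part of the paper) evaluates the upper bound $\min\big\{1,\, 2^{C\cdot W\lceil\log_2(1+W)\rceil^2 - c\cdot\varepsilon^{-1/s}}\big\}$. The helper name and the numeric values of $c$, $C$, $s$, $W$, and $\varepsilon$ are illustrative placeholders; the actual constants depend on the signal class $\mathcal{S}$ and on the chosen $s > s^\ast(\mathcal{S})$ and are not specified here.

import math

def success_probability_upper_bound(W, eps, s, c, C):
    # Upper bound min{1, 2^(C*W*ceil(log2(1+W))^2 - c*eps^(-1/s))} on the
    # probability that a random f in S can be represented to accuracy eps
    # by a quantized network with at most W nonzero weights.
    # c, C, s are illustrative placeholders, not values from the paper.
    exponent = C * W * math.ceil(math.log2(1 + W)) ** 2 - c * eps ** (-1.0 / s)
    if exponent >= 0:
        return 1.0  # the bound is vacuous in this regime
    return 2.0 ** exponent  # underflows to 0.0 for very negative exponents

# Illustrative values: rearranging the exponent shows the bound drops below 1
# exactly when eps < (c / (C * W * ceil(log2(1+W))**2))**s, and it then decays
# like 2^(-c * eps**(-1/s)) as eps shrinks further.
print(success_probability_upper_bound(W=10_000, eps=1e-4, s=0.5, c=1.0, C=1.0))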