
Minimax Lower Bounds on Dictionary Learning for Tensor Data

Posted by: Zahra Shakeri
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





This paper provides fundamental limits on the sample complexity of estimating dictionaries for tensor data. The specific focus of this work is on $K$th-order tensor data and the case where the underlying dictionary can be expressed in terms of $K$ smaller dictionaries. It is assumed the data are generated by linear combinations of these structured dictionary atoms and observed through white Gaussian noise. This work first provides a general lower bound on the minimax risk of dictionary learning for such tensor data and then adapts the proof techniques for specialized results in the case of sparse and sparse-Gaussian linear combinations. The results suggest the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data: for unstructured data it scales linearly with the product of the dictionary dimensions, whereas for tensor-structured data the bound scales linearly with the sum of the products of the dimensions of the (smaller) component dictionaries. A partial converse is provided for the case of 2nd-order tensor data to show that the bounds in this paper can be tight. This involves developing an algorithm for learning highly-structured dictionaries from noisy tensor data. Finally, numerical experiments highlight the advantages associated with explicitly accounting for tensor data structure during dictionary learning.
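As a rough illustration of the generative model described above (a minimal sketch for the $K=2$ case; the dictionary sizes, sparsity level, and noise variance below are illustrative assumptions, not values taken from the paper), the tensor-structured observations can be simulated as sparse combinations of Kronecker-structured atoms corrupted by white Gaussian noise:

import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative sizes: the true dictionary D = kron(D2, D1) is the
# Kronecker product of two smaller component dictionaries.
m1, p1 = 8, 12    # dimensions of the first component dictionary
m2, p2 = 6, 10    # dimensions of the second component dictionary
s = 4             # number of nonzero coefficients per sample
sigma = 0.1       # standard deviation of the white Gaussian noise
N = 500           # number of observed samples

D1 = rng.standard_normal((m1, p1))
D1 /= np.linalg.norm(D1, axis=0)            # unit-norm atoms
D2 = rng.standard_normal((m2, p2))
D2 /= np.linalg.norm(D2, axis=0)
D = np.kron(D2, D1)                          # structured dictionary, (m1*m2) x (p1*p2)

Y = np.empty((m1 * m2, N))
for n in range(N):
    x = np.zeros(p1 * p2)
    support = rng.choice(p1 * p2, size=s, replace=False)
    x[support] = rng.standard_normal(s)      # sparse-Gaussian coefficient vector
    Y[:, n] = D @ x + sigma * rng.standard_normal(m1 * m2)

# The point of the structured bound: D1 and D2 together have m1*p1 + m2*p2
# free entries, far fewer than the m1*m2*p1*p2 entries of an unstructured D,
# which is why the required sample complexity can be much smaller.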




Read also

Dictionary learning is the problem of estimating the collection of atomic elements that provide a sparse representation of measured/collected signals or data. This paper finds fundamental limits on the sample complexity of estimating dictionaries for tensor data by proving a lower bound on the minimax risk. This lower bound depends on the dimensions of the tensor and parameters of the generative model. The focus of this paper is on second-order tensor data, with the underlying dictionaries constructed by taking the Kronecker product of two smaller dictionaries and the observed data generated by sparse linear combinations of dictionary atoms observed through white Gaussian noise. In this regard, the paper provides a general lower bound on the minimax risk and also adapts the proof techniques for equivalent results using sparse and Gaussian coefficient models. The reported results suggest that the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data.
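For reference, the minimax risk whose lower bound is being established is the usual worst-case expected estimation error (written here in a generic form; the exact error metric and dictionary class used in the paper may differ):

$$ \varepsilon^{*} \;=\; \inf_{\widehat{\mathbf{D}}}\ \sup_{\mathbf{D}\in\mathcal{D}}\ \mathbb{E}\,\big\|\widehat{\mathbf{D}}(\mathbf{Y}) - \mathbf{D}\big\|_{F}^{2}, $$

where the infimum runs over all estimators $\widehat{\mathbf{D}}$ computed from the noisy observations $\mathbf{Y}$ and the supremum runs over the class $\mathcal{D}$ of Kronecker-structured dictionaries, so a lower bound on $\varepsilon^{*}$ constrains every possible learning algorithm.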
Xiaohan Kang, Bruce Hajek, 2021
Recovery of the causal structure of dynamic networks from noisy measurements has long been a problem of intense interest across many areas of science and engineering. Many algorithms have been proposed, but no prior work compares the performance of these algorithms to converse bounds in a non-asymptotic setting. As a step toward addressing this problem, this paper gives lower bounds on the error probability for causal network support recovery in a linear Gaussian setting. The bounds are based on the Bhattacharyya coefficient for binary hypothesis testing problems with mixture probability distributions. Comparisons of the bounds with the performance achieved by two representative recovery algorithms are given for sparse random networks based on the Erdős-Rényi model.
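As background on the proof technique just mentioned (a standard fact about binary hypothesis testing, not a result specific to that paper): for an equiprobable test between densities $p_0$ and $p_1$, the Bhattacharyya coefficient $\rho = \int \sqrt{p_0(y)\,p_1(y)}\,\mathrm{d}y$ yields

$$ \tfrac{1}{2}\Big(1-\sqrt{1-\rho^{2}}\Big) \;\le\; P_e \;\le\; \tfrac{\rho}{2}, $$

so keeping $\rho$ bounded away from zero forces a nonvanishing probability of error, which is how such coefficients translate into converse bounds.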
Ray Li, Mary Wootters, 2021
Batch codes are a useful notion of locality for error-correcting codes, originally introduced in the context of distributed storage and cryptography. Many constructions of batch codes have been given, but few lower bound (limitation) results are known, leaving gaps between the best known constructions and best known lower bounds. Towards determining the optimal redundancy of batch codes, we prove a new lower bound on the redundancy of batch codes. Specifically, we study (primitive, multiset) linear batch codes that systematically encode $n$ information symbols into $N$ codeword symbols, with the requirement that any multiset of $k$ symbol requests can be obtained in disjoint ways. We show that such batch codes need $\Omega(\sqrt{Nk})$ symbols of redundancy, improving on the previous best lower bounds of $\Omega(\sqrt{N}+k)$ at all $k=n^{\varepsilon}$ with $\varepsilon\in(0,1)$. Our proof follows from analyzing the dimension of the order-$O(k)$ tensor of the batch code's dual code.
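To see why the stated bound is an improvement in that regime (a short check, assuming only that $N \ge n$, i.e., nonnegative redundancy): with $k = n^{\varepsilon}$ and $\varepsilon\in(0,1)$,

$$ \sqrt{Nk} = \sqrt{N}\cdot\sqrt{k} \gg \sqrt{N} \quad\text{and}\quad \sqrt{Nk} = k\cdot\sqrt{N/k} \gg k, \ \text{ since } k = n^{\varepsilon} = o(n) = o(N), $$

so $\sqrt{Nk}$ asymptotically dominates both terms of the earlier $\sqrt{N}+k$ bound.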
This paper studies pliable index coding, in which a sender broadcasts information to multiple receivers through a shared broadcast medium, and the receivers each have some messages a priori and want any message they do not have. An approach based on receivers that are absent from the problem was previously proposed to find lower bounds on the optimal broadcast rate. In this paper, we introduce new techniques to obtain better lower bounds, and derive the optimal broadcast rates for new classes of problems, including all problems with up to four absent receivers.
Or Ordentlich, 2016
Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber's Lemma, in which the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected onto a random subset of coordinates. Here, this result is applied to derive novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the $(1,\infty)$-RLL constraint.
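For context on the lemma being strengthened (the classical vector form, recalled here for the reader; the subset-projection refinement is the contribution of the cited work): if $X^{n}$ is a binary random vector and $Z^{n}$ is i.i.d. Bernoulli$(p)$ noise independent of $X^{n}$, Mrs. Gerber's Lemma states that

$$ \frac{1}{n} H\big(X^{n}\oplus Z^{n}\big) \;\ge\; h\Big(h^{-1}\big(\tfrac{1}{n}H(X^{n})\big) \ast p\Big), $$

where $h(\cdot)$ is the binary entropy function, $h^{-1}$ its inverse on $[0,\tfrac{1}{2}]$, and $a \ast p = a(1-p)+(1-a)p$.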