
Improved batch code lower bounds

Published by Ray Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Batch codes are a useful notion of locality for error-correcting codes, originally introduced in the context of distributed storage and cryptography. Many constructions of batch codes have been given, but few lower bound (limitation) results are known, leaving gaps between the best known constructions and the best known lower bounds. Towards determining the optimal redundancy of batch codes, we prove a new lower bound on the redundancy of batch codes. Specifically, we study (primitive, multiset) linear batch codes that systematically encode $n$ information symbols into $N$ codeword symbols, with the requirement that any multiset of $k$ symbol requests can be obtained in disjoint ways. We show that such batch codes need $\Omega(\sqrt{Nk})$ symbols of redundancy, improving on the previous best lower bounds of $\Omega(\sqrt{N}+k)$ at all $k = n^{\varepsilon}$ with $\varepsilon \in (0,1)$. Our proof follows from analyzing the dimension of the order-$O(k)$ tensor of the batch code's dual code.
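To make the improvement concrete, here is a short worked comparison of the two bounds at $k = n^{\varepsilon}$, under the illustrative assumption that $N = \Theta(n)$ (a constant-rate code); this calculation is ours, not quoted from the paper.

```latex
% Worked comparison at k = n^{\varepsilon}, assuming N = \Theta(n).
\[
  \sqrt{Nk} = \Theta\!\bigl(n^{(1+\varepsilon)/2}\bigr),
  \qquad
  \sqrt{N} + k = \Theta\!\bigl(n^{\max\{1/2,\ \varepsilon\}}\bigr).
\]
% Since (1+\varepsilon)/2 > \max\{1/2, \varepsilon\} for every
% \varepsilon \in (0,1), the new bound is polynomially larger:
\[
  \frac{\sqrt{Nk}}{\sqrt{N} + k}
  = \Theta\!\bigl(n^{\min\{\varepsilon,\ 1-\varepsilon\}/2}\bigr).
\]
```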




Read also

This paper studies pliable index coding, in which a sender broadcasts information to multiple receivers through a shared broadcast medium, and the receivers each have some messages a priori and want any message they do not have. An approach based on receivers that are absent from the problem was previously proposed to find lower bounds on the optimal broadcast rate. In this paper, we introduce new techniques to obtain better lower bounds, and derive the optimal broadcast rates for new classes of problems, including all problems with up to four absent receivers.
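As a concrete illustration of the model (not of the paper's lower-bound techniques), here is a toy brute-force search for the optimal linear broadcast rate of a small pliable index coding instance over GF(2), with one bit per message; the instance is hypothetical.

```python
from itertools import product

def rank_gf2(vectors):
    """Rank over GF(2) of vectors packed as integer bitmasks."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)   # clear b's leading bit in v if set
        if v:
            basis.append(v)
    return len(basis)

def satisfied(rows, side_info, m):
    """A receiver knowing {x_i : i in side_info} and the broadcast A x can
    decode x_j iff e_j lies in span(rows of A) + span{e_i : i in side_info}."""
    known = list(rows) + [1 << i for i in side_info]
    return any(rank_gf2(known + [1 << j]) == rank_gf2(known)
               for j in range(m) if j not in side_info)

def optimal_linear_rate(m, receivers):
    """Smallest number of broadcast bits over GF(2) satisfying all receivers."""
    for ell in range(1, m + 1):
        for rows in product(range(1, 1 << m), repeat=ell):
            if all(satisfied(rows, S, m) for S in receivers):
                return ell, rows
    return m, None

# Hypothetical instance: 3 one-bit messages; receiver i already has x_i.
print(optimal_linear_rate(3, [{0}, {1}, {2}]))   # -> (2, <encoding rows>)
```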
Jinbao Zhu, Qifa Yan, 2021
This paper investigates the problem of Secure Multi-party Batch Matrix Multiplication (SMBMM), where a user aims to compute the pairwise products $\mathbf{A} \divideontimes \mathbf{B} \triangleq (\mathbf{A}^{(1)}\mathbf{B}^{(1)}, \ldots, \mathbf{A}^{(M)}\mathbf{B}^{(M)})$ of two batches of massive matrices $\mathbf{A}$ and $\mathbf{B}$ that are generated from two sources, through $N$ honest-but-curious servers which share some common randomness. The matrices $\mathbf{A}$ (resp. $\mathbf{B}$) must be kept secure from any subset of up to $X_{\mathbf{A}}$ (resp. $X_{\mathbf{B}}$) servers even if they collude, and the user must not obtain any information about $(\mathbf{A}, \mathbf{B})$ beyond the products $\mathbf{A} \divideontimes \mathbf{B}$. A novel computation strategy for the single secure matrix multiplication problem (i.e., the case $M=1$) is first proposed, and then generalized to a strategy for SMBMM by means of cross-subspace alignment. The SMBMM strategy focuses on the tradeoff between recovery threshold (the number of successful computing servers that the user needs to wait for), system cost (upload cost, the amount of common randomness, and download cost), and system complexity (encoding, computing, and decoding complexities). Notably, compared with the known result by Chen et al., the strategy for the degraded case $X = X_{\mathbf{A}} = X_{\mathbf{B}}$ achieves a better recovery threshold, amount of common randomness, download cost, and decoding complexity when $X$ is less than some parameter threshold, while the performance with respect to the other measures remains identical.
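For intuition, here is a minimal sketch of the polynomial-sharing idea underlying secure matrix multiplication, for the simplest case ($M = 1$, $X_{\mathbf{A}} = X_{\mathbf{B}} = 1$); it is not the paper's cross-subspace-alignment strategy, and the field size and matrix dimensions are illustrative.

```python
import numpy as np

P = 1_000_003                      # illustrative prime field modulus
rng = np.random.default_rng(0)

def deg1_shares(M_, points):
    """Degree-1 masked shares F(z) = M_ + R*z (mod P); a single share is
    uniformly random, so one curious server learns nothing about M_."""
    R = rng.integers(0, P, size=M_.shape)
    return [(M_ + R * z) % P for z in points]

A = rng.integers(0, P, size=(2, 3))
B = rng.integers(0, P, size=(3, 2))

points = [1, 2, 3]                 # deg F*G = 2, so 3 servers suffice
FA = deg1_shares(A, points)
FB = deg1_shares(B, points)

# Each server multiplies its two shares; H(z) = F(z) G(z) has H(0) = A B.
H = [(Fa @ Fb) % P for Fa, Fb in zip(FA, FB)]

def interpolate_at_zero(xs, ys):
    """Lagrange interpolation of a matrix polynomial, evaluated at z = 0."""
    total = np.zeros_like(ys[0])
    for xi, yi in zip(xs, ys):
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * (num * pow(den, -1, P) % P)) % P
    return total

# The user downloads the 3 answers and recovers A @ B over the field.
assert np.array_equal(interpolate_at_zero(points, H), (A @ B) % P)
```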
Bingchen Qian, Xin Wang, 2021
Secure codes are widely studied combinatorial structures which were introduced for traitor tracing in broadcast encryption. Determining the maximum size of such structures is the main research objective. In this paper, we investigate lower bounds for secure codes and their related structures. First, we give some improved lower bounds for the rates of $2$-frameproof codes and $\overline{2}$-separable codes for slightly large alphabet sizes. Then we improve the lower bounds for the rates of some related structures, i.e., strongly $2$-separable matrices and $2$-cancellative set families. Finally, we give a general method to derive new lower bounds for strongly $t$-separable matrices and $t$-cancellative set families for $t \ge 3$.
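To make the $2$-frameproof property concrete, here is a toy brute-force checker using the standard descendant-set definition (a pair of codewords can "frame" a third only if the third agrees with one of the pair in every coordinate); the example codes are hypothetical, not from the paper.

```python
from itertools import combinations

def is_2_frameproof(code):
    """True iff no coalition of two codewords can frame a third codeword."""
    for a, b in combinations(code, 2):
        envelope = [{x, y} for x, y in zip(a, b)]   # coordinate-wise choices
        for c in code:
            if c in (a, b):
                continue
            if all(ci in e for ci, e in zip(c, envelope)):
                return False    # {a, b} can produce c, framing it
    return True

# Hypothetical length-3 codes over the alphabet {0, 1, 2}:
print(is_2_frameproof([(0, 0, 0), (1, 1, 1), (2, 2, 2)]))   # True
print(is_2_frameproof([(0, 0, 0), (1, 1, 1), (0, 1, 0)]))   # False
```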
Michael B. Baer, 2011
We present new lower and upper bounds for the compression rate of binary prefix codes optimized over memoryless sources according to two related exponential codeword length objectives. The objectives explored here are exponential-average length and exponential-average redundancy. The first of these relates to various problems involving queueing, uncertainty, and lossless communications, and it can be reduced to the second, which has properties more amenable to analysis. These bounds, some of which are tight, are in terms of a form of entropy and/or the probability of an input symbol, improving on recently discovered bounds of similar form. We also observe properties of optimal codes over the exponential-average redundancy utility.
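For reference, the exponential-average length objective here is taken to be Campbell's measure, which is the usual formulation; the parameter value and example source below are illustrative.

```python
import math

def exp_avg_length(probs, lengths, a):
    """Campbell's exponential average: (1/a) * log2(sum_i p_i * 2^(a*l_i)).
    As a -> 0 this tends to the ordinary average length sum_i p_i * l_i;
    larger a penalizes long codewords more heavily."""
    return math.log2(sum(p * 2 ** (a * l)
                         for p, l in zip(probs, lengths))) / a

probs, lengths = [0.5, 0.25, 0.25], [1, 2, 2]   # a Huffman code for this source
print(exp_avg_length(probs, lengths, 1e-9))     # ~1.5, the ordinary average
print(exp_avg_length(probs, lengths, 2.0))      # ~1.66, skewed toward long words
```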
This paper provides fundamental limits on the sample complexity of estimating dictionaries for tensor data. The specific focus of this work is on $K$th-order tensor data and the case where the underlying dictionary can be expressed in terms of $K$ smaller dictionaries. It is assumed the data are generated by linear combinations of these structured dictionary atoms and observed through white Gaussian noise. This work first provides a general lower bound on the minimax risk of dictionary learning for such tensor data and then adapts the proof techniques for specialized results in the case of sparse and sparse-Gaussian linear combinations. The results suggest the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data: for unstructured data it scales linearly with the product of the dictionary dimensions, whereas for tensor-structured data the bound scales linearly with the sum of the product of the dimensions of the (smaller) component dictionaries. A partial converse is provided for the case of 2nd-order tensor data to show that the bounds in this paper can be tight. This involves developing an algorithm for learning highly-structured dictionaries from noisy tensor data. Finally, numerical experiments highlight the advantages associated with explicitly accounting for tensor data structure during dictionary learning.
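The product-versus-sum scaling is easy to see by counting parameters in a Kronecker-structured dictionary (the $K = 2$ case); the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D1 = rng.standard_normal((8, 16))    # first component dictionary
D2 = rng.standard_normal((4, 10))    # second component dictionary

D = np.kron(D1, D2)                  # structured dictionary for 2nd-order data
print(D.shape)                       # (32, 160)
print(D.size)                        # unstructured parameter count: 5120
print(D1.size + D2.size)             # structured parameter count: 168
```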