This paper studies pliable index coding, in which a sender broadcasts information to multiple receivers through a shared broadcast medium; each receiver has some messages a priori as side information and wants any one message it does not have. An approach based on the receivers that are absent from the problem was previously proposed to find lower bounds on the optimal broadcast rate. In this paper, we introduce new techniques to obtain better lower bounds and derive the optimal broadcast rates for new classes of problems, including all problems with up to four absent receivers.
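To make the pliability requirement concrete, here is a small brute-force check, a sketch under assumed binary messages and a linear broadcast code (the instance, matrix, and function names are illustrative, not the paper's method): a receiver with side-information set S is satisfied if it can decode at least one message outside S from the broadcast.

```python
# Toy sketch: verify that a linear broadcast over GF(2) satisfies a
# pliable-index-coding instance (every receiver decodes *some* new message).

def row_space_contains(rows, target):
    """Check over GF(2) whether `target` lies in the span of `rows` (0/1 lists)."""
    basis = []
    for r in rows:
        r = list(r)
        for b in basis:
            pivot = next(i for i, v in enumerate(b) if v)
            if r[pivot]:
                r = [x ^ y for x, y in zip(r, b)]
        if any(r):
            basis.append(r)
    t = list(target)
    for b in basis:
        pivot = next(i for i, v in enumerate(b) if v)
        if t[pivot]:
            t = [x ^ y for x, y in zip(t, b)]
    return not any(t)

def satisfied(code_rows, side_info, m):
    """After cancelling known messages, the receiver decodes message j iff the
    unit vector for j, restricted to the unknown coordinates, lies in the row
    span of the code restricted to those coordinates. Pliable: any one j suffices."""
    unknown = [j for j in range(m) if j not in side_info]
    restricted = [[row[j] for j in unknown] for row in code_rows]
    for pos in range(len(unknown)):
        e = [1 if i == pos else 0 for i in range(len(unknown))]
        if row_space_contains(restricted, e):
            return True
    return False

m = 4
code = [[1, 1, 1, 1]]                                   # broadcast the XOR of all 4 messages
receivers = [{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}]  # each misses exactly one message
print([satisfied(code, s, m) for s in receivers])        # all True: broadcast rate 1 suffices
print(satisfied(code, {0, 1}, m))                        # False: adding this receiver forces a higher rate
```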
We characterise the optimal broadcast rate for a few classes of pliable-index-coding problems. This is achieved by devising new lower bounds that utilise the set of absent receivers to construct decoding chains with skipped messages. This work complements existing works by considering problems that are not complete-S, i.e., the problems considered here do not require all receivers with a certain side-information cardinality to be either present or absent. We show that for a certain class, the set of receivers is critical in the sense that adding any receiver strictly increases the broadcast rate.
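The following short sketch illustrates the complete-S property, assuming the standard definition in the pliable-index-coding literature (a problem is complete-S when the present receivers are exactly those whose side-information sets have a cardinality in S); the function name and examples are illustrative.

```python
# Check whether a pliable-index-coding instance is complete-S for some S.
from itertools import combinations

def is_complete_S(m, present):
    """Return the witness set S if the problem on m messages is complete-S, else None."""
    present = {frozenset(p) for p in present}
    sizes = {len(p) for p in present}
    # Complete-S (with S = sizes) requires every subset whose cardinality
    # lies in `sizes` to be present as a receiver.
    for s in sizes:
        for subset in combinations(range(m), s):
            if frozenset(subset) not in present:
                return None
    return sizes

print(is_complete_S(3, [{0}, {1}, {2}]))   # {1}: all size-1 side-information sets are present
print(is_complete_S(3, [{0}, {1}]))        # None: one size-1 receiver is absent
```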
Batch codes are a useful notion of locality for error-correcting codes, originally introduced in the context of distributed storage and cryptography. Many constructions of batch codes have been given, but few lower-bound (limitation) results are known, leaving gaps between the best known constructions and the best known lower bounds. Towards determining the optimal redundancy of batch codes, we prove a new lower bound on their redundancy. Specifically, we study (primitive, multiset) linear batch codes that systematically encode $n$ information symbols into $N$ codeword symbols, with the requirement that any multiset of $k$ symbol requests can be obtained in disjoint ways. We show that such batch codes need $\Omega(\sqrt{Nk})$ symbols of redundancy, improving on the previous best lower bound of $\Omega(\sqrt{N}+k)$ for all $k=n^{\varepsilon}$ with $\varepsilon\in(0,1)$. Our proof follows from analyzing the dimension of the order-$O(k)$ tensor of the batch code's dual code.
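As a toy illustration of the multiset batch property (not the paper's construction), the systematic single-parity code is a linear batch code for $k=2$: any multiset of two requests, including the same symbol twice, can be answered from disjoint sets of codeword positions. The function names below are illustrative.

```python
# Single-parity code as a k = 2 multiset batch code over GF(2).
import random

def encode(x):
    """Systematic encoding: n information bits followed by one parity bit."""
    parity = 0
    for b in x:
        parity ^= b
    return x + [parity]

def serve_two_requests(codeword, i, j):
    """Recover the two requested information bits from disjoint sets of
    codeword positions (the positions used are returned for illustration)."""
    n = len(codeword) - 1
    if i != j:
        return (codeword[i], [i]), (codeword[j], [j])
    # Same symbol requested twice: read position i directly, and recover it
    # again as the XOR of every other position (including the parity bit).
    others = [p for p in range(n + 1) if p != i]
    rec = 0
    for p in others:
        rec ^= codeword[p]
    return (codeword[i], [i]), (rec, others)

x = [random.randint(0, 1) for _ in range(6)]
c = encode(x)
(a, sa), (b, sb) = serve_two_requests(c, 2, 2)
assert a == b == x[2] and not set(sa) & set(sb)   # disjoint recovery sets
```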
We study the fundamental problem of index coding under an additional privacy constraint that requires each receiver to learn nothing about the collection of messages beyond its demanded messages and what is already available to it as side information. To enable such private communication, we allow the use of a collection of independent secret keys, each of which is shared amongst a subset of users and is known to the server. The goal is to study properties of the key access structures which make the problem feasible and then design encoding and decoding schemes that are efficient in the size of the server transmission as well as the sizes of the secret keys. We call this the private index coding problem. We begin by characterizing the key access structures that make private index coding feasible. We also give conditions to check whether a given linear scheme is a valid private index code. For up to three users, we characterize the rate region of feasible server transmission and key rates, and show that all feasible rates can be achieved using scalar linear coding and time sharing; we also show that scalar linear codes are sub-optimal for four receivers. The outer bounds used in the case of three users are extended to an arbitrary number of users and can be seen as a generalized version of the well-known polymatroidal bounds for standard non-private index coding. We also show that the presence of common randomness and private randomness does not change the rate region. Furthermore, we study the case where no keys are shared among the users and provide some necessary and sufficient conditions for feasibility in this setting under a weaker notion of privacy. If the server has the ability to multicast to any subset of users, we demonstrate how this flexibility can be used to provide privacy and characterize the minimum number of server multicasts required.
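A minimal toy sketch of how shared keys can make a broadcast private (this is an illustration of the one-time-pad idea only, not the paper's scheme; all names and the two-user instance are assumptions): two users with no side information, where user $i$ demands bit $x_i$ and shares key $k_i$ with the server.

```python
# Brute-force check that user 1's view reveals nothing about x2.
from itertools import product
from collections import Counter

def broadcast(x1, x2, k1, k2):
    """Server transmission: each demanded bit masked by its user's key."""
    return (x1 ^ k1, x2 ^ k2)

# User 1's view is (its demand x1, the broadcast, its key k1). Perfect privacy
# here means the view's distribution is the same whether x2 = 0 or x2 = 1.
views = {x2: Counter() for x2 in (0, 1)}
for x1, x2, k1, k2 in product((0, 1), repeat=4):
    views[x2][(x1, broadcast(x1, x2, k1, k2), k1)] += 1
print(views[0] == views[1])   # True: user 1 learns nothing about x2
```

User 1 still decodes its demand as $(x_1 \oplus k_1) \oplus k_1 = x_1$, so correctness and privacy hold simultaneously in this toy instance.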
This paper provides upper and lower bounds on the optimal guessing moments of a random variable taking values on a finite set when side information may be available. These moments quantify the number of guesses required for correctly identifying the unknown object and, similarly to Arıkan's bounds, they are expressed in terms of the Arimoto-Rényi conditional entropy. Although Arıkan's bounds are asymptotically tight, the improvement of the bounds in this paper is significant in the non-asymptotic regime. Relationships between moments of the optimal guessing function and the MAP error probability are also established, characterizing the exact locus of their attainable values. The bounds on optimal guessing moments serve to improve non-asymptotic bounds on the cumulant generating function of the codeword lengths for fixed-to-variable optimal lossless source coding without prefix constraints. Non-asymptotic bounds on the reliability function of discrete memoryless sources are derived as well. Relying on these techniques, lower bounds on the cumulant generating function of the codeword lengths are derived, by means of the smooth Rényi entropy, for source codes that allow decoding errors.
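For context, Arıkan's bounds referred to above take the following standard form (quoted for reference; the sharpened non-asymptotic bounds of this paper are not reproduced here). For the optimal guessing function $G^{*}$, obtained by guessing in decreasing order of posterior probability, and every $\rho>0$,
$$
\bigl(1+\ln M\bigr)^{-\rho}\, 2^{\rho\, H_{\frac{1}{1+\rho}}(X|Y)}
\;\le\; \mathbb{E}\bigl[G^{*}(X|Y)^{\rho}\bigr]
\;\le\; 2^{\rho\, H_{\frac{1}{1+\rho}}(X|Y)},
$$
where $M$ is the alphabet size of $X$, $H_{\alpha}(X|Y)$ denotes the Arimoto-Rényi conditional entropy of order $\alpha$, and logarithms are taken to base 2.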
Index coding, or broadcasting with side information, is a network coding problem of fundamental importance. In this problem, given a directed graph, each vertex represents a user demanding some information, and the neighborhood of each vertex represents the side information available to that user. The aim is to find an encoding into a minimum number of bits (the optimal rate) that, when broadcast, satisfies the demand of every user. Not only is the optimal rate intractable to compute, it is also very hard to characterize in terms of other well-studied graph parameters or via a simpler formulation, such as a linear program. Recently there has been a series of works that address this question and provide explicit schemes for index coding whose rates are given by the optimal value of a linear program involving well-studied quantities such as the local chromatic number or the partial clique-cover number. There has also been a recent attempt to combine the existing notions of local chromatic number and partial clique covering into a unified notion, the local partial clique cover (Arbabjolfaei and Kim, 2014). We present a novel, more general upper bound (encoding scheme) for optimal index coding, given as the minimum value of a linear program. Our bound combines the notions of local chromatic number and partial clique covering into a new definition of the local partial clique cover, which outperforms both of the previous bounds as well as the previous attempt at combining them. Further, we look at the upper bound derived recently by Thapa et al., 2015, and extend their $n$-$\mathsf{GIC}$ (Generalized Interlinked Cycle) construction to $(k,n)$-$\mathsf{GIC}$ graphs, which are a generalization of $k$-partial cliques.
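A hedged sketch of the classical clique-cover encoding that the local and partial clique-cover bounds refine (the graph, greedy heuristic, and function names below are illustrative assumptions, not the paper's scheme): users forming a clique of the side-information graph all know each other's messages, so a single XOR per clique satisfies every member of that clique.

```python
# Clique-cover index coding on a toy side-information graph.

def greedy_clique_cover(adj):
    """Greedily partition vertices into cliques; adj[v] is the set of users
    whose messages user v has as side information (assumed symmetric here)."""
    remaining = set(adj)
    cliques = []
    while remaining:
        v = min(remaining)
        clique = {v}
        for u in sorted(remaining - {v}):
            if all(u in adj[w] and w in adj[u] for w in clique):
                clique.add(u)
        cliques.append(clique)
        remaining -= clique
    return cliques

def encode(messages, cliques):
    """One broadcast symbol per clique: the XOR of its members' messages."""
    out = []
    for clique in cliques:
        s = 0
        for v in clique:
            s ^= messages[v]
        out.append(s)
    return out

def decode(user, broadcasts, cliques, messages):
    """User recovers its own message by XOR-ing out the side information it
    has about the other members of its clique."""
    for b, clique in zip(broadcasts, cliques):
        if user in clique:
            x = b
            for v in clique - {user}:
                x ^= messages[v]          # available as side information
            return x

# Example: 5 users; user i demands message i and knows the messages in adj[i].
adj = {0: {1}, 1: {0}, 2: {3, 4}, 3: {2, 4}, 4: {2, 3}}
messages = {i: i % 2 for i in range(5)}                  # arbitrary bits
cliques = greedy_clique_cover(adj)                       # e.g. [{0, 1}, {2, 3, 4}]
broadcasts = encode(messages, cliques)                   # 2 symbols instead of 5
assert all(decode(i, broadcasts, cliques, messages) == messages[i] for i in range(5))
```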