Index coding, a source coding problem over broadcast channels, has been a subject of both theoretical and practical interest since its introduction by Birk and Kol (1998). In short, the problem can be defined as follows: there is an input $\textbf{x} \triangleq (\textbf{x}_1, \dots, \textbf{x}_n)$, a set of $n$ clients who each desire a single symbol $\textbf{x}_i$ of the input, and a broadcaster whose goal is to send as few messages as possible to all clients so that each one can recover its desired symbol. Additionally, each client has some predetermined side information, corresponding to certain symbols of the input $\textbf{x}$, which we represent as the side information graph $\mathcal{G}$. The graph $\mathcal{G}$ has a vertex $v_i$ for each client and a directed edge $(v_i, v_j)$ indicating that client $i$ knows the $j$th symbol of the input. Given a fixed side information graph $\mathcal{G}$, we are interested in determining or approximating the broadcast rate of index coding on the graph, i.e., the minimum number of messages the broadcaster must transmit so that every client obtains its desired information. Using index coding schemes based on linear programs (LPs), we take a two-pronged approach to approximating the broadcast rate. First, extending earlier work on planar graphs, we focus on approximating the broadcast rate for special graph families such as graphs with small chromatic number and disk graphs. In certain cases, we are able to show that simple LP-based schemes give constant-factor approximations of the broadcast rate, which appear extremely difficult to obtain in the general case. Second, we provide several LP-based schemes for the general case which are not constant-factor approximations but which strictly improve on the prior best-known schemes.
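As a rough illustration of the flavor of LP-based upper bounds discussed here (a toy sketch, not one of the paper's specific programs), the snippet below computes the fractional clique cover number of a small 5-cycle side-information graph with `scipy`. That LP value upper-bounds the broadcast rate: each clique can be served by broadcasting a single XOR of its members' messages, and fractional weights correspond to splitting messages into blocks.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy side-information graph: 5 users on a cycle, user i knows the messages of
# its two neighbours (edges are bidirectional, so we treat the graph as undirected).
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_clique(S):
    return all(frozenset((u, v)) in edges for u, v in itertools.combinations(S, 2))

# Enumerate all cliques (feasible only for toy instances).
cliques = [S for r in range(1, n + 1)
           for S in itertools.combinations(range(n), r) if is_clique(S)]

# Fractional clique cover LP: minimise sum_C y_C  s.t.  sum_{C containing v} y_C >= 1.
# Each clique can be served by one XOR transmission, so the optimum upper-bounds
# the index coding broadcast rate.
c = np.ones(len(cliques))
A_ub = np.array([[-1.0 if v in C else 0.0 for C in cliques] for v in range(n)])
b_ub = -np.ones(n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print("fractional clique cover (upper bound on broadcast rate):", res.fun)  # 2.5
```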
In this paper, a general algorithm is proposed for rate analysis and code design of linear index coding problems. Specifically, a solution to the minimum-rank matrix completion problem over finite fields, which represents the linear index coding problem, is devised in order to find the optimal transmission rate for a given vector length and field size. The new approach can be applied to both scalar and vector linear index coding.
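To make the target quantity concrete, here is a brute-force sketch on a hypothetical toy instance (not the paper's completion algorithm): it computes the min-rank over GF(2) of a small side-information graph by enumerating all matrix completions. The min-rank equals the optimal scalar linear index code length over that field.

```python
import itertools
import numpy as np

# Toy directed side-information graph: a 5-cycle where user i knows x_{i-1} and
# x_{i+1} (bidirectional edges).
n = 5
edges = {(i, (i + 1) % n) for i in range(n)} | {((i + 1) % n, i) for i in range(n)}

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for col in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, col]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, col]:
                M[i] ^= M[r]
        r += 1
    return r

# A matrix "fits" the graph if M[i,i] = 1 and M[i,j] = 0 whenever j is not in
# user i's side information; the free entries live on the edges.
free = sorted(edges)
best = n
for bits in itertools.product([0, 1], repeat=len(free)):
    M = np.eye(n, dtype=np.uint8)
    for (i, j), b in zip(free, bits):
        M[i, j] = b
    best = min(best, rank_gf2(M))
print("min-rank over GF(2):", best)  # prints 3 for this graph
```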
While Shannon's mutual information has widespread applications in many disciplines, in practice it is often difficult to calculate accurately for high-dimensional variables because of the curse of dimensionality. This paper focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and approximation formulas that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem which allows efficient numerical solutions. Numerical simulation results confirm that our asymptotic formulas are highly accurate for approximating mutual information for large neural populations. In special cases, the approximation formulas are exactly equal to the true mutual information. We also discuss techniques of variable transformation and dimensionality reduction that facilitate computation of the approximations.
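As a small numerical illustration of why large-population asymptotics are useful, the sketch below compares exact mutual information with a generic Fisher-information-based approximation (in the spirit of Brunel–Nadal style asymptotics, not necessarily the exact formulas derived in this paper) for a scalar Gaussian stimulus encoded by $N$ neurons with independent Gaussian noise; the two agree closely once the population is large. The model and parameters are illustrative assumptions.

```python
import numpy as np

# Model: stimulus X ~ N(0, sx2); each of N neurons reports X + Gaussian noise of
# variance sn2.  Exact MI (nats): 0.5 * log(1 + N*sx2/sn2).
# Fisher approximation: I ~ H(X) - 0.5 * E[log(2*pi*e / J(x))], with J(x) = N/sn2.
sx2, sn2 = 1.0, 4.0
for N in (1, 10, 100, 1000, 10000):
    exact = 0.5 * np.log(1.0 + N * sx2 / sn2)
    fisher = 0.5 * np.log(2 * np.pi * np.e * sx2) - 0.5 * np.log(2 * np.pi * np.e * sn2 / N)
    print(f"N={N:6d}  exact={exact:.4f}  fisher_approx={fisher:.4f}")
```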
We study the fundamental problem of index coding under an additional privacy constraint that requires each receiver to learn nothing more about the collection of messages beyond its demanded messages from the server and what is available to it as side information. To enable such private communication, we allow the use of a collection of independent secret keys, each of which is shared amongst a subset of users and is known to the server. The goal is to study properties of the key access structures that make the problem feasible and then to design encoding and decoding schemes that are efficient in the size of the server transmission as well as in the sizes of the secret keys. We call this the private index coding problem. We begin by characterizing the key access structures that make private index coding feasible. We also give conditions to check whether a given linear scheme is a valid private index code. For up to three users, we characterize the rate region of feasible server transmission and key rates, and show that all feasible rates can be achieved using scalar linear coding and time sharing; we also show that scalar linear codes are sub-optimal for four receivers. The outer bounds used in the case of three users are extended to an arbitrary number of users and can be seen as a generalized version of the well-known polymatroidal bounds for standard non-private index coding. We also show that the presence of common randomness and private randomness does not change the rate region. Furthermore, we study the case where no keys are shared among the users and provide some necessary and sufficient conditions for feasibility in this setting under a weaker notion of privacy. If the server has the ability to multicast to any subset of users, we demonstrate how this flexibility can be used to provide privacy and characterize the minimum number of server multicasts required.
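As a baseline feasibility example (a deliberately simple and rate-inefficient toy construction, not one of the paper's optimal schemes), the sketch below gives each user an individual one-time-pad key shared with the server; the server broadcasts every demanded message masked by its owner's key, so each user decodes its own demand while other users see only uniformly random ciphertext.

```python
import secrets

MSG_LEN = 16  # bytes per message (illustrative choice)

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

n = 3
messages = [secrets.token_bytes(MSG_LEN) for _ in range(n)]  # x_1, ..., x_n held by the server
keys = [secrets.token_bytes(MSG_LEN) for _ in range(n)]      # k_i shared only with user i

# Server broadcast: one masked symbol per user.
broadcast = [xor(messages[i], keys[i]) for i in range(n)]

# User i decodes its demand; any other transmission is a one-time pad under an
# unknown key, so it reveals nothing beyond that user's own side information.
for i in range(n):
    assert xor(broadcast[i], keys[i]) == messages[i]
print("every user recovered its demand; all other transmissions stay hidden")
```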
We consider communication over a noisy network under randomized linear network coding. Possible error mechanisms include node or link failures, Byzantine behavior of nodes, or an overestimate of the network min-cut. Building on the work of Koetter and Kschischang, we introduce a probabilistic model for errors. We compute the capacity of this channel and define an error-correction scheme based on random sparse graphs together with a low-complexity decoding algorithm. By optimizing over the code degree profile, we show that this construction achieves the channel capacity with complexity that is jointly quadratic in the number of coded information bits and sublogarithmic in the error probability.
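For context, here is a sketch of the baseline randomized linear network coding mechanism over GF(2) in the error-free case (not the paper's sparse-graph error-correcting construction): the receiver collects random linear combinations, each carrying its coefficient header, until it can invert the system by Gaussian elimination.

```python
import numpy as np

rng = np.random.default_rng(1)

# k source packets of L bits each; the network delivers random GF(2) combinations
# of them, with the k coding coefficients carried in a header.
k, L = 4, 32
packets = rng.integers(0, 2, size=(k, L), dtype=np.uint8)

def rref_gf2(A, ncols):
    """Reduced row echelon form over GF(2), pivoting on the first ncols columns."""
    A = A.copy()
    r = 0
    for col in range(ncols):
        pivot = next((i for i in range(r, A.shape[0]) if A[i, col]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, col]:
                A[i] ^= A[r]
        r += 1
    return A, r

# Receiver: collect coded packets until k of them are linearly independent.
collected = np.zeros((0, k + L), dtype=np.uint8)
while True:
    header = rng.integers(0, 2, size=(1, k), dtype=np.uint8)  # random coefficients
    payload = header @ packets % 2                            # coded payload
    collected = np.concatenate([collected, np.concatenate([header, payload], axis=1)])
    reduced, rank = rref_gf2(collected, k)
    if rank == k:
        break

# With full rank, the reduced header block is the identity, so the payload block
# of the first k rows is exactly the original source packets.
decoded = reduced[:k, k:]
assert np.array_equal(decoded, packets)
print(f"decoded all {k} packets after receiving {collected.shape[0]} coded packets")
```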
Index coding, or broadcasting with side information, is a network coding problem of fundamental importance. In this problem, given a directed graph, each vertex represents a user with a demand for information, and the neighborhood of each vertex represents the side information available to that user. The aim is to find an encoding into the minimum number of bits (the optimal rate) that, when broadcast, is sufficient to meet the demand of every user. Not only is the optimal rate intractable, but it is also very hard to characterize in terms of other well-studied graph parameters or via a simpler formulation such as a linear program. Recently there has been a series of works that address this question and provide explicit schemes for index coding as the optimal value of a linear program, with rate given by well-studied properties such as the local chromatic number or the partial clique-cover number. There has also been a recent attempt to combine these existing notions of local chromatic number and partial clique covering into a unified notion, denoted the local partial clique cover (Arbabjolfaei and Kim, 2014). We present a novel generalized upper bound (encoding scheme) for optimal index coding, in the form of the minimum value of a linear program. Our bound also combines the notions of local chromatic number and partial clique covering into a new definition of the local partial clique cover, and it outperforms both of the previous bounds as well as the previous attempt at combining them. Further, we look at the upper bound recently derived by Thapa et al. (2015) and extend their $n$-$\mathsf{GIC}$ (Generalized Interlinked Cycle) construction to $(k,n)$-$\mathsf{GIC}$ graphs, which are a generalization of $k$-partial cliques.
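For reference, here is the plain clique-cover scheme that the local, partial, and GIC-based bounds above all refine, on a toy graph with illustrative messages: within a bidirected clique every user knows every other member's message, so one XOR per clique serves the entire clique.

```python
from functools import reduce
from operator import xor

# Toy instance: users 0-2 form one bidirected clique, users 3-4 another.
messages = {0: 0b0011, 1: 0b0101, 2: 0b1001, 3: 0b1111, 4: 0b0110}
side_info = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
clique_cover = [{0, 1, 2}, {3, 4}]

# Server: one broadcast symbol per clique (XOR of its members' messages).
broadcasts = [reduce(xor, (messages[u] for u in C)) for C in clique_cover]

# Each user XORs out the messages it already knows from its clique's symbol
# (a real user would use its side-information copies, which have the same values).
for i in messages:
    idx = next(j for j, C in enumerate(clique_cover) if i in C)
    known = clique_cover[idx] - {i}
    decoded = reduce(xor, (messages[u] for u in known), broadcasts[idx])
    assert decoded == messages[i]
print(f"{len(broadcasts)} broadcasts served {len(messages)} users")
```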