Traditionally, quantization is designed to minimize the reconstruction error of a data source. When considering downstream classification tasks, other measures of distortion can be of interest, such as the 0-1 classification loss. Furthermore, it is desirable that the performance of these quantizers not deteriorate once they are deployed into production, as relearning the scheme online is not always possible. In this work, we present a class of algorithms that learn distributed quantization schemes for binary classification tasks. Our method, called Regularized Classification-Aware Quantization, works by regularizing the 0-1 loss with the reconstruction error. It performs well on unseen data and is faster than previous methods by a factor that grows quadratically with the dataset size. We present experiments on synthetic mixture and bivariate Gaussian data and compare training, testing, and generalization errors with a family of benchmark quantization schemes from the literature.
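The trade-off described above can be sketched as a single objective: the empirical 0-1 classification loss plus a weighted reconstruction error. The following is a minimal illustrative sketch for a scalar quantizer; the function name `rcaq_objective`, the threshold/label parameterization, and the trade-off weight `lam` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def rcaq_objective(x, y, boundaries, reps, labels, lam):
    """Hypothetical objective for a scalar quantizer: empirical 0-1
    classification loss regularized by reconstruction MSE.
    boundaries: sorted cell edges; reps: cell reproduction points;
    labels: binary class label assigned to each cell; lam: trade-off weight."""
    cells = np.searchsorted(boundaries, x)      # quantization cell of each sample
    zero_one = np.mean(labels[cells] != y)      # 0-1 classification loss
    mse = np.mean((x - reps[cells]) ** 2)       # reconstruction error
    return zero_one + lam * mse

# Toy data: negative samples labeled 0, positive samples labeled 1.
x = np.array([-1.2, -0.3, 0.4, 1.5])
y = np.array([0, 0, 1, 1])
boundaries = np.array([0.0])                    # two cells: x < 0 and x >= 0
reps = np.array([-0.75, 0.95])                  # reproduction points per cell
labels = np.array([0, 1])                       # class label per cell
print(rcaq_objective(x, y, boundaries, reps, labels, lam=0.1))
```

Setting `lam = 0` recovers pure classification-driven quantizer design; a large `lam` recovers classical reconstruction-driven design.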
Applications where multiple users communicate with a common server and desire low latency are common and increasing. This paper studies a network with two source nodes, one relay node, and a destination node, where each source node wishes to transmit a sequence of messages, through the relay, to the destination, which is required to decode the messages under a strict delay constraint $T$. The network with a single source node has been studied in \cite{Silas2019}. We start by introducing two important tools: the delay spectrum, which generalizes delay-constrained point-to-point transmission, and concatenation, which, similar to time sharing, allows combinations of different codes in order to achieve a desired regime of operation. Using these tools, we generalize the two schemes previously presented in \cite{Silas2019} and propose a novel scheme which achieves optimal rates under a set of well-defined conditions. This novel scheme is further optimized to improve the achievable rates in scenarios where the conditions for optimality are not met.
This paper studies low-latency streaming codes for the multi-hop network. The source transmits a sequence of messages (streaming messages) to a destination through a chain of relays, where each hop is subject to packet erasures. Every source message has to be recovered perfectly at the destination within a delay constraint of $T$ time slots. In any sliding window of $T+1$ time slots, we assume no more than $N_j$ erasures are introduced by the $j$th hop channel. The capacity in the case of a single relay (a three-node network) was derived by Fong et al. [1]. While the converse derived for the three-node case can be extended to any number of nodes using a similar technique (analyzing the case where erasures on other links are consecutive), we demonstrate that the achievable scheme, which suggested a clever symbol-wise decode-and-forward strategy, cannot be straightforwardly extended without a loss in performance. The coding scheme for the three-node network, which was shown to achieve the upper bound, was ``state-independent'' (i.e., it does not depend on the specific erasure pattern). While this is a very desirable property, in this paper we suggest a ``state-dependent'' scheme (i.e., one which depends on the specific erasure pattern) and show that it achieves the upper bound up to the size of an additional header. Since, as we show, the size of the header does not depend on the field size, the gap between the achievable rate and the upper bound decreases as the field size increases.
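The sliding-window channel model above admits a compact check: an erasure pattern is admissible on hop $j$ if no window of $T+1$ consecutive slots contains more than $N_j$ erasures. A minimal sketch of that check (the function name and the 0/1 pattern encoding are illustrative assumptions):

```python
def admissible(erasures, T, N):
    """Check the sliding-window erasure model: in every window of T+1
    consecutive time slots, at most N erasures occur on this hop.
    erasures: list of 0/1 flags, 1 marking an erased packet."""
    w = T + 1
    return all(sum(erasures[i:i + w]) <= N
               for i in range(len(erasures) - w + 1))

pattern = [1, 1, 0, 0, 1, 0, 0, 0]
print(admissible(pattern, T=3, N=2))  # every window of 4 slots has <= 2 erasures
print(admissible(pattern, T=3, N=1))  # the first window already has 2 erasures
```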
Elad Domanovitz, Uri Erez (2019)
Communication over the i.i.d. Rayleigh slow-fading MAC is considered, where all terminals are equipped with a single antenna. Further, a communication protocol is considered where all users transmit at (just below) the symmetric capacity (per user) of the channel, a rate which is fed back (dictated) to the users by the base station. Tight bounds are established on the distribution of the rate attained by the protocol. In particular, these bounds characterize the probability that the dominant face of the MAC capacity region contains a symmetric rate point, i.e., that the considered protocol strictly attains the sum capacity of the channel. The analysis provides a non-asymptotic counterpart to the diversity-multiplexing tradeoff of the multiple access channel. Finally, a practical scheme based on integer-forcing and space-time precoding is shown to be an effective coding architecture for this communication scenario.
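For the two-user case, the event described above is easy to state concretely: the symmetric point $(C_{\mathrm{sum}}/2, C_{\mathrm{sum}}/2)$ lies on the dominant face exactly when each user's single-user capacity is at least $C_{\mathrm{sum}}/2$. A minimal sketch of that check for a two-user Gaussian MAC (function name and the specific gains/SNR are illustrative assumptions, not from the paper):

```python
import numpy as np

def symmetric_point_on_dominant_face(h, snr):
    """For a 2-user single-antenna Gaussian MAC with channel gains h,
    check whether the symmetric rate point (C_sum/2, C_sum/2) lies on
    the dominant face, i.e. C_sum/2 <= C_k for each single-user capacity."""
    c = np.log2(1 + snr * np.abs(h) ** 2)               # single-user capacities
    c_sum = np.log2(1 + snr * np.sum(np.abs(h) ** 2))   # sum capacity
    return bool(np.all(c_sum / 2 <= c)), c_sum / 2

ok, r_sym = symmetric_point_on_dominant_face(np.array([1.0, 0.9]), snr=10.0)
print(ok, round(r_sym, 3))
```

For a fading channel, averaging this indicator over random draws of `h` estimates the probability that the protocol attains the sum capacity.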
Elad Domanovitz, Uri Erez (2019)
A simple method is proposed for use in a scenario involving a single-antenna source node communicating with a destination node that is equipped with two antennas via multiple single-antenna relay nodes, where each relay is subject to an individual power constraint. Furthermore, ultra-reliable and low-latency communication is desired. The latter requirement translates to considering only schemes that make use of local channel state information. Whereas for a receiver equipped with a single antenna, distributed beamforming is a well-known and adequate solution, no straightforward extension is known. In this paper, a scheme is proposed based on a space-time diversity transformation that is applied as a front-end operation at the destination node. This results in an effective unitary channel matrix replacing the scalar coefficient corresponding to each user. Each relay node then inverts its associated channel matrix, which is the generalization of undoing the channel phase in the classical case of distributed beamforming to a single-antenna receiver, and then repeats the message over the resulting gain-only channel. In comparison to a single-antenna destination node, the method doubles the diversity order without requiring any channel state information at the receiver while at the same time retaining the array gain offered by the relays.
This paper considers the transmission of an infinite sequence of messages (a streaming source) over a packet erasure channel, where every source message must be recovered perfectly at the destination subject to a fixed decoding delay. While the capacity of a channel that introduces only bursts of erasures is well known, only recently, the capacity of a channel with either one burst of erasures or multiple arbitrary erasures in any fixed-sized sliding window has been established. However, the codes shown to achieve this capacity are either non-explicit constructions (proven to exist) or explicit constructions that require large field size that scales exponentially with the delay. This work describes an explicit rate-optimal construction for admissible channel and delay parameters over a field size that scales only quadratically with the delay.
Elad Domanovitz, Uri Erez (2019)
The maximal correlation coefficient is a well-established generalization of the Pearson correlation coefficient for measuring non-linear dependence between random variables. It is appealing from a theoretical standpoint, satisfying Rényi's axioms for a measure of dependence. It is also attractive from a computational point of view due to the celebrated alternating conditional expectation algorithm, which allows its empirical version to be computed directly from observed data. Nevertheless, from the outset, it was recognized that the maximal correlation coefficient suffers from some fundamental deficiencies, limiting its usefulness as an indicator of estimation quality. Another well-known measure of dependence is the correlation ratio, which also suffers from some drawbacks. Specifically, the maximal correlation coefficient equals one too easily, whereas the correlation ratio equals zero too easily. The present work recounts some attempts that have been made in the past to alter the definition of the maximal correlation coefficient in order to overcome its weaknesses and then proceeds to suggest a natural variant of the maximal correlation coefficient as well as a modified conditional expectation algorithm to compute it. The proposed dependence measure at the same time resolves the major weakness of the correlation ratio measure and may be viewed as a bridge between these two classical measures.
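The alternating conditional expectation idea referenced above can be sketched for discrete samples: iterate $f(x) \propto \mathbb{E}[g(Y)\mid X=x]$ and $g(y) \propto \mathbb{E}[f(X)\mid Y=y]$, normalizing to zero mean and unit variance, and report the empirical correlation of $f(X)$ and $g(Y)$. This is an illustrative sketch of the classical iteration only (not the paper's modified algorithm); the function name and the toy data are assumptions. It also illustrates the "equals one too easily" point: any deterministic relation, even a non-invertible one, yields maximal correlation 1.

```python
import numpy as np

def maximal_correlation(x, y, iters=200):
    """Empirical maximal correlation of two discrete samples via a basic
    alternating-conditional-expectation iteration (illustrative sketch)."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    wx, wy = np.bincount(xi), np.bincount(yi)       # empirical marginals
    g = np.random.default_rng(0).standard_normal(len(ys))
    for _ in range(iters):
        # f(x) = E[g(Y) | X = x], then normalize to zero mean, unit variance
        f = np.array([g[yi[xi == k]].mean() for k in range(len(xs))])
        f -= np.average(f, weights=wx)
        f /= np.sqrt(np.average(f ** 2, weights=wx))
        # g(y) = E[f(X) | Y = y], normalized the same way
        g = np.array([f[xi[yi == k]].mean() for k in range(len(ys))])
        g -= np.average(g, weights=wy)
        g /= np.sqrt(np.average(g ** 2, weights=wy))
    return float(np.mean(f[xi] * g[yi]))

x = np.array([0, 0, 1, 1, 2, 2])
y = x % 2                     # deterministic, non-invertible relation
print(round(maximal_correlation(x, y), 3))  # -> 1.0
```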
Elad Domanovitz, Uri Erez (2017)
Integer-forcing source coding has been proposed as a low-complexity method for compression of distributed correlated Gaussian sources. In this scheme, each encoder quantizes its observation using the same fine lattice and reduces the result modulo a coarse lattice. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. It has been observed that the method works very well for most but not all source covariance matrices. The present work quantifies the measure of bad covariance matrices by studying the probability that integer-forcing source coding fails as a function of the allocated rate, where the probability is with respect to a random orthonormal transformation that is applied to the sources prior to quantization. For the important case where the signals to be compressed correspond to the antenna inputs of relays in an i.i.d. Rayleigh fading environment, this orthonormal transformation can be viewed as being performed by nature. Hence, the results provide performance guarantees for distributed source coding via integer forcing in this scenario.
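The encode/decode structure described above can be illustrated with a one-dimensional toy stand-in for the nested lattices: the fine lattice is $\mathbb{Z}$, the coarse lattice is $q\mathbb{Z}$, and the decoder recovers integer combinations modulo $q$ before inverting. This is a heavily simplified sketch under assumed parameters (the sources, the modulus `q`, and the integer matrix `A` are all illustrative); it conveys only the mechanics, not the lattice-coding performance.

```python
import numpy as np

def encode(v, q):
    """Quantize to the fine lattice Z, then reduce modulo the coarse lattice qZ."""
    return int(np.round(v)) % q

x = np.array([7.2, 6.9])                 # two highly correlated observations
q = 16                                   # coarse-lattice modulus (toy value)
enc = np.array([encode(v, q) for v in x])

# The decoder targets a full-rank set of integer combinations; because the
# sources are correlated, the difference combination has small magnitude.
A = np.array([[1, -1],
              [0, 1]])
combos = A.dot(enc) % q                  # decoder operates modulo q
# Map mod-q values back to small signed integers, then invert A to recover
# the individual quantized signals.
signed = np.where(combos > q // 2, combos - q, combos)
recovered = np.linalg.solve(A, signed)
print(recovered)  # -> [7. 7.]
```

Failure, in this toy picture, corresponds to a target combination falling outside the fundamental cell of the coarse lattice, which is the event whose probability the paper studies.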
Receiver diversity combining methods play a key role in combating the detrimental effects of fading in wireless communication and other applications. A novel diversity combining method is proposed where a universal, i.e., channel-independent, orthogonal dimension-reducing space-time transformation is applied prior to quantization of the signals. The scheme may be considered the counterpart of Alamouti modulation, and more generally of orthogonal space-time block codes.
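The classical Alamouti scheme that the abstract references for comparison can be sketched briefly: its 2x2 space-time structure orthogonalizes any channel vector, so simple linear combining yields a gain-only channel. This is a sketch of the classical transmit-side scheme only, not of the paper's receiver-side transformation; symbols and channel draw are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # flat-fading gains
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)               # two data symbols

# Alamouti structure over two time slots (noise omitted for clarity):
y1 = h[0] * s[0] + h[1] * s[1]
y2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0])

# Linear combining: the cross terms cancel, leaving a gain-only channel
# with coefficient |h1|^2 + |h2|^2 (full diversity order 2).
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * y1 + h[1] * np.conj(y2)) / g
s2_hat = (np.conj(h[1]) * y1 - h[0] * np.conj(y2)) / g
print(np.allclose([s1_hat, s2_hat], s))  # -> True
```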
The performance of integer-forcing equalization for communication over the compound multiple-input multiple-output channel is investigated. An upper bound on the resulting outage probability as a function of the gap to capacity has been derived previously, assuming a random precoding matrix drawn from the circular unitary ensemble is applied prior to transmission. In the present work a simple and explicit lower bound on the worst-case outage probability is derived for the case of a system with two transmit antennas and two or more receive antennas, leveraging the properties of the Jacobi ensemble. The derived lower bound is also extended to random space-time precoding, and may serve as a useful benchmark for assessing the relative merits of various algebraic space-time precoding schemes. We further show that the lower bound may be adapted to the case of a $1 \times N_t$ system. As an application of this, we derive closed-form bounds for the symmetric-rate capacity of the Rayleigh fading multiple-access channel where all terminals are equipped with a single antenna. Lastly, we demonstrate that integer-forcing equalization coupled with distributed space-time coding is able to approach these bounds.