
On Scaling Laws of Diversity Schemes in Decentralized Estimation

Posted by: Alex Leong
Publication date: 2010
Research field: Informatics Engineering
Paper language: English





This paper is concerned with decentralized estimation of a Gaussian source using multiple sensors. We consider a diversity scheme in which only the sensor with the best channel sends its measurement over a fading channel to a fusion center, using the analog amplify-and-forward technique. The fusion center reconstructs an MMSE estimate of the source based on the received measurements. A distributed version of the diversity scheme, where sensors decide whether to transmit based only on their local channel information, is also considered. We derive asymptotic expressions for the expected distortion (of the MMSE estimate at the fusion center) of these schemes as the number of sensors becomes large. For comparison, asymptotic expressions for the expected distortion of a coherent multi-access scheme and an orthogonal access scheme are derived. For the diversity schemes, we also study the optimal power allocation for minimizing the expected distortion subject to average total power constraints. The effect of optimizing the probability of transmission on the expected distortion in the distributed scenario is also studied. It is seen that, as opposed to the coherent multi-access scheme and the orthogonal scheme (where the expected distortion decays as 1/M, M being the number of sensors), the expected distortion decays only as 1/ln(M) for the diversity schemes. This reduction in decay rate can be seen as a tradeoff between the simplicity of the diversity schemes and the strict synchronization and large bandwidth requirements of the coherent multi-access and orthogonal schemes, respectively.
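To give a feel for where the ln(M) scaling comes from, the following is a minimal Monte Carlo sketch under simplifying assumptions that are not taken from the paper (unit-variance source, measurement and channel noise, i.i.d. Rayleigh fading, a fixed transmit power P): for Rayleigh fading, the best of M channel power gains grows only like ln(M), so the expected distortion of the best-channel amplify-and-forward scheme improves only logarithmically in the number of sensors, whereas the coherent multi-access and orthogonal schemes combine contributions from all M sensors and improve linearly in M.

```python
import numpy as np

# Toy Monte Carlo sketch -- NOT the paper's exact model. Assumptions:
# unit-variance Gaussian source theta, sensor measurements x_i = theta + v_i
# with unit-variance noise, i.i.d. Exp(1) channel power gains g_i (Rayleigh
# fading), unit-variance receiver noise, and transmit power P for the single
# selected sensor. Only the sensor with the largest g_i amplifies and
# forwards; the fusion center forms the MMSE estimate of theta.

rng = np.random.default_rng(0)
P = 10.0           # assumed transmit power of the selected sensor
TRIALS = 20_000

def expected_distortion(M: int) -> float:
    """Monte Carlo estimate of E[D] for the best-channel diversity scheme."""
    g_max = rng.exponential(1.0, size=(TRIALS, M)).max(axis=1)
    # With amplification alpha = sqrt(P/2) (so the transmit power is P),
    # the received signal is y = alpha*h*(theta + v) + w and the conditional
    # MMSE distortion given g = |h|^2 is D(g) = (P*g/2 + 1) / (P*g + 1),
    # which approaches the single-measurement floor of 1/2 as g grows.
    return float(((P * g_max / 2 + 1) / (P * g_max + 1)).mean())

for M in (4, 16, 64, 256, 1024):
    gap = expected_distortion(M) - 0.5
    # E[max of M Exp(1)] is about ln(M) + 0.577, so the gap to the floor
    # shrinks roughly like 1/(2*P*ln(M)) in this toy model.
    print(f"M={M:5d}  E[D] - 0.5 = {gap:.5f}  1/(2P ln M) = {1/(2*P*np.log(M)):.5f}")
```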


Read also

The scaling laws of the achievable communication rates, and the corresponding upper bounds, for distributed reception in the presence of an interfering signal are investigated. The scheme consists of one transmitter communicating with a remote destination via two relays, which forward messages to the destination through reliable links of finite capacity. The relays receive the transmission along with some unknown interference. We focus on three common settings for distributed reception, for which the scaling laws of the capacity (the pre-log as the powers of the transmitter and the interference are taken to infinity) are completely characterized. It is shown that, in most cases, in order to overcome the interference, a definite amount of information about the interference needs to be forwarded to the destination along with the desired message. In one scenario it is shown that the cut-set upper bound is strictly loose. The results are derived using the cut-set bound along with a new bounding technique that relies on multi-letter expressions. Furthermore, lattices are found to be a useful communication technique in this setting, and are used to characterize the scaling laws of achievable rates.
We consider a cognitive network consisting of n random pairs of cognitive transmitters and receivers communicating simultaneously in the presence of multiple primary users. Of interest are how the maximum throughput achieved by the cognitive users scales with n, and how far these users must be from a primary user to guarantee a given primary outage. Two scenarios are considered for the network scaling law: (i) each cognitive transmitter uses constant power to communicate with a cognitive receiver at a bounded distance, and (ii) each cognitive transmitter scales its power according to the distance to a considered primary user, allowing the cognitive transmitter-receiver distances to grow. Using single-hop transmission, suitable for cognitive devices of an opportunistic nature, we show that in both scenarios, with a path-loss exponent larger than 2, the cognitive network throughput scales linearly with the number of cognitive users. We then explore the radius of a primary exclusive region that is void of cognitive transmitters, and obtain bounds on this radius for a given primary outage constraint. These bounds can help in the design of a primary network with exclusive regions, outside of which cognitive users may transmit freely. Our results show that opportunistic secondary spectrum access using single-hop transmission is promising.
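The notion of a primary exclusive region can be illustrated with a small Monte Carlo sketch; all numbers below (powers, path-loss exponent, outage threshold, node count) are illustrative assumptions rather than values from the paper. Cognitive transmitters are placed uniformly outside a candidate exclusive radius R0, the channel has Rayleigh fading and a path-loss exponent greater than 2, and the primary receiver's outage probability is estimated as R0 is varied.

```python
import numpy as np

# Illustrative sketch (all parameter values are assumptions, not from the
# paper): n cognitive transmitters are placed uniformly at random in the
# annulus R0 <= r <= R_MAX around a primary receiver at the origin. Each
# transmits with constant power P over a Rayleigh-faded path-loss channel
# with exponent ALPHA > 2. We estimate the primary outage probability as a
# function of the exclusive-region radius R0.

rng = np.random.default_rng(1)
n, P, ALPHA, R_MAX = 400, 1.0, 3.0, 100.0
NOISE, SIR_MIN = 1e-3, 10.0          # assumed noise power and required SINR
PRIMARY_RX_POWER = 1.0               # assumed received power of the primary's own link
TRIALS = 5000

def outage_probability(R0: float) -> float:
    """Estimate P[primary SINR < SIR_MIN] for exclusive-region radius R0."""
    u = rng.uniform(size=(TRIALS, n))
    r = np.sqrt(R0**2 + u * (R_MAX**2 - R0**2))      # area-uniform radii in the annulus
    fading = rng.exponential(1.0, size=(TRIALS, n))  # Rayleigh power gains
    interference = (P * fading * r**(-ALPHA)).sum(axis=1)
    sinr = PRIMARY_RX_POWER / (NOISE + interference)
    return float((sinr < SIR_MIN).mean())

for R0 in (1.0, 2.0, 5.0, 10.0, 20.0):
    print(f"R0 = {R0:5.1f}  estimated primary outage = {outage_probability(R0):.4f}")
```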
Jithin Ravi, Tobias Koch (2020)
This paper considers a Gaussian multiple-access channel with random user activity where the total number of users $\ell_n$ and the average number of active users $k_n$ may grow with the blocklength $n$. For this channel, it studies the maximum number of bits that can be transmitted reliably per unit-energy as a function of $\ell_n$ and $k_n$. When all users are active with probability one, i.e., $\ell_n = k_n$, it is demonstrated that if $k_n$ is of an order strictly below $n/\log n$, then each user can achieve the single-user capacity per unit-energy $(\log e)/N_0$ (where $N_0/2$ is the noise power) by using an orthogonal-access scheme. In contrast, if $k_n$ is of an order strictly above $n/\log n$, then the capacity per unit-energy is zero. Consequently, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate per unit-energy is infeasible. It is further demonstrated that orthogonal-access schemes in combination with orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal. When the user activity is random, i.e., when $\ell_n$ and $k_n$ are different, it is demonstrated that if $k_n \log \ell_n$ is sublinear in $n$, then each user can achieve the single-user capacity per unit-energy $(\log e)/N_0$. Conversely, if $k_n \log \ell_n$ is superlinear in $n$, then the capacity per unit-energy is zero. Consequently, there is again a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible, and it depends on the asymptotic behaviours of both $\ell_n$ and $k_n$. It is further demonstrated that orthogonal-access schemes, which are optimal when $\ell_n = k_n$, can be strictly suboptimal.
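For readability, the two sharp thresholds described above can be written out as equations, reading "order strictly below/above" and "sublinear/superlinear" as little-o and little-omega; the symbol $\dot{C}$ for the capacity per unit-energy is introduced here only for illustration and is not taken from the paper.

```latex
% Sharp-threshold behaviour of the capacity per unit-energy
% (\dot{C} is an illustrative symbol, not the paper's notation).
\[
\text{All users active } (\ell_n = k_n):\qquad
\dot{C} \;=\;
\begin{cases}
  \dfrac{\log e}{N_0}, & k_n = o\!\left(n/\log n\right),\\[1.5ex]
  0,                   & k_n = \omega\!\left(n/\log n\right),
\end{cases}
\]
\[
\text{Random user activity:}\qquad
\dot{C} \;=\;
\begin{cases}
  \dfrac{\log e}{N_0}, & k_n \log \ell_n = o(n),\\[1.5ex]
  0,                   & k_n \log \ell_n = \omega(n).
\end{cases}
\]
```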
We address the optimization of sum-rate performance in multicell interference-limited single-hop networks where access points are allowed to cooperate through joint resource allocation. The resource allocation policies considered here combine power control and user scheduling. Although very promising from a conceptual point of view, optimizing the sum of per-link rates hinges, in principle, on difficult issues such as computational complexity and the requirement for heavy receiver-to-transmitter channel information feedback across all network cells. In this paper, we show that distributed algorithms are in fact obtainable in the asymptotic regime where the number of users per cell is allowed to grow large. Additionally, using extreme value theory, we provide scaling laws for upper and lower bounds on the network capacity (the sum of single-user rates over all cells), corresponding to zero-interference and worst-case interference scenarios. We show that the scaling is dominated either by path-loss statistics or by small-scale fading, depending on the regime and the user location scenario. We also show that the upper and lower rate bounds behave identically in the asymptotic limit. This remarkable result suggests not only that distributed resource allocation is practically possible, but also that the impact of multicell interference on the capacity (in terms of scaling) vanishes asymptotically.
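The extreme-value-theory mechanism referenced above can be illustrated with a short simulation; this is a sketch under assumed i.i.d. Rayleigh fading and a fixed average SNR, not the paper's full multicell model. The largest of n exponential channel power gains concentrates around ln(n) plus Euler's constant, which is what drives the scheduling (multiuser diversity) gain in each cell.

```python
import numpy as np

# Extreme-value-theory illustration (a sketch under assumed i.i.d. Exp(1)
# channel power gains and a fixed average SNR, not the paper's multicell
# model): the best of n users' gains concentrates around ln(n) + gamma,
# so scheduling the strongest user yields a rate growing like log2(ln n).

rng = np.random.default_rng(2)
EULER_GAMMA = 0.5772156649
SNR = 10.0            # assumed average per-user SNR
TRIALS = 10_000

for n in (10, 100, 1000):
    best_gain = rng.exponential(1.0, size=(TRIALS, n)).max(axis=1)
    scheduled_rate = np.log2(1.0 + SNR * best_gain).mean()
    print(f"n={n:5d}  E[max gain]={best_gain.mean():.3f}  "
          f"ln(n)+gamma={np.log(n) + EULER_GAMMA:.3f}  "
          f"E[rate]={scheduled_rate:.3f} bit/s/Hz")
```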
Private information retrieval (PIR) has been reformulated from an information-theoretic perspective in recent years. The two most important parameters considered for a PIR scheme in a distributed storage system are the storage overhead and the PIR rate. The complexity of the computations performed by the servers for the various tasks of the distributed storage system is an important parameter that has not received enough attention in PIR schemes. We therefore consider a third parameter, the access complexity of a PIR scheme, which characterizes the total amount of data that must be accessed by the servers to respond to the queries of a PIR scheme. We use a general covering-codes approach as the main tool for improving the access complexity. For a given amount of storage overhead, the ultimate objective is to characterize the tradeoff between the rate and the access complexity of a PIR scheme. This covering-codes approach raises a new and interesting coding problem of generalized coverings, analogous to the well-known generalized Hamming weights.
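To make "access complexity" concrete, here is the classical two-server XOR-based PIR scheme (a textbook construction, not the covering-codes scheme of this paper): the user learns one database bit without revealing its index to either server, but each server must access roughly half of the database bits to compute its answer.

```python
import secrets

# Classical two-server XOR PIR (a textbook scheme, not this paper's
# construction), shown to make "access complexity" concrete: each server
# XORs the bits indexed by its query set, so it accesses roughly half of
# the database to answer a single retrieval.

def make_queries(db_len: int, wanted: int):
    """Query sets for the two servers; neither set alone reveals `wanted`."""
    s1 = {j for j in range(db_len) if secrets.randbits(1)}   # uniformly random subset
    s2 = s1 ^ {wanted}                                       # symmetric difference with {wanted}
    return s1, s2

def server_answer(db: list[int], query: set[int]) -> int:
    """The server accesses len(query) bits and returns their XOR."""
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

db = [secrets.randbits(1) for _ in range(16)]
wanted = 5
q1, q2 = make_queries(len(db), wanted)
bit = server_answer(db, q1) ^ server_answer(db, q2)          # equals db[wanted]
accessed = len(q1) + len(q2)                                 # total access complexity
print(f"recovered={bit}, truth={db[wanted]}, bits accessed={accessed} of {len(db)}")
```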