
Complexity of Networks (reprise)

Posted by Russell K. Standish
Publication date: 2009
Research field: Informatics Engineering
Paper language: English

Network or graph structures are ubiquitous in the study of complex systems. Often, we are interested in complexity trends of these systems as they evolve under some dynamic. An example might be looking at the complexity of a food web as species enter an ecosystem via migration or speciation, and leave via extinction. In a previous paper, a complexity measure of networks was proposed based on the "complexity is information content" paradigm. To apply this paradigm to any object, one must fix two things: a representation language, in which strings of symbols from some alphabet describe, or stand for, the objects being considered; and a means of determining when two such descriptions refer to the same object. With these two things set, the information content of an object can in principle be computed from the number of equivalent descriptions describing a particular object. The previously proposed representation language had the deficiency that the fully connected and empty networks were the most complex for a given number of nodes. A variation of this measure, called zcomplexity, applied a compression algorithm to the resulting bitstring representation to solve this problem. Unfortunately, zcomplexity proved too computationally expensive to be practical. In this paper, I propose a new representation language that encodes the number of links along with the number of nodes and a representation of the linklist. This, like zcomplexity, exhibits minimal complexity for fully connected and empty networks, but is as tractable as the original measure. ...
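To make the paradigm concrete, here is a minimal Python sketch of an information-content style measure for small undirected graphs, in the spirit of the abstract: a description length covering the node count, the link count, and the linklist, minus log2 of the number of distinct node labelings describing the same graph (n!/|Aut(G)|). The bit budgets and the choice to encode the complement of a dense linklist are illustrative assumptions, not the paper's exact representation language.

```python
import math
from itertools import permutations

def description_length(n, links):
    # Hypothetical encoding: node count, link count, then each link as a
    # pair of node indices. Encoding whichever of the linklist or its
    # complement is shorter gives both the empty and the fully connected
    # graph a short description (an illustrative choice echoing the
    # property claimed in the abstract).
    node_bits = max(1, math.ceil(math.log2(n)))
    max_links = n * (n - 1) // 2
    stored = min(len(links), max_links - len(links))
    return (math.ceil(math.log2(n + 1))
            + math.ceil(math.log2(max_links + 1))
            + 2 * node_bits * stored)

def equivalent_descriptions(n, links):
    # A description labels the nodes; two labelings describe the same graph
    # iff they differ by an automorphism, so the number of distinct
    # equivalent descriptions is n! / |Aut(G)|. Brute force: small n only.
    edges = {frozenset(e) for e in links}
    aut = sum(1 for p in permutations(range(n))
              if {frozenset((p[a], p[b])) for a, b in links} == edges)
    return math.factorial(n) // aut

def complexity(n, links):
    # Information content: description length minus the bits accounted for
    # by equivalent descriptions of the same object.
    return description_length(n, links) - math.log2(equivalent_descriptions(n, links))

full = [(a, b) for a in range(5) for b in range(a + 1, 5)]
path = [(i, i + 1) for i in range(4)]
for name, links in [("empty", []), ("full", full), ("path", path)]:
    print(name, round(complexity(5, links), 2))
```

On five nodes this prints minimal, equal values for the empty and fully connected graphs and a larger value for the path, matching the qualitative behavior the abstract claims for the new measure.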



Read also

162 - G. Maierbacher, J. Barros, 2008
We consider the distributed source coding problem in which correlated data picked up by scattered sensors has to be encoded separately and transmitted to a common receiver, subject to a rate-distortion constraint. Although near-to-optimal solutions based on Turbo and LDPC codes exist for this problem, in most cases the proposed techniques do not scale to networks of hundreds of sensors. We present a scalable solution based on the following key elements: (a) distortion-optimized index assignments for low-complexity distributed quantization, (b) source-optimized hierarchical clustering based on the Kullback-Leibler distance, and (c) sum-product decoding on specific factor graphs exploiting the correlation of the data.
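Of the three ingredients, the clustering step (b) admits a compact sketch. The following Python fragment clusters sensors hierarchically by a symmetrized Kullback-Leibler distance between their estimated source distributions; the toy pmfs, the symmetrization, and the average-linkage rule are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.special import rel_entr

rng = np.random.default_rng(0)

# Toy stand-in: one categorical pmf per sensor (8 sensors, 16-bin alphabet).
pmfs = rng.dirichlet(np.ones(16), size=8)

def sym_kl(p, q):
    # Symmetrized KL divergence D(p||q) + D(q||p); KL itself is asymmetric,
    # and linkage() expects a symmetric dissimilarity.
    return rel_entr(p, q).sum() + rel_entr(q, p).sum()

n = len(pmfs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = sym_kl(pmfs[i], pmfs[j])

# Agglomerative clustering on the condensed distance matrix.
tree = linkage(squareform(dist), method="average")
print(fcluster(tree, t=3, criterion="maxclust"))  # cluster label per sensor
```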
We propose a low-complexity antenna selection algorithm for low target rate users in cloud radio access networks. The algorithm consists of two phases: in the first phase, each remote radio head (RRH) determines whether to be included in a candidate set by using a predefined selection threshold. In the second phase, RRHs are randomly selected within the candidate set made in the first phase. To analyze the performance of the proposed algorithm, we model RRH and user locations by a homogeneous Poisson point process, whereby the signal-to-interference ratio (SIR) complementary cumulative distribution function is derived. By approximating the derived expression, an approximate optimum selection threshold that maximizes the SIR coverage probability is obtained. Using the obtained threshold, we characterize the performance of the algorithm in an asymptotic regime where the RRH density goes to infinity. The obtained threshold is then modified depending on various algorithm options. A distinguishing feature of the proposed algorithm is that its complexity remains constant independent of the RRH density, so that a user is able to connect to the network without heavy computation at baseband units.
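The two-phase structure lends itself to a short sketch. Everything named below (the channel-gain model, the threshold value, the number k of serving RRHs) is an illustrative assumption; in the paper the threshold comes from the SIR coverage analysis, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

n_rrh = 1000
gains = rng.exponential(scale=1.0, size=n_rrh)  # e.g. Rayleigh fading power

threshold = 2.0  # predefined selection threshold (derived analytically in the paper)
k = 4            # number of RRHs to serve the user

# Phase 1: each RRH joins the candidate set via a purely local comparison.
candidates = np.flatnonzero(gains > threshold)

# Phase 2: pick k RRHs uniformly at random from the candidate set; no
# per-user optimization at the baseband units is needed.
selected = rng.choice(candidates, size=min(k, candidates.size), replace=False)
print(selected)
```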
This work considers the use of total variation (TV) minimization in the recovery of a given gradient-sparse vector from Gaussian linear measurements. Recent studies have shown that there exists a sharp phase-transition behavior in TV minimization in asymptotic regimes. The phase-transition curve specifies the boundary of success and failure of TV minimization for a large number of measurements. It is a challenging task to obtain a theoretical bound that reflects this curve. In this work, we present a novel upper bound that suitably approximates this curve and is asymptotically sharp. Numerical results show that our bound is closer to the empirical TV phase-transition curve than the previously known bound obtained by Kabanava.
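For concreteness, here is a minimal sketch of the TV recovery problem itself (not of the phase-transition bound), written with the cvxpy modeling library: recover a piecewise-constant signal from Gaussian measurements by minimizing the l1 norm of its first differences subject to the measurement constraint. The sizes and signal are illustrative; sweeping the number of measurements m around the critical value exposes the success/failure transition the abstract refers to.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

# Gradient-sparse ground truth: a piecewise-constant signal of length n.
n, m = 100, 60
x_true = np.zeros(n)
x_true[20:45] = 1.0
x_true[70:90] = -2.0

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
y = A @ x_true

# First-difference operator D, so D @ x_true is sparse (4 jumps).
D = np.diff(np.eye(n), axis=0)

# TV minimization: min ||D x||_1 subject to A x = y.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(D @ x)), [A @ x == y]).solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```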
Private information retrieval (PIR) has been reformulated from an information-theoretic perspective in recent years. The two most important parameters considered for a PIR scheme in a distributed storage system are the storage overhead and the PIR rate. The complexity of the computations done by the servers for the various tasks of the distributed storage system is an important parameter in such systems which has not received enough attention in PIR schemes. As a consequence, we take into consideration a third parameter, the access complexity of a PIR scheme, which characterizes the total amount of data to be accessed by the servers for responding to the queries throughout a PIR scheme. We use a general covering codes approach as the main tool for improving the access complexity. With a given amount of storage overhead, the ultimate objective is to characterize the tradeoff between the rate and access complexity of a PIR scheme. This covering codes approach raises a new and interesting coding problem of generalized coverings, similar to the well-known generalized Hamming weights.
160 - Yin Teng, Jiayu Li, Lin Liu, 2020
In this paper, we present a novel scenario for directional modulation (DM) networks with a full-duplex (FD) malicious attacker (Mallory), where Mallory can eavesdrop on the confidential message from Alice to Bob and simultaneously interfere with Bob by sending a jamming signal. Considering that the jamming plus noise at Bob is colored, an enhanced receive beamforming (RBF) scheme, whitening-filter-based maximum ratio combining (MRC), referred to as WFMRC, is proposed. Subsequently, two RBFs, maximizing the secrecy rate (Max-SR) and minimum mean square error (MMSE), are presented and shown to have the same performance as WFMRC. To reduce the computational complexity of conventional MMSE, a low-complexity MMSE is also proposed. Eventually, to completely remove the jamming signal from Mallory and transform the residual interference plus noise into a white one, a new RBF, null-space projection (NSP) based maximization of the WF receive power, called NSP-based Max-WFRP, is also proposed. From simulation results, we find that the proposed Max-SR, WFMRC, and low-complexity MMSE have the same SR performance as conventional MMSE and achieve the best performance, while the proposed NSP-based Max-WFRP performs better than MRC in the medium and high signal-to-noise ratio regions. Due to its low complexity, the proposed low-complexity MMSE is very attractive. More importantly, the proposed methods are robust to changes in malicious jamming power compared to conventional MRC.
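The idea behind WFMRC can be sketched compactly: whiten the colored jamming-plus-noise with a Cholesky factor of its covariance R, then apply MRC in the whitened domain, which collapses to the beamformer w = R^{-1} h, the same direction the MMSE solution takes up to scaling, consistent with the equal performance reported above. All dimensions and channel draws below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_rx = 8  # receive antennas at Bob (illustrative)
h = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)

# Colored jamming-plus-noise covariance: rank-2 jamming plus white noise.
G = (rng.standard_normal((n_rx, 2)) + 1j * rng.standard_normal((n_rx, 2))) / np.sqrt(2)
R = G @ G.conj().T + 0.1 * np.eye(n_rx)

# Whitening-filter MRC: R = L L^H, whiten the channel, then back-substitute,
# which yields w = R^{-1} h.
L = np.linalg.cholesky(R)
h_w = np.linalg.solve(L, h)            # whitened channel L^{-1} h
w = np.linalg.solve(L.conj().T, h_w)   # w = L^{-H} L^{-1} h = R^{-1} h

# Output SINR |w^H h|^2 / (w^H R w) simplifies to h^H R^{-1} h.
print("output SINR:", np.real(h.conj() @ w))
```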