Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (as in CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics: intuitionistic, classical or linear (Girard). The semantic functioning of a network is its ability to express theories in such a language in order to answer questions at the output about the input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy (P. Baudot and D.B., 2015). They generalize the measures found by Carnap and Bar-Hillel (1952). Amazingly, the above semantic structures are classified by geometric fibrant objects in a Quillen closed model category; they therefore give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and the fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
Interference Alignment is a new solution to overcome the problem of interference in multiuser wireless communication systems. Recently, the Compute-and-Forward (CF) transform has been proposed to approximate the capacity of the K-user Gaussian symmetric interference channel and to perform Interference Alignment in practice in wireless networks. However, this technique shows a random behavior in the achievable sum-rate, especially at high SNR. In this work, the origin of this random behavior is analyzed, and a novel precoding technique based on the Golden Ratio is proposed to scale down the fading experienced by the achievable sum-rate at high SNR.
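The abstract does not spell out the precoding, so as a purely illustrative sketch (hypothetical code, not the authors') the snippet below evaluates the Nazer-Gastpar computation rate $R(\mathbf{h},\mathbf{a})=\frac{1}{2}\log^{+}\!\big((\|\mathbf{a}\|^{2}-\frac{P(\mathbf{h}^{T}\mathbf{a})^{2}}{1+P\|\mathbf{h}\|^{2}})^{-1}\big)$ at high SNR for a few channel ratios, maximized by brute force over small integer coefficient vectors; the coefficient range and the chosen channel values are assumptions of the sketch. It only illustrates the sensitivity of the achievable rate to the channel realization that the work above sets out to tame.

```python
# Hypothetical illustration, not the authors' code: the Nazer-Gastpar computation
# rate for a single relay, maximized over small integer coefficient vectors,
# fluctuates strongly with the channel coefficients at high SNR.
import itertools
import numpy as np

def computation_rate(h, a, P):
    """Computation rate for channel vector h, integer coefficients a, power P."""
    h = np.asarray(h, dtype=float)
    a = np.asarray(a, dtype=float)
    q = np.dot(a, a) - P * np.dot(h, a) ** 2 / (1.0 + P * np.dot(h, h))
    # q > 0 for any nonzero a by Cauchy-Schwarz; max(...) clips negative rates to zero.
    return max(0.0, 0.5 * np.log2(1.0 / q))

def best_rate(h, P, amax=8):
    """Brute-force search over nonzero integer vectors with entries in [-amax, amax]."""
    return max(computation_rate(h, a, P)
               for a in itertools.product(range(-amax, amax + 1), repeat=len(h))
               if any(a))

P = 10 ** (40 / 10)  # 40 dB SNR
for h2 in (1.0, 1.5, 1.618, 2.0, 2.414):
    print(f"h = [1, {h2:5.3f}]  best computation rate = {best_rate([1.0, h2], P):.2f} bits/use")
```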
We propose a new coding scheme using only one lattice that achieves the $\frac{1}{2}\log(1+\mathrm{SNR})$ capacity of the additive white Gaussian noise (AWGN) channel with lattice decoding, when the signal-to-noise ratio $\mathrm{SNR}>e-1$. The scheme applies a discrete Gaussian distribution over an AWGN-good lattice, but otherwise does not require a shaping lattice or dither. Thus, it significantly simplifies the default lattice coding scheme of Erez and Zamir which involves a quantization-good lattice as well as an AWGN-good lattice. Using the flatness factor, we show that the error probability of the proposed scheme under minimum mean-square error (MMSE) lattice decoding is almost the same as that of Erez and Zamir, for any rate up to the AWGN channel capacity. We introduce the notion of good constellations, which carry almost the same mutual information as that of continuous Gaussian inputs. We also address the implementation of Gaussian shaping for the proposed lattice Gaussian coding scheme.
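The codebook here is drawn from a discrete Gaussian over a lattice rather than shaped by a second lattice. As a minimal sketch of that ingredient (ours, not the paper's construction, using coordinate-wise sampling over $\mathbb{Z}^n$ in place of an AWGN-good lattice), the snippet below samples the one-dimensional discrete Gaussian $D_{\mathbb{Z},\sigma,c}$ by truncating and normalizing its support; the truncation width is an assumption of the sketch.

```python
# Our sketch, not the paper's construction: exact sampling from the discrete
# Gaussian D_{Z,sigma,c}, where P(k) is proportional to exp(-(k - c)^2 / (2 sigma^2)),
# by truncating the support to c +/- tail*sigma and normalizing.
import numpy as np

rng = np.random.default_rng(0)

def sample_discrete_gaussian(sigma, c=0.0, tail=12.0, size=1):
    """Draw samples from the discrete Gaussian over Z centred at c with parameter sigma."""
    support = np.arange(int(np.floor(c - tail * sigma)), int(np.ceil(c + tail * sigma)) + 1)
    weights = np.exp(-(support - c) ** 2 / (2.0 * sigma ** 2))
    return rng.choice(support, size=size, p=weights / weights.sum())

# Coordinate-wise sampling over Z^n stands in here for an AWGN-good lattice;
# the empirical power of the codeword is close to sigma^2, as for a continuous Gaussian.
sigma, n = 8.0, 10000
x = sample_discrete_gaussian(sigma, size=n).astype(float)
print("empirical power:", np.mean(x ** 2), " vs sigma^2 =", sigma ** 2)
```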
We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
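Since the flatness factor is put forward here as the design criterion for secrecy-good lattices, a small numerical example may help. For a lattice $\Lambda$ of volume $V(\Lambda)$ in $\mathbb{R}^n$, Poisson summation gives $\epsilon_\Lambda(\sigma)=\frac{V(\Lambda)}{(\sqrt{2\pi}\sigma)^n}\,\Theta_\Lambda\!\left(\frac{1}{2\pi\sigma^2}\right)-1$ with $\Theta_\Lambda(\tau)=\sum_{\lambda\in\Lambda}e^{-\pi\tau\|\lambda\|^2}$. The Python sketch below (ours, not the authors' code) evaluates this for the toy lattice $t\mathbb{Z}^n$ with a truncated theta series, showing how quickly the factor decays once $\sigma$ exceeds the lattice spacing.

```python
# Our numerical sketch, not the authors' code: flatness factor of the toy lattice
# t*Z^n, evaluated from a truncated theta series via
#   eps_Lambda(sigma) = V(Lambda) / (sqrt(2*pi)*sigma)^n * Theta_Lambda(1/(2*pi*sigma^2)) - 1.
import numpy as np

def flatness_factor_tZn(t, sigma, n, kmax=200):
    """Flatness factor of Lambda = t*Z^n; the 1-D theta series is truncated at |k| <= kmax."""
    tau = 1.0 / (2.0 * np.pi * sigma ** 2)
    k = np.arange(-kmax, kmax + 1)
    theta_1d = np.sum(np.exp(-np.pi * tau * (t * k) ** 2))
    return (t / (np.sqrt(2.0 * np.pi) * sigma)) ** n * theta_1d ** n - 1.0

for sigma in (0.3, 0.5, 0.8, 1.2):
    print(f"sigma = {sigma:.1f}:  eps = {flatness_factor_tZn(1.0, sigma, n=8):.3e}")
```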
In this paper, we address the design of high spectral-efficiency Barnes-Wall (BW) lattice codes which are amenable to low-complexity decoding in additive white Gaussian noise (AWGN) channels. We propose a new method of constructing complex BW lattice codes from linear codes over polynomial rings, and show that the proposed construction provides an explicit method of bit-labeling complex BW lattice codes. To decode the code, we adapt the low-complexity sequential BW lattice decoder (SBWD) recently proposed by Micciancio and Nicolosi. First, we study the error performance of SBWD in decoding the infinite lattice, wherein we analyze the noise statistics in the algorithm, and propose a new upper bound on its error performance. We show that the SBWD is powerful in making correct decisions well beyond the packing radius. Subsequently, we use the SBWD to decode lattice codes through a novel noise-trimming technique. This is the first work that showcases the error performance of SBWD in decoding BW lattice codes of large block lengths.
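The construction above goes through linear codes over polynomial rings, which we do not reproduce here. Purely as background (not the authors' method), complex Barnes-Wall lattices also admit the classical recursion $BW_1=\mathbb{Z}[i]$, $BW_{2m}=\{(u,\,u+v): u\in BW_m,\ v\in(1+i)BW_m\}$; the short Python sketch below turns this recursion into a generator matrix over the Gaussian integers. The bit-labeling and the SBWD itself are outside the scope of this sketch.

```python
# Background sketch (not the paper's polynomial-ring construction): the classical
# recursion BW_{2m} = { (u, u + v) : u in BW_m, v in (1+i) BW_m } over Z[i],
# written as a generator-matrix recursion  G_{2m} = [[G_m, G_m], [0, (1+i) G_m]].
import numpy as np

PHI = 1 + 1j  # the ramified prime of Z[i]

def bw_generator(m):
    """Generator matrix (rows = generators over Z[i]) of the complex BW lattice of dimension 2^m."""
    G = np.array([[1.0 + 0.0j]])
    for _ in range(m):
        zero = np.zeros_like(G)
        G = np.block([[G, G], [zero, PHI * G]])
    return G

G = bw_generator(3)                      # complex BW lattice of dimension 8
print("dimension:", G.shape[0], " |det(G)| =", abs(np.linalg.det(G)))
# A lattice point is any Z[i]-combination of the rows, e.g.:
coeffs = np.array([1, 0, -1, 1j, 0, 2, 0, 1 - 1j])
print("example lattice point:", coeffs @ G)
```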
In a recent work, Nazer and Gastpar proposed the Compute-and-Forward strategy as a physical-layer network coding scheme. They described a code structure based on nested lattices whose algebraic structure makes the scheme reliable and efficient. In this work, we consider the implementation of their scheme for real Gaussian channels and one-dimensional lattices. We relate the maximization of the transmission rate to the lattice shortest vector problem. We make the maximum likelihood criterion explicit in this case and show that it can be implemented using an Inhomogeneous Diophantine Approximation algorithm.
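The reduction to the shortest vector problem mentioned in this abstract can be stated concretely in the standard real-valued CF setting (a textbook reformulation, not necessarily in the paper's notation): for a channel vector $\mathbf{h}\in\mathbb{R}^K$ and transmit power $P$, the computation rate of an integer coefficient vector $\mathbf{a}\in\mathbb{Z}^K$ is
\[
R(\mathbf{h},\mathbf{a})=\frac{1}{2}\log^{+}\!\left(\left(\|\mathbf{a}\|^{2}-\frac{P(\mathbf{h}^{T}\mathbf{a})^{2}}{1+P\|\mathbf{h}\|^{2}}\right)^{-1}\right)
=\frac{1}{2}\log^{+}\!\left(\frac{1}{\mathbf{a}^{T}\mathbf{G}\mathbf{a}}\right),
\qquad
\mathbf{G}=\mathbf{I}-\frac{P}{1+P\|\mathbf{h}\|^{2}}\,\mathbf{h}\mathbf{h}^{T},
\]
so maximizing $R$ over nonzero $\mathbf{a}\in\mathbb{Z}^{K}$ amounts to finding a shortest nonzero vector of the lattice with Gram matrix $\mathbf{G}$ (equivalently, of $\mathbf{L}\mathbb{Z}^{K}$ for any $\mathbf{L}$ with $\mathbf{L}^{T}\mathbf{L}=\mathbf{G}$).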
Shannon gave a lower bound in 1959 on the binary rate of spherical codes of given minimum Euclidean distance $\rho$. Using nonconstructive codes over a finite alphabet, we give a lower bound that is weaker but very close for small values of $\rho$. The construction is based on the Yaglom map combined with some finite sphere packings obtained from nonconstructive codes for the Euclidean metric. Concatenating geometric codes meeting the TVZ bound with a Lee metric BCH code over $GF(p)$, we obtain spherical codes that are polynomial-time constructible. Their parameters outperform those obtained by Lachaud and Stern in 1994. At very high rates they are above 98 per cent of the Shannon bound.