The problem of data-driven joint design of the transmitted waveform and detector in a radar system is addressed in this paper. We propose two novel learning-based approaches to waveform and detector design based on end-to-end training of the radar system. The first approach alternates supervised training of the detector for a fixed waveform with reinforcement learning of the transmitter for a fixed detector. In the second approach, the transmitter and detector are trained simultaneously. Various operational waveform constraints, such as the peak-to-average-power ratio (PAR) and spectral compatibility, are incorporated into the design. Unlike traditional radar design methods, which rely on rigid mathematical models of limited applicability, the proposed learning-based design can be robustified by training the detector with synthetic data generated from multiple statistical models of the environment. Theoretical considerations and numerical results show that the proposed methods adapt the transmitted waveform to environmental conditions while satisfying design constraints.
An end-to-end learning approach is proposed for the joint design of the transmitted waveform and detector in a radar system. Detector and transmitted waveform are trained alternately: for a fixed transmitted waveform, the detector is trained using supervised learning so as to approximate the Neyman-Pearson detector; and for a fixed detector, the transmitted waveform is trained using reinforcement learning based on feedback from the receiver. No prior knowledge is assumed about the target and clutter models. Both transmitter and receiver are implemented as feedforward neural networks. Numerical results show that the proposed end-to-end learning approach achieves more robust radar performance in clutter and colored noise of arbitrary probability density functions than conventional methods, and successfully adapts the transmitted waveform to environmental conditions.
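The alternating procedure described above can be sketched as follows. This is a deliberately simplified toy model, not the paper's implementation: a single range cell with white Gaussian noise stands in for clutter, a linear-logistic detector stands in for the feedforward networks, and all function names and parameter values are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # number of fast-time samples in the waveform

def receive(w, target_present, batch=1):
    """Toy range-cell model: echo = waveform (if target present) + white noise."""
    echo = w if target_present else np.zeros_like(w)
    return echo + rng.normal(0.0, 1.0, size=(batch, w.size))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_detector(w, theta, steps=300, lr=0.1, batch=64):
    """Supervised phase: cross-entropy training of a linear-logistic detector,
    whose output approximates the posterior probability of a target."""
    for _ in range(steps):
        labels = rng.integers(0, 2, batch)
        Y = np.vstack([receive(w, t) for t in labels])
        p = sigmoid(Y @ theta)
        theta -= lr * (Y.T @ (p - labels)) / batch
    return theta

def train_transmitter(w, theta, steps=200, lr=0.05, sigma=0.1):
    """RL phase: REINFORCE-style random perturbations of the waveform,
    rewarded by the detector's score gap between the two hypotheses."""
    for _ in range(steps):
        eps = rng.normal(0.0, sigma, w.size)
        w_try = w + eps
        w_try *= np.sqrt(w.size) / np.linalg.norm(w_try)  # energy constraint
        reward = (sigmoid(receive(w_try, 1, 32) @ theta).mean()
                  - sigmoid(receive(w_try, 0, 32) @ theta).mean())
        w = w + lr * reward * eps
        w *= np.sqrt(w.size) / np.linalg.norm(w)  # re-impose constraint
    return w

w = rng.normal(size=K)
w *= np.sqrt(K) / np.linalg.norm(w)
theta = np.zeros(K)
for _ in range(3):  # alternate the supervised and RL phases
    theta = train_detector(w, theta)
    w = train_transmitter(w, theta)
```

The renormalization after every transmitter update plays the role of the operational waveform constraints: the RL step only explores perturbations that remain on the constraint set.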
The problem of modulation classification for a multiple-antenna (MIMO) system employing orthogonal frequency division multiplexing (OFDM) is investigated under the assumption of unknown frequency-selective fading channels and signal-to-noise ratio (SNR). The classification problem is formulated as a Bayesian inference task, and solutions are proposed based on Gibbs sampling and mean field variational inference. The proposed methods rely on a selection of the prior distributions that adopts a latent Dirichlet model for the modulation type and on the Bayesian network formalism. The Gibbs sampling method converges to the optimal Bayesian solution and, using numerical results, its accuracy is seen to improve for small sample sizes when switching to the mean field variational inference technique after a number of iterations. The speed of convergence is shown to improve via annealing and random restarts. While most of the literature on modulation classification assumes that the channels are flat fading, that the number of receive antennas is no smaller than the number of transmit antennas, and that a large number of observed data symbols is available, the proposed methods perform well under more general conditions. Finally, the proposed Bayesian methods are demonstrated to improve over existing non-Bayesian approaches based on independent component analysis and over prior Bayesian methods based on the `superconstellation' method.
Localization of radio frequency sources over multipath channels is a difficult problem arising in applications such as outdoor or indoor geolocation. Common approaches that combine ad hoc methods for multipath mitigation with indirect localization relying on intermediary parameters, such as times of arrival, time differences of arrival, or received signal strengths, provide limited performance. This work models the localization of known waveforms over unknown multipath channels in a sparse framework and develops a direct approach in which multiple sources are localized jointly, directly from observations obtained at distributed sensors. The proposed approach exploits channel properties that make it possible to distinguish line-of-sight (LOS) from non-LOS signal paths. Theoretical guarantees are established for correct recovery of the source locations by atomic norm minimization. A second-order cone-based algorithm is developed to produce the optimal atomic decomposition, and it is shown to yield high-accuracy location estimates over complex scenes in which sources are subject to diverse multipath conditions, including lack of LOS.
A multistatic radar set-up is considered in which distributed receive antennas are connected to a Fusion Center (FC) via limited-capacity backhaul links. Similar to cloud radio access networks in communications, the receive antennas quantize the received baseband signal before transmitting it to the FC. The problem of maximizing the detection performance at the FC jointly over the code vector used by the transmitting antenna and over the statistics of the noise introduced by backhaul quantization is investigated. Specifically, adopting the information-theoretic criterion of the Bhattacharyya distance to evaluate the detection performance at the FC and information-theoretic measures of the quantization rate, the problem at hand is addressed via a Block Coordinate Descent (BCD) method coupled with Majorization-Minimization (MM). Numerical results demonstrate the advantages of the proposed joint optimization approach over more conventional solutions that perform separate optimization.
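As a concrete illustration of the Bhattacharyya-distance criterion, the sketch below evaluates the distance between the target-absent and target-present hypotheses in a toy Gaussian model (not the paper's signal model; the code vector, amplitude, and noise levels are hypothetical) and shows how additional backhaul quantization noise shrinks the distance, and hence the attainable detection performance at the FC.

```python
import numpy as np

def bhattacharyya(mu0, S0, mu1, S1):
    """Bhattacharyya distance between N(mu0, S0) and N(mu1, S1):
    (1/8) d^T S^-1 d + (1/2) ln( det S / sqrt(det S0 det S1) ),
    with S = (S0 + S1) / 2 and d = mu1 - mu0."""
    S = 0.5 * (S0 + S1)
    d = mu1 - mu0
    maha = 0.125 * d @ np.linalg.solve(S, d)
    _, logdet = np.linalg.slogdet(S)
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return maha + 0.5 * (logdet - 0.5 * (logdet0 + logdet1))

c = np.array([1.0, -1.0, 1.0, 1.0]) / 2.0  # unit-energy code vector
alpha = 1.5                                 # target echo amplitude
dists = []
for q in [0.0, 0.5, 2.0]:                   # quantization-noise variance
    S = np.eye(4) * (1.0 + q)               # receiver + quantization noise
    dists.append(bhattacharyya(np.zeros(4), S, alpha * c, S))
```

In this equal-covariance case the distance reduces to the Mahalanobis term, which decays as 1/(1+q): coarser quantization (larger q) directly erodes hypothesis separability, which is what the joint code/quantizer optimization in the paper trades off against the backhaul rate.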
A novel Bayesian modulation classification scheme is proposed for a single-antenna system over frequency-selective fading channels. The method is based on Gibbs sampling as applied to a latent Dirichlet Bayesian network (BN). The use of the proposed latent Dirichlet BN provides a systematic solution to the convergence problem encountered by the conventional Gibbs sampling approach for modulation classification. The method generalizes, and is shown to improve upon, the state of the art.
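The flavor of Gibbs sampling for modulation classification can be conveyed with a minimal flat-fading, single-tap toy (our own construction: it omits the latent Dirichlet prior and the frequency-selective channel of the paper, and the model-comparison score is a crude average log-likelihood rather than a proper posterior over classes). The sampler alternates between the unknown complex channel gain and the transmitted symbols for each candidate constellation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_score(y, const, s2=0.04, iters=200, burn=100):
    """Gibbs sampler over (channel gain h, symbols s) for one candidate
    constellation; returns an average log-likelihood score for y."""
    T = len(y)
    s = const[rng.integers(0, len(const), T)]
    score = 0.0
    for it in range(iters):
        # sample h | s, y : complex Gaussian posterior under a CN(0,1) prior
        prec = 1.0 + np.sum(np.abs(s) ** 2) / s2
        mean = np.sum(np.conj(s) * y) / s2 / prec
        h = mean + np.sqrt(0.5 / prec) * (rng.normal() + 1j * rng.normal())
        # sample s_t | h, y_t independently over the constellation points
        d2 = np.abs(y[:, None] - h * const[None, :]) ** 2
        p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / s2)
        p /= p.sum(axis=1, keepdims=True)
        idx = (p.cumsum(axis=1) > rng.random((T, 1))).argmax(axis=1)
        s = const[idx]
        if it >= burn:
            # symbol-marginalized log-likelihood of y given the sampled h
            a = -d2 / s2
            m = a.max(axis=1)
            score += np.sum(m + np.log(np.exp(a - m[:, None]).mean(axis=1)))
    return score / (iters - burn)

bpsk = np.array([1.0 + 0j, -1.0 + 0j])
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# synthetic QPSK burst observed through an unknown complex gain
h_true = 0.9 * np.exp(1j * np.pi / 6)
T = 100
sym = qpsk[rng.integers(0, 4, T)]
y = h_true * sym + np.sqrt(0.04 / 2) * (rng.normal(size=T)
                                        + 1j * rng.normal(size=T))

scores = {"BPSK": gibbs_score(y, bpsk), "QPSK": gibbs_score(y, qpsk)}
```

The BPSK model cannot place probability mass on all four observed clusters, so its score collapses; the paper's latent Dirichlet BN addresses the harder convergence issues that arise when candidate constellations are nested or nearly ambiguous.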
This work studies the throughput scaling laws of ad hoc wireless networks in the limit of a large number of nodes. A random connections model is assumed in which the channel connections between the nodes are drawn independently from a common distribution. Transmitting nodes are subject to an on-off strategy, and receiving nodes employ conventional single-user decoding. The following results are proven: 1) For a class of connection models with finite mean and variance, the throughput scaling is upper-bounded by $O(n^{1/3})$ for single-hop schemes, and $O(n^{1/2})$ for two-hop (and multihop) schemes. 2) The $\Theta(n^{1/2})$ throughput scaling is achievable for a specific connection model by a two-hop opportunistic relaying scheme, which employs full, but only local channel state information (CSI) at the receivers, and partial CSI at the transmitters. 3) By relaxing the constraints of finite mean and variance of the connection model, linear throughput scaling $\Theta(n)$ is achievable with Pareto-type fading models.
Relay networks having $n$ source-to-destination pairs and $m$ half-duplex relays, all operating in the same frequency band in the presence of block fading, are analyzed. This setup has attracted significant attention and several relaying protocols have been reported in the literature. However, most of the proposed solutions require either centrally coordinated scheduling or detailed channel state information (CSI) at the transmitter side. Here, an opportunistic relaying scheme is proposed, which alleviates these limitations. The scheme entails a two-hop communication protocol, in which sources communicate with destinations only through half-duplex relays. The key idea is to schedule at each hop only a subset of nodes that can benefit from \emph{multiuser diversity}. To select the source and destination nodes for each hop, it requires only CSI at receivers (relays for the first hop, and destination nodes for the second hop) and an integer-value CSI feedback to the transmitters. For the case when $n$ is large and $m$ is fixed, it is shown that the proposed scheme achieves a system throughput of $m/2$ bits/s/Hz. In contrast, the information-theoretic upper bound of $(m/2)\log\log n$ bits/s/Hz is achievable only with more demanding CSI assumptions and cooperation between the relays. Furthermore, it is shown that, under the condition that the product of block duration and system bandwidth scales faster than $\log n$, the achievable throughput of the proposed scheme scales as $\Theta(\log n)$. Notably, this is proven to be the optimal throughput scaling even if centralized scheduling is allowed, thus proving the optimality of the proposed scheme in the scaling law sense.
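The multiuser-diversity ingredient of the scheme can be illustrated numerically. The Monte Carlo sketch below is our own simplification: it simulates only the first hop, lets each relay schedule the source with the strongest Rayleigh-fading gain to it, and ignores inter-source interference, scheduling collisions, and the second hop, so it shows the best-of-$n$ selection gain rather than the full protocol's throughput.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_hop_rate(n, m, trials=300):
    """Average first-hop sum rate (bits/s/Hz) when each of m relays
    schedules the source with the strongest channel gain to it."""
    total = 0.0
    for _ in range(trials):
        g = rng.exponential(1.0, size=(n, m))  # |h|^2, Rayleigh fading
        best = g.max(axis=0)                   # best-of-n selection per relay
        total += np.log2(1.0 + best).sum()
    return total / trials

r_small, r_large = first_hop_rate(10, 2), first_hop_rate(200, 2)
```

Since the maximum of $n$ unit-mean exponential gains grows like $\log n$, the per-relay rate grows like $\log\log n$, matching the form of the upper bound quoted in the abstract.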
This paper presents an analysis of target localization accuracy, attainable by the use of MIMO (Multiple-Input Multiple-Output) radar systems, configured with multiple transmit and receive sensors, widely distributed over a given area. The Cramér-Rao lower bound (CRLB) for target localization accuracy is developed for both coherent and non-coherent processing. Coherent processing requires a common phase reference for all transmit and receive sensors. The CRLB is shown to be inversely proportional to the signal effective bandwidth in the non-coherent case, but is approximately inversely proportional to the carrier frequency in the coherent case. We further prove that optimization over the sensor positions lowers the CRLB by a factor equal to the product of the number of transmitting and receiving sensors. The best linear unbiased estimator (BLUE) is derived for the MIMO target localization problem. The BLUE's utility is in providing a closed-form localization estimate that facilitates the analysis of the relations between sensor locations, target location, and localization accuracy. Geometric dilution of precision (GDOP) contours are used to map the relative performance accuracy for a given layout of radars over a given geographic area.
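The effect of sensor geometry on the bound can be seen in a stripped-down example. The sketch below computes the CRLB for 2-D position from range-only measurements (a simplification of the MIMO radar model: coherent/non-coherent distinctions, bandwidth, and carrier-frequency factors are folded into a single range-error variance, and the sensor layouts are hypothetical), comparing a layout that surrounds the target with a tightly clustered one, in the spirit of GDOP maps.

```python
import numpy as np

def range_crlb_trace(sensors, target, sigma=1.0):
    """Trace of the CRLB for 2-D position from range-only measurements.
    Fisher information: J = (1/sigma^2) * sum_k u_k u_k^T,
    with u_k the unit vector from sensor k to the target."""
    J = np.zeros((2, 2))
    t = np.asarray(target, dtype=float)
    for s in np.asarray(sensors, dtype=float):
        d = t - s
        u = d / np.linalg.norm(d)
        J += np.outer(u, u) / sigma**2
    return np.trace(np.linalg.inv(J))

target = [0.0, 0.0]
spread = [[10, 0], [0, 10], [-10, 0], [0, -10]]  # sensors surround target
cluster = [[10, 0], [10, 1], [9, 0], [9, 1]]     # sensors bunched together
crlb_spread = range_crlb_trace(spread, target)
crlb_cluster = range_crlb_trace(cluster, target)
```

With the same number of sensors and the same range-error variance, the clustered layout sees the target from nearly identical directions, so the Fisher information is close to singular in the transverse direction and the position error bound is far larger, exactly the geometric dilution of precision the paper maps with GDOP contours.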
A network consisting of $n$ source-destination pairs and $m$ relays is considered. Focusing on the large system limit (large $n$), the throughput scaling laws of two-hop relaying protocols are studied for Rayleigh fading channels. It is shown that, under the practical constraints of single-user encoding-decoding scheme, and partial channel state information (CSI) at the transmitters (via integer-value feedback from the receivers), the maximal throughput scales as $\log n$ even if full relay cooperation is allowed. Furthermore, a novel decentralized opportunistic relaying scheme with receiver CSI, partial transmitter CSI, and no relay cooperation, is shown to achieve the optimal throughput scaling law of $\log n$.