
Sub-Nyquist Sampling: Bridging Theory and Practice

 Added by Moshe Mishali
 Publication date 2011
 Language English





Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions are being pushed to sophisticated software algorithms, leaving only delicate, finely-tuned tasks to the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to make their way from the mathematics to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. We hope that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise toward practical applications, and encourage further research into this exciting frontier.
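The Shannon-Nyquist baseline that these sub-Nyquist strategies challenge can be illustrated numerically. The sketch below (all frequencies and durations are assumed for illustration) samples a 7 Hz sinusoid above and below its 14 Hz Nyquist rate; the sub-Nyquist sampling aliases the tone to a lower apparent frequency.

```python
import numpy as np

# A 7 Hz sinusoid (Nyquist rate: 14 Hz), sampled at two different rates.
f_sig = 7.0  # signal frequency in Hz (chosen for illustration)

def dominant_freq(fs, duration=4.0):
    """Sample the sinusoid at rate fs and return the apparent frequency."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_sig * t)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spec)]

f_fast = dominant_freq(fs=20.0)  # above Nyquist: recovers 7 Hz
f_slow = dominant_freq(fs=10.0)  # below Nyquist: aliases to 10 - 7 = 3 Hz
```

The sub-Nyquist methods surveyed in the paper escape this limit by exploiting structure (sparsity, union of subspaces) that a plain sinusoid lacks.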



Related research

Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability under quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, whose bandwidth is limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
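The mixing stage described above can be sketched in a toy discrete analogue of a single converter channel (an illustration of the mixing idea only, not the actual hardware design; all sizes and rates below are assumptions). A tone far above the low output rate is multiplied by a periodic +/-1 sequence, crudely lowpass filtered by block averaging, and sampled at 1/64 of the Nyquist-grid rate; the tone reappears folded into baseband.

```python
import numpy as np

rng = np.random.default_rng(0)
N, fs = 4096, 4096.0          # Nyquist-grid length and rate in Hz (assumed)
t = np.arange(N) / fs
f0 = 900.0                    # "unknown" carrier, far above the 64 Hz output rate
x = np.cos(2 * np.pi * f0 * t)

M = 64                        # period of the mixing sequence, in samples
p = np.tile(rng.choice([-1.0, 1.0], size=M), N // M)  # periodic +/-1 waveform

mixed = x * p                 # copies the spectrum to every harmonic of p
y = mixed.reshape(-1, M).mean(axis=1)  # crude lowpass + decimate to fs/M = 64 Hz

# Every spectral copy aliases to f0 mod (fs/M) = 900 mod 64 = 4 Hz,
# so the low-rate samples contain the tone at 4 Hz.
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=M / fs)
folded = freqs[np.argmax(spec)]
```

In the real system, several such channels with different mixing sequences run in parallel, and the digital recovery stage untangles which original band each folded component came from.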
As technology advances, higher-frequency signals must be processed in various applications. To digitize such signals, conventional analog-to-digital converters face implementation challenges due to the higher sampling rates. Hence, lower sampling rates (i.e., sub-Nyquist) are considered cost efficient. A well-known approach is to consider sparse signals that have fewer nonzero frequency components compared to the highest frequency component. When the sparse positions are known a priori, well-established methods already exist. However, there are applications where such information is not available. For such cases, a number of approaches have recently been proposed. In this paper, we propose several random sampling recovery algorithms which do not require any anti-aliasing filter. Moreover, we provide conditions under which these recovery techniques converge to the signal. Finally, we confirm the performance of these methods through extensive simulations.
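One common blueprint for this setting (a hedged sketch, not necessarily the paper's specific algorithms; the signal model, sizes, and the use of orthogonal matching pursuit are assumptions) observes a frequency-sparse signal at random time instants, with no anti-aliasing filter, and recovers the active frequencies greedily:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                               # Nyquist-grid length (assumed)
support = [12, 40, 97]                # "unknown" active frequency bins
amps = np.array([1.0, 0.7, 0.5])
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N)  # DFT synthesis atoms
x = F[:, support] @ amps              # time-domain signal, sparse in frequency

m = 48                                # number of random samples, m << N
idx = rng.choice(N, size=m, replace=False)
A, y = F[idx, :], x[idx]              # raw random samples, no filtering

# Orthogonal matching pursuit: pick the atom most correlated with the
# residual, re-fit all chosen atoms by least squares, repeat.
resid, chosen = y.copy(), []
for _ in range(len(support)):
    chosen.append(int(np.argmax(np.abs(A.conj().T @ resid))))
    coeff, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    resid = y - A[:, chosen] @ coeff
```

With enough random samples relative to the sparsity, the residual drops to numerical zero and `chosen` matches the true support; the convergence conditions the paper derives make this trade-off precise.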
Multiple input multiple output (MIMO) radar exhibits several advantages with respect to traditional radar array systems in terms of flexibility and performance. However, MIMO radar poses new challenges for both hardware design and digital processing. In particular, achieving high azimuth resolution requires a large number of transmit and receive antennas. In addition, the digital processing is performed on samples of the received signal, from each transmitter to each receiver, at its Nyquist rate, which can be prohibitively large when high resolution is needed. Overcoming the rate bottleneck, sub-Nyquist sampling methods have been proposed that break the link between radar signal bandwidth and sampling rate. In this work, we extend these methods to MIMO configurations and propose a sub-Nyquist MIMO radar (SUMMeR) system that performs both time and spatial compression. We present a range-azimuth-Doppler recovery algorithm from sub-Nyquist samples obtained from a reduced number of transmitters and receivers, that exploits the sparsity of the recovered target parameters. This allows us to reduce the number of deployed antennas and the number of samples per receiver, without degrading the time and spatial resolution. Simulations illustrate the detection performance of SUMMeR for different compression levels and show that both time and spatial resolution are preserved with respect to classic Nyquist MIMO configurations. We also examine the impact of design parameters, such as antenna locations and carrier frequencies, on the detection performance, and provide guidelines for their choice.
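The core of such sparse recovery can be illustrated in one dimension (delay/range only; azimuth and Doppler are omitted, and the grid, sizes, and greedy solver are assumptions, not the paper's full SUMMeR algorithm): target delays are recovered from a small set of Fourier coefficients of the received pulse, far fewer than Nyquist-rate sampling would provide.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128                          # delay (range) grid size (assumed)
true_bins = [17, 83]             # "unknown" target delays, as grid indices
amps = np.array([1.0, 0.8])      # target amplitudes

ks = rng.choice(N, size=32, replace=False)   # measured Fourier coefficients
# c_k = sum_l a_l * exp(-2j*pi*k*tau_l/N): a partial-Fourier system
D = np.exp(-2j * np.pi * np.outer(ks, np.arange(N)) / N)
c = D[:, true_bins] @ amps

# Greedy recovery (orthogonal matching pursuit) of the two delays.
resid, found = c.copy(), []
for _ in range(len(true_bins)):
    found.append(int(np.argmax(np.abs(D.conj().T @ resid))))
    a_hat, *_ = np.linalg.lstsq(D[:, found], c, rcond=None)
    resid = c - D[:, found] @ a_hat
```

The full system applies the same principle jointly over delay, azimuth, and Doppler, with the spatial dimension compressed by thinning the antenna arrays.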
The Internet of Bio-Nano Things is a significant development for next-generation communication technologies. Because conventional wireless communication technologies face challenges in realizing new applications (e.g., in-body area networks for health monitoring) and necessitate a substitution of information carriers, researchers have shifted their interest to molecular communications (MC). Although remarkable progress has been made in this field over the last decade, advances remain far from sufficient to achieve its application objectives. A crucial problem of MC is the low data rate and high error rate inherent in the dynamics of particle-based carriers, in contrast to wave-based conventional communications. Therefore, it is important to investigate the resources by which MC can obtain additional information paths, and to provide strategies to exploit these resources. This study examines techniques for resource aggregation and exploitation to provide prospective directions for future progress in MC. In particular, we focus on state-of-the-art studies of multiple-input multiple-output (MIMO) systems. We discuss the possible advantages of applying MIMO to various MC system models. Furthermore, we survey studies that aim to achieve MIMO gains for the respective models, from theoretical background to prototypes. Finally, we conclude by summarizing the challenges that remain to be addressed.
Donoho and Stark have shown that precise deterministic recovery of missing information contained in a time interval shorter than the time-frequency uncertainty limit is possible. We analyze this signal recovery mechanism from a physics point of view and show that the well-known Shannon-Nyquist sampling theorem, which is fundamental in signal processing, also uses essentially the same mechanism. The uncertainty relation in the context of information theory, which is based on Fourier analysis, provides a criterion to distinguish Shannon-Nyquist sampling from compressed sensing. A new signal recovery formula, analogous to the Donoho-Stark formula, is given using the idea of Shannon-Nyquist sampling; in this formulation, both the smearing of information below the uncertainty limit and the recovery of information with a specified bandwidth take place. We also discuss the recovery of states from the domain below the uncertainty limit of coordinate and momentum in quantum mechanics, and show that in principle the state recovery works by assuming ideal measurement procedures. The recovery of the lost information in the sub-uncertainty domain means that the loss of information in such a small domain is not fatal, which is in accord with our common understanding of the uncertainty principle, although its precise recovery is something we are not used to in quantum mechanics. The uncertainty principle provides a universal sampling criterion covering both the classical Shannon-Nyquist sampling theorem and the quantum mechanical measurement.
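The Donoho-Stark recovery mechanism can be demonstrated numerically (a minimal sketch with assumed parameters, not the paper's own formula): a band-limited signal with a short run of missing samples is recovered by least squares over its known frequency support. Recovery is stable when the product of gap length and bandwidth is small relative to the signal length, in line with the uncertainty limit.

```python
import numpy as np

N = 200
freqs = np.arange(-4, 5)                 # band limit: 9 low frequencies (assumed)
rng = np.random.default_rng(3)
coef = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, freqs) / N)   # band-limited synthesis basis
x = B @ coef                                      # the full signal

# A 10-sample gap is missing; gap * bandwidth = 10 * 9 = 90 < N = 200.
obs = np.setdiff1d(n, np.arange(90, 100))

# Fit the band-limited model to the observed samples only, then
# synthesize the full signal, including the gap.
c_hat, *_ = np.linalg.lstsq(B[obs], x[obs], rcond=None)
x_hat = B @ c_hat
gap_err = np.max(np.abs(x_hat[90:100] - x[90:100]))  # near machine precision
```

Making the gap longer or the band wider degrades the conditioning of the least-squares system, which is the numerical face of the uncertainty criterion discussed above.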
