
Approximate Computation of DFT without Performing Any Multiplications: Applications to Radar Signal Processing

Submitted by: Alican Bozkurt
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





In many practical problems, including some radar problems, it is not necessary to compute the DFT exactly. In this article a new multiplication-free algorithm for approximate computation of the DFT is introduced. All multiplications $(a \times b)$ in the DFT are replaced by an operator which computes $\mathrm{sign}(a \times b)(|a|+|b|)$. The new transform is especially useful when the signal processing algorithm requires correlations. The ambiguity function in radar signal processing requires a large number of multiplications to compute the correlations. This new additive operator is used to decrease the number of multiplications. Simulation examples involving passive radars are presented.
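As a rough illustration of the idea, the sketch below (Python/NumPy, not the authors' code) replaces every real multiplication in a direct DFT sum with the operator $\mathrm{sign}(a \times b)(|a|+|b|)$. The names `mf_prod` and `approx_dft` are ours, and the example assumes a real-valued input signal.

```python
import numpy as np

def mf_prod(a, b):
    """Multiplication-free operator: sign(a*b) * (|a| + |b|).
    Returns 0 when either operand is 0, mirroring ordinary multiplication."""
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def approx_dft(x):
    """Approximate DFT of a real signal x: each real multiplication in the
    direct DFT sum is replaced by mf_prod. A sketch of the idea in the
    abstract, not the paper's reference implementation."""
    N = len(x)
    n = np.arange(N)
    X = np.zeros(N, dtype=complex)
    for k in range(N):
        c = np.cos(2 * np.pi * k * n / N)    # real part of the DFT kernel
        s = -np.sin(2 * np.pi * k * n / N)   # imaginary part of the kernel
        X[k] = np.sum(mf_prod(x, c)) + 1j * np.sum(mf_prod(x, s))
    return X

# Usage: compare the dominant bin against the exact DFT on a noisy sinusoid
x = np.cos(2 * np.pi * 5 * np.arange(64) / 64) + 0.1 * np.random.randn(64)
X_approx = approx_dft(x)
X_exact = np.fft.fft(x)
print(np.argmax(np.abs(X_approx[:32])), np.argmax(np.abs(X_exact[:32])))  # both typically peak at bin 5
```

The approximation preserves the sign structure of the products, which is why correlation-type peaks (as in the ambiguity function) tend to survive even though the magnitudes are only approximate.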


Read also

Submodularity is a discrete domain functional property that can be interpreted as mimicking the role of the well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely, efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners as a variety of discrete optimization problems are encountered in a wide range of applications. Conventionally, two general approaches exist to solve discrete problems: $(i)$ relaxation into the continuous domain to obtain an approximate solution, or $(ii)$ development of a tailored algorithm that applies directly in the discrete domain. In both approaches, worst-case performance guarantees are often hard to establish. Furthermore, they are often complex, thus not practical for large-scale problems. In this paper, we show how certain scenarios lend themselves to exploiting submodularity so as to construct scalable solutions with provable worst-case performance guarantees. We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity which enables efficient optimization. With a mixture of theory and practice, we present different flavors of submodularity accompanying illustrative real-world case studies from modern SP and ML. In all cases, optimization algorithms are presented, along with hints on how optimality guarantees can be established.
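For readers unfamiliar with the near-optimality guarantees mentioned above, the sketch below shows the standard greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, the classic setting with a $(1-1/e)$ guarantee. The toy coverage function and the name `greedy_max` are ours and are not taken from the paper.

```python
from typing import Callable, List, Set

def greedy_max(f: Callable[[Set[int]], float], ground: List[int], k: int) -> Set[int]:
    """Greedy maximization of a monotone submodular set function f subject to
    |S| <= k. For such functions the greedy solution is within a factor
    (1 - 1/e) of the optimum (Nemhauser, Wolsey and Fisher)."""
    S: Set[int] = set()
    for _ in range(k):
        # pick the element with the largest marginal gain f(S + e) - f(S)
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S

# Example: a coverage function, a standard monotone submodular objective
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)))
print(greedy_max(coverage, list(sets), k=2))  # {0, 2}, covering all 6 items
```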
Synergistic design of communications and radar systems with common spectral and hardware resources is heralding a new era of efficiently utilizing a limited radio-frequency spectrum. Such a joint radar-communications (JRC) model has advantages of low cost, compact size, less power consumption, spectrum sharing, improved performance, and safety due to enhanced information sharing. Today, millimeter-wave (mmWave) communications have emerged as the preferred technology for short distance wireless links because they provide transmission bandwidth that is several gigahertz wide. This band is also promising for short-range radar applications, which benefit from the high range resolution arising from large transmit signal bandwidths. Signal processing techniques are critical in the implementation of mmWave JRC systems. Major challenges are joint waveform design and performance criteria that would optimally trade off between communications and radar functionalities. Novel multiple-input multiple-output (MIMO) signal processing techniques are required because mmWave JRC systems employ large antenna arrays. There are opportunities to exploit recent advances in cognition, compressed sensing, and machine learning to reduce required resources and dynamically allocate them with low overheads. This article provides a signal processing perspective of mmWave JRC systems with an emphasis on waveform design.
Joint communication and radar sensing (JCR) represents an emerging research field aiming to integrate the above two functionalities into a single system, sharing a majority of hardware and signal processing modules and, in a typical case, sharing a single transmitted signal. It is recognised as a key approach to significantly improving spectrum efficiency, reducing device size, cost and power consumption, and improving performance thanks to potential close cooperation of the two functions. Advanced signal processing techniques are critical for making the integration efficient, from transmission signal design to receiver processing. This paper provides a comprehensive overview of JCR systems from the signal processing perspective, with a focus on the state of the art. A balanced coverage of both transmitter and receiver is provided for three types of JCR systems: communication-centric, radar-centric, and joint design and optimization.
We give improved separations for the query complexity analogue of the log-approximate-rank conjecture, i.e., we show that there is a plethora of total Boolean functions on $n$ input bits, each of which has approximate Fourier sparsity at most $O(n^3)$ and randomized parity decision tree complexity $\Theta(n)$. This improves upon the recent work of Chattopadhyay, Mande and Sherif (JACM 20) both qualitatively (in terms of designing a large number of examples) and quantitatively (improving the gap from quartic to cubic). We leave open the problem of proving a randomized communication complexity lower bound for XOR compositions of our examples. A linear lower bound would lead to new and improved refutations of the log-approximate-rank conjecture. Moreover, if any of these compositions had even a sub-linear cost randomized communication protocol, it would demonstrate that randomized parity decision tree complexity does not lift to randomized communication complexity in general (with the XOR gadget).
Likelihood-free sequential Approximate Bayesian Computation (ABC) algorithms are increasingly popular inference tools for complex biological models. Such algorithms proceed by constructing a succession of probability distributions over the parameter space conditional upon the simulated data lying in an $\epsilon$-ball around the observed data, for decreasing values of the threshold $\epsilon$. While in theory the distributions (starting from a suitably defined prior) will converge towards the unknown posterior as $\epsilon$ tends to zero, the exact sequence of thresholds can impact upon the computational efficiency and success of a particular application. In particular, we show here that the currently preferred method of choosing thresholds as a pre-determined quantile of the distances between simulated and observed data from the previous population can lead to the inferred posterior distribution being very different to the true posterior. Threshold selection thus remains an important challenge. Here we propose an automated and adaptive method that allows us to balance the need to minimise the threshold with computational efficiency. Moreover, our method, which centres around predicting the threshold-acceptance rate curve using the unscented transform, enables us to avoid local minima, a problem that has plagued previous threshold schemes.
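For context, the sketch below implements the quantile-based threshold schedule that this abstract critiques: each round's $\epsilon$ is taken as a fixed quantile of the previous round's accepted distances. The proposed unscented-transform method is not reproduced; the function names, the perturbation kernel, and the toy Gaussian example are ours.

```python
import numpy as np

def abc_smc_quantile(observed, prior_sample, simulate, distance,
                     n_particles=500, n_rounds=5, quantile=0.5):
    """Sketch of sequential ABC with a quantile-based threshold schedule:
    each round's epsilon is a fixed quantile of the previous round's accepted
    distances. The abstract argues this schedule can bias the inferred posterior."""
    particles = np.array([prior_sample() for _ in range(n_particles)])
    eps = np.inf
    for _ in range(n_rounds):
        accepted, dists = [], []
        while len(accepted) < n_particles:
            theta = particles[np.random.randint(n_particles)]
            theta = theta + 0.1 * np.random.randn()   # simple perturbation kernel
            d = distance(simulate(theta), observed)
            if d <= eps:
                accepted.append(theta)
                dists.append(d)
        particles = np.array(accepted)
        eps = np.quantile(dists, quantile)            # shrink the threshold
    return particles

# Toy example: infer the mean of a Gaussian from its sample mean
obs = 2.0
post = abc_smc_quantile(
    observed=obs,
    prior_sample=lambda: np.random.uniform(-5, 5),
    simulate=lambda th: np.mean(th + np.random.randn(50)),
    distance=lambda a, b: abs(a - b),
)
print(post.mean())  # typically close to 2.0
```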