This paper presents a radar cross-section (RCS)-based statistical recognition system for identifying and classifying unmanned aerial vehicles (UAVs) at microwave frequencies. First, the paper presents the results of vertical (VV) and horizontal (HH) polarization RCS measurements of six commercial UAVs at 15 GHz and 25 GHz in a compact-range anechoic chamber. The measurement results show that the average RCS of a UAV depends on the shape, size, and material composition of the target as well as on the azimuth angle, frequency, and polarization of the illuminating radar. Afterward, radar characterization of the target UAVs is achieved by fitting the RCS measurement data to 11 different statistical models. From the model selection analysis, we observe that the lognormal, generalized extreme value, and gamma distributions are the most suitable for modeling the RCS of the commercial UAVs, while the Gaussian distribution performs relatively poorly. The best-fitting UAV radar statistics form the class-conditional probability densities of the proposed UAV statistical recognition system. The performance of the system is evaluated at different signal-to-noise ratios (SNRs) with the aid of Monte Carlo analysis. At an SNR of 10 dB, an average classification accuracy of 97.43% or better is achievable.
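To make the fit-then-classify pipeline concrete, the following is a minimal Python sketch of the idea: each candidate distribution is fitted to a class's RCS samples, the best fit is selected by AIC, and a new observation is assigned to the class with the highest conditional log-likelihood. The distribution list, the AIC criterion, and the synthetic data are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of RCS-based statistical recognition, assuming
# rcs_samples[uav] holds linear-scale RCS measurements per UAV class.
import numpy as np
from scipy import stats

CANDIDATES = [stats.lognorm, stats.genextreme, stats.gamma, stats.norm]

def best_fit(samples):
    """Fit each candidate distribution; keep the one with the lowest AIC."""
    best = None
    for dist in CANDIDATES:
        params = dist.fit(samples)
        ll = np.sum(dist.logpdf(samples, *params))
        aic = 2 * len(params) - 2 * ll
        if best is None or aic < best[0]:
            best = (aic, dist, params)
    return best[1], best[2]

def classify(observation, fitted):
    """Assign the observation to the class with the highest log-likelihood."""
    scores = {uav: dist.logpdf(observation, *params).sum()
              for uav, (dist, params) in fitted.items()}
    return max(scores, key=scores.get)

# Hypothetical usage with synthetic data standing in for chamber measurements:
rng = np.random.default_rng(0)
rcs_samples = {"uav_a": rng.lognormal(-2.0, 0.5, 500),
               "uav_b": rng.gamma(2.0, 0.05, 500)}
fitted = {uav: best_fit(s) for uav, s in rcs_samples.items()}
print(classify(rng.lognormal(-2.0, 0.5, 20), fitted))
```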
This work presents a simulation framework to generate human micro-Dopplers in WiFi-based passive radar scenarios, wherein we simulate IEEE 802.11g-compliant WiFi transmissions using MATLAB's WLAN Toolbox and human animation models derived from a marker-based motion capture system. We integrate the WiFi transmission signals with the human animation data to generate micro-Doppler features that incorporate the diversity of human motion characteristics and the sensor parameters. In this paper, we consider five human activities. We uniformly benchmark the classification performance of multiple machine learning and deep learning models against a common dataset. Further, we validate the classification performance using real radar data captured simultaneously with the motion capture system. We present experimental results using simulations and measurements demonstrating good classification accuracies of $\geq 95\%$ and $\approx 90\%$, respectively.
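As a rough illustration of how such micro-Doppler signatures arise, the following Python sketch forms the short-time Fourier transform of a slow-time return from a single oscillating scatterer. It is a stand-in for the paper's MATLAB/WLAN Toolbox pipeline, and all motion and sensor parameters are assumed values.

```python
# Illustrative sketch (not the paper's pipeline): a micro-Doppler signature
# is typically the STFT magnitude of the slow-time radar return. Here one
# oscillating scatterer (e.g., a swinging arm) stands in for the
# motion-capture-driven human model.
import numpy as np
from scipy.signal import stft

fc = 2.4e9                      # 802.11g carrier frequency (Hz)
c = 3e8
fs = 1000.0                     # slow-time sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)

# Radial range of the scatterer: bulk motion plus a periodic limb swing.
r = 5.0 + 0.5 * t + 0.15 * np.sin(2 * np.pi * 1.8 * t)
phase = 4 * np.pi * fc * r / c  # two-way phase history
sig = np.exp(1j * phase)

f, tt, Z = stft(sig, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
micro_doppler = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
# micro_doppler (dB) over np.fft.fftshift(f) x tt is the kind of feature
# image a classifier would consume.
```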
This paper presents a sparse denoising autoencoder (SDAE)-based deep neural network (DNN) for the direction finding (DF) of small unmanned aerial vehicles (UAVs). It is motivated by the practical challenges associated with classical DF algorithms such as MUSIC and ESPRIT. The proposed DF scheme is practical and of low complexity in the sense that a phase synchronization mechanism, an antenna calibration mechanism, and an analytical model of the antenna radiation pattern are not essential. Moreover, the proposed DF method can be implemented using a single-channel RF receiver. The paper also validates the proposed method experimentally.
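A minimal PyTorch sketch of a sparse denoising autoencoder of the kind named above is given below; the input dimensionality, sparsity weight, and training snippet are assumptions, since the abstract does not specify the architecture.

```python
# Minimal sparse denoising autoencoder sketch, assuming fixed-length feature
# snapshots from a single-channel receiver. Layer sizes and the L1 sparsity
# weight are illustrative.
import torch
import torch.nn as nn

class SDAE(nn.Module):
    def __init__(self, n_in=64, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

model = SDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 64)                 # clean training snapshots (synthetic)
noisy = x + 0.1 * torch.randn_like(x)    # denoising: corrupt the input
recon, h = model(noisy)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * h.abs().mean()  # L1 sparsity
loss.backward()
opt.step()
# The learned encoder would feed a downstream head that maps the compressed
# features to direction-of-arrival estimates.
```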
Radar and optical simultaneous observations of meteors are important for understanding the size distribution of interplanetary dust. However, faint meteors detected by high-power large-aperture radar observations, which are typically as faint as 10 mag. in the optical, had until recently not been detected in optical observations, mainly due to insufficient sensitivity of the optical instruments. In this paper, two simultaneous radar and optical observation campaigns were organized. The first was carried out in 2009 to 2010 using the Middle and Upper Atmosphere Radar (MU radar) and an image-intensified CCD camera. The second was carried out in 2018 using the MU radar and a mosaic CMOS camera, Tomo-e Gozen, mounted on the 1.05-m Kiso Schmidt Telescope. In total, 331 simultaneous meteors were detected. The relationship between radar cross sections and optical V-band magnitudes is well approximated by a linear function, and a transformation function from the radar cross section to the V-band magnitude was derived for sporadic meteors. The transformation function was applied to about 150,000 meteors detected by the MU radar in 2009--2015, a large part of which are sporadic, and a luminosity function was derived in the magnitude range of $-1.5$ to $9.5$ mag. The luminosity function is well approximated by a single power-law function with a population index of $r = 3.52 \pm 0.12$. The present observations indicate that the MU radar has the capability to detect interplanetary dust of $10^{-5}$ to $10^{0}$ g in mass as meteors.
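For concreteness, the two fitted relations described above take the generic forms below, assuming the RCS is expressed logarithmically (in dBsm) as is conventional, and using the standard cumulative definition of the meteor population index; the slope $a$, intercept $b$, and normalization $N_0$ are placeholders, not values reported in the paper.

```latex
% Generic forms of the fitted relations; a, b, and N_0 are placeholder
% fit parameters, not values from the paper.
\begin{align}
  M_V &\approx a\,\sigma_{\mathrm{dBsm}} + b, \\
  N(\leq M_V) &= N_0\, r^{M_V}, \qquad r = 3.52 \pm 0.12 .
\end{align}
```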
In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system for gesture recognition is proposed. To improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth, and elevation, over multiple measurement cycles and encodes it into a feature cube. Rather than feeding a range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected to recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input of a shallow CNN for gesture recognition, thereby reducing the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD captures the time stamp at which a gesture finishes and feeds the hand profile of all relevant measurement cycles before this time stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed on an edge computing platform whose performance is notably inferior to that of a state-of-the-art personal computer. The experimental results show that the proposed framework is capable of classifying 12 gestures in real time with a high F1 score.
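For illustration, a shallow CNN operating on such a feature cube might look like the following PyTorch sketch; the cube layout (four profile channels over time and bins) and all layer widths are assumptions rather than the paper's exact network.

```python
# Sketch of the "feature cube -> shallow CNN" idea. The cube is assumed to
# have 4 channels (range/Doppler/azimuth/elevation profiles) over time and
# bin axes; dimensions are illustrative.
import torch
import torch.nn as nn

class ShallowGestureCNN(nn.Module):
    def __init__(self, n_gestures=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_gestures)

    def forward(self, cube):                 # cube: (batch, 4, T, bins)
        return self.classifier(self.features(cube).flatten(1))

logits = ShallowGestureCNN()(torch.randn(8, 4, 32, 64))
print(logits.shape)                          # torch.Size([8, 12])
```

Keeping the network this shallow, with a fixed-size cube replacing a recurrent stage, is what makes edge deployment plausible in the scenario the abstract describes.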
In the context of electroencephalogram (EEG)-based driver drowsiness recognition, designing a calibration-free system remains a challenging task, since EEG signals vary significantly among different subjects and recording sessions. As deep learning has received much research attention in recent years, many efforts have been made to use deep learning methods for EEG signal recognition. However, existing works mostly treat deep learning models as black-box classifiers, while what the models have learned and to what extent they are affected by noise in the EEG data remain underexplored. In this paper, we develop a novel convolutional neural network that can explain its decisions by highlighting the local areas of the input sample that contain important information for the classification. The network has a compact structure for ease of interpretation and takes advantage of separable convolutions to process the EEG signals in a spatial-temporal sequence. Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject drowsiness recognition, which is higher than that of conventional baseline methods (53.4%-72.68%) and state-of-the-art deep learning methods (63.90%-65.61%). Visualization results show that the model has learned to recognize biologically explainable features from the EEG signals, e.g., alpha spindles, as strong indicators of drowsiness across different subjects. In addition, we explore the reasons behind some wrongly classified samples and how the model is affected by artifacts and noise in the data. Our work illustrates a promising direction for using interpretable deep learning models to discover meaningful patterns related to different mental states from complex EEG signals.
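As an illustration of the separable-convolution idea for EEG, the PyTorch sketch below applies a spatial (across-channel) convolution followed by a depthwise-then-pointwise temporal convolution; channel counts and kernel lengths are assumptions, not the paper's configuration.

```python
# Compact EEG network sketch using separable convolutions, in the spirit of
# the abstract: spatial filtering across electrodes, then temporal filtering
# factored into depthwise + pointwise steps.
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    def __init__(self, n_channels=30, n_classes=2):
        super().__init__()
        # Spatial filter: mixes EEG electrodes into a few virtual channels.
        self.spatial = nn.Conv2d(1, 8, kernel_size=(n_channels, 1))
        # Temporal filter: depthwise conv along time (one filter per map),
        # then a pointwise 1x1 conv -- the "separable" decomposition.
        self.temporal = nn.Sequential(
            nn.Conv2d(8, 8, kernel_size=(1, 64), groups=8, padding=(0, 32)),
            nn.Conv2d(8, 16, kernel_size=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                    # x: (batch, 1, electrodes, time)
        return self.head(self.temporal(self.spatial(x)).flatten(1))

out = CompactEEGNet()(torch.randn(4, 1, 30, 384))
print(out.shape)                             # torch.Size([4, 2])
```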