Millimeter-wave radars are increasingly being integrated into commercial vehicles to support advanced driver-assistance system (ADAS) features. A key shortcoming of present-day vehicular radar imaging is poor azimuth resolution (for side-looking operation) due to form-factor limits on antenna size and placement. In this paper, we propose a solution via a new multiple-input multiple-output synthetic aperture radar (MIMO-SAR) imaging technique that applies coherent SAR principles to vehicular MIMO radar to improve the side-view (angular) resolution. The proposed two-stage hierarchical MIMO-SAR processing workflow drastically reduces the computational load while preserving image resolution. To enable coherent processing over the synthetic aperture, we integrate a radar odometry algorithm that estimates the trajectory of the ego-radar. The MIMO-SAR algorithm is validated by both simulations and real experimental data collected by a vehicle-mounted radar platform.
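The coherent SAR principle invoked in this abstract can be illustrated with a minimal back-projection sketch: each slow-time pulse along the vehicle's trajectory contributes coherently to every image pixel according to its round-trip range. This is a generic single-scatterer, single-frequency toy, not the paper's hierarchical MIMO-SAR pipeline; all parameters (carrier, aperture, grid) are illustrative assumptions.

```python
import numpy as np

c = 3e8
fc = 77e9                                    # typical automotive mmWave carrier (assumed)
lam = c / fc
positions = np.linspace(-1.0, 1.0, 201)      # synthetic aperture along x (m)
target = np.array([0.3, 5.0])                # point scatterer at (x, y) in m

# simulated ideal phase history: exp(-j 4*pi*R/lambda) per pulse
R = np.hypot(positions - target[0], target[1])
echoes = np.exp(-1j * 4 * np.pi * R / lam)

# back-project: for each pulse, add the matched phase at every pixel
xs = np.linspace(-0.5, 0.5, 41)
ys = np.linspace(4.5, 5.5, 41)
img = np.zeros((len(ys), len(xs)), dtype=complex)
for k, px in enumerate(positions):
    Rg = np.hypot(xs[None, :] - px, ys[:, None])
    img += echoes[k] * np.exp(1j * 4 * np.pi * Rg / lam)

iy, ix = np.unravel_index(np.abs(img).argmax(), img.shape)
print(xs[ix], ys[iy])   # image peak lands near the true target (0.3, 5.0)
```

The azimuth resolution of the peak improves with aperture length, which is exactly the motivation for synthesizing a long aperture from vehicle motion; note also why the radar odometry step matters, since the back-projection phases depend on knowing each pulse position `px`.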
Traditional multiple-input multiple-output (MIMO) radars, which transmit orthogonally coded waveforms, suffer from a range-azimuth resolution trade-off. In this work, we adopt a frequency division multiple access (FDMA) approach that resolves this conflict: we combine narrow per-channel bandwidth for high azimuth resolution with a large total bandwidth for high range resolution. We process all channels jointly to overcome the FDMA limitation of range resolution to a single channel's bandwidth, and we address range-azimuth coupling using a random array configuration.
Millimeter-wave (mmWave) radars are increasingly being integrated into commercial vehicles to support new advanced driver-assistance system (ADAS) features that require accurate location and Doppler velocity estimates of objects, independent of environmental conditions. To explore radar-based ADAS applications, we have updated our test-bed with the Texas Instruments 4-chip cascaded FMCW radar (TIDEP-01012), which forms a non-uniform 2D MIMO virtual array. In this paper, we develop the necessary received-signal models for applying different direction-of-arrival (DoA) estimation algorithms and experimentally validate their performance on the formed virtual array under controlled scenarios. To test the robustness of mmWave radars under adverse weather conditions, we collected a raw radar dataset (post-demodulation I-Q samples) of various objects using a driven vehicle-mounted platform, specifically in snowy and foggy situations where cameras are largely ineffective. Initial results from applying radar imaging algorithms to this dataset are presented.
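The MIMO virtual array mentioned above follows a standard construction: each transmit-receive pair behaves like a single receive element located at the vector sum of the physical TX and RX positions. A minimal sketch, with purely illustrative element positions (not the actual TIDEP-01012 layout):

```python
import numpy as np

# Physical element positions in (x, z), in units of half-wavelengths.
# These coordinates are assumptions for illustration only.
tx = np.array([[0.0, 0.0], [0.0, 1.5], [0.5, 3.0]])                # 3 TX
rx = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])    # 4 RX

# Each (tx, rx) pair yields one virtual element at tx_pos + rx_pos.
virtual = (tx[:, None, :] + rx[None, :, :]).reshape(-1, 2)
print(virtual.shape)   # 3 TX x 4 RX -> (12, 2): 12 virtual elements
```

With staggered TX positions in both axes, the 12 virtual elements span a 2D (azimuth-elevation) aperture, which is what enables 2D DoA estimation from only 3 + 4 physical elements.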
In this paper, we develop two high-resolution channel estimation schemes based on the estimation of signal parameters via rotational invariance techniques (ESPRIT) for frequency-selective millimeter-wave (mmWave) massive MIMO systems. The first scheme is based on two-dimensional ESPRIT (TDE) and includes three stages of pilot transmission: it first estimates the angles of arrival (AoA) and angles of departure (AoD) and then pairs them. The second scheme, based on one-dimensional ESPRIT and minimum searching (EMS), reduces the pilot transmission from three stages to two and therefore lowers the pilot overhead: it first estimates the AoD of each channel path and then searches for the minimum within the identified mainlobe. To guarantee robust channel estimation performance, we also develop a design method for the hybrid precoding and combining matrices so that the received signal power remains almost constant for any AoA and AoD. Finally, we demonstrate that the two proposed schemes outperform existing channel estimation schemes in terms of both computational complexity and estimation performance.
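The rotational-invariance idea underlying both schemes can be shown in its simplest form: 1D ESPRIT on a uniform linear array, where the phase shift between two overlapping subarrays encodes each angle. This is a textbook illustration, not the paper's TDE or EMS scheme, and the array/source parameters are assumptions.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Estimate DoAs (radians) from ULA snapshots X (n_elements x n_snapshots)
    via 1-D ESPRIT; d is the element spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    w, V = np.linalg.eigh(R)                        # eigenvalues ascending
    Es = V[:, -n_sources:]                          # signal subspace
    # rotational invariance between the two overlapping subarrays
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    phases = np.angle(np.linalg.eigvals(Phi))       # = 2*pi*d*sin(theta)
    return np.arcsin(phases / (2 * np.pi * d))

# simulate two far-field sources at -20 and +30 degrees on an 8-element ULA
rng = np.random.default_rng(0)
angles = np.deg2rad([-20.0, 30.0])
n_elem, n_snap = 8, 200
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_elem), np.sin(angles)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.01 * (rng.standard_normal((n_elem, n_snap))
            + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ S + N

est = np.sort(np.rad2deg(esprit_doa(X, 2)))
print(est)   # close to [-20, 30]
```

Because the angles come from an eigendecomposition rather than a grid search, the method is "search-free", which is the computational-complexity advantage the abstract refers to.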
As autonomous driving develops rapidly, accurate object detection in complex scenarios has attracted wide attention as a prerequisite for safety. Millimeter-wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection. This article presents a detailed survey of obstacle detection methods based on mmWave radar and vision fusion. First, we introduce the tasks, evaluation criteria, and datasets of object detection for autonomous driving. Then, the process of mmWave radar and vision fusion is divided into three parts: sensor deployment, sensor calibration, and sensor fusion, each of which is reviewed comprehensively. In particular, we classify the fusion methods into data-level, decision-level, and feature-level fusion. In addition, we review the fusion of lidar and vision in autonomous driving for obstacle detection, object classification, and road segmentation, which is promising for the future. Finally, we summarize the article.
This paper presents an inertial-sensor-aided technique for beam alignment and tracking in massive multiple-input multiple-output (MIMO) vehicle-to-vehicle (V2V) communications based on millimeter waves (mmWave). Since directional communications in vehicular scenarios are severely hindered by beam pointing issues, a beam alignment procedure has to be carried out periodically to guarantee communication reliability. For massive MIMO links, the beam sweeping approach is known to be time-consuming and often unfeasible due to latency constraints. To speed up the process, we propose a method that exploits a priori information on array dynamics, provided by an inertial sensor on the transceivers, to assist the beam alignment procedure. The proposed inertial-sensor-aided technique allows continuous tracking of the beam while transmitting, avoiding frequent realignment phases. Numerical results based on real measurements from on-transceiver accelerometers demonstrate a significant gain in V2V communication throughput with respect to conventional beam alignment protocols.