
Mengan Song, Yiming Huo, Tao Lu (2021)
As 5G wireless systems and networks are now being globally commercialized and deployed, more diversified application scenarios are emerging, quickly reshaping our societies and paving the road to the beyond-5G (6G) era, in which terahertz (THz) and unmanned aerial vehicle (UAV) communications may play critical roles. In this paper, aerial channel models under multiple meteorological conditions, such as rain, fog, and snow, are investigated at frequencies of interest (from 2 GHz to 900 GHz) for UAV communications. Furthermore, the link budget and the received signal-to-noise ratio (SNR) performance under existing air-to-ground (A2G) channel models are studied with the antenna system taken into account. The relationship between the 3D coverage radius and UAV altitude under the influence of multiple weather (MW) conditions is simulated. Numerical results show that medium rain has the greatest effect on UAV coverage at millimeter-wave (mmWave) bands, while snow has the largest impact at near-THz bands. In addition, as the frequency increases, a corresponding increase in the number of antennas can effectively compensate for the propagation loss introduced by weather factors, while the antenna form factor and weight can be kept within the UAV's payload limits.
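The link-budget analysis described above can be sketched as follows. This is a minimal illustration, not the paper's model: the free-space path loss is the standard Friis expression, while the rain-attenuation coefficients `k` and `alpha`, the noise figure, and all numeric parameters are placeholder assumptions (real values depend on frequency and polarization).

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis), in dB."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def rain_attenuation_db(rate_mm_h, distance_km, k=0.187, alpha=1.021):
    """Specific rain attenuation gamma = k * R^alpha (dB/km), ITU-R style.
    k and alpha here are illustrative placeholders, not values from the paper."""
    return k * rate_mm_h**alpha * distance_km

def received_snr_db(pt_dbm, gt_dbi, gr_dbi, distance_m, freq_hz,
                    rain_mm_h=0.0, bandwidth_hz=100e6, noise_figure_db=7.0):
    """Received SNR = Tx power + antenna gains - losses - noise floor (all in dB)."""
    noise_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
    loss = fspl_db(distance_m, freq_hz) + rain_attenuation_db(rain_mm_h, distance_m / 1000)
    return pt_dbm + gt_dbi + gr_dbi - loss - noise_dbm
```

Increasing the antenna gains `gt_dbi`/`gr_dbi` (i.e., adding array elements) directly offsets the weather-induced loss term, which is the compensation effect the abstract describes.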
In this work, we present a simple and general search space shrinking method, called Angle-Based search space Shrinking (ABS), for Neural Architecture Search (NAS). Our approach progressively simplifies the original search space by dropping unpromising candidates, which reduces the difficulty for existing NAS methods of finding superior architectures. In particular, we propose an angle-based metric to guide the shrinking process. We provide comprehensive evidence showing that, in a weight-sharing supernet, the proposed metric is more stable and accurate than accuracy-based and magnitude-based metrics for predicting the capability of child models. We also show that the angle-based metric converges quickly during supernet training, enabling us to obtain promising shrunk search spaces efficiently. ABS can be easily applied to most NAS approaches (e.g. SPOS, FairNAS, ProxylessNAS, DARTS and PDARTS). Comprehensive experiments show that ABS can dramatically enhance existing NAS approaches by providing a promising shrunk search space.
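The angle-based metric can be sketched as the angle between a child model's flattened weight vector at supernet initialization and the same weights after supernet training. This is a hedged reading of the abstract, not the authors' exact implementation; the function name and input format are assumptions for illustration.

```python
import numpy as np

def angle_metric(init_weights, trained_weights):
    """Angle (radians) between the concatenated, flattened weight vectors
    of a child model at supernet initialization and after supernet training.
    Under ABS, a larger angle is taken to indicate a more promising child.
    Inputs: lists of numpy arrays, one per layer of the child model."""
    v0 = np.concatenate([w.ravel() for w in init_weights])
    v1 = np.concatenate([w.ravel() for w in trained_weights])
    cos = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the metric depends only on stored weights, it can rank candidates without evaluating each child model on validation data, which is one plausible reason it is more stable than accuracy-based scores.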
Disorder is more the rule than the exception in natural and synthetic materials. Nonetheless, wave propagation within inhomogeneously disordered materials has received scant attention. We combine microwave experiments and theory to find the spatial variation of generic wave propagation quantities in inhomogeneously disordered materials. We demonstrate that wave statistics within samples of any dimension are independent of the detailed structure of a material and depend only on the net strengths of distributed scattering and reflection between the observation point and each of the boundaries.
Deep neural networks (DNNs) have achieved great success in a wide range of computer vision areas, but their application to mobile devices is limited by their high storage and computational cost. Much effort has been devoted to compressing DNNs. In this paper, we propose a simple yet effective method for deep network compression, named Cluster Regularized Quantization (CRQ), which can reduce the representation precision of a full-precision model to ternary values without significant accuracy drop. In particular, the proposed method aims at reducing the quantization error by introducing a cluster regularization term, which is imposed on the full-precision weights and encourages them to concentrate naturally around the target values. By explicitly regularizing the weights during the re-training stage, the full-precision model can achieve a smooth transition to the low-bit one. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of the proposed method.
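The cluster regularization term can be sketched as a penalty on each full-precision weight's squared distance to the nearest ternary target. The three centers {-delta, 0, +delta}, the squared-distance form, and the weighting `lam` are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def cluster_regularizer(weights, delta, lam=1e-4):
    """Illustrative cluster regularization: penalize each full-precision
    weight's squared distance to the nearest of the three ternary targets
    {-delta, 0, +delta}, so weights drift toward quantizable values
    during re-training. This is a sketch, not the paper's exact loss."""
    centers = np.array([-delta, 0.0, delta])
    d2 = (weights[..., None] - centers) ** 2   # squared distance to each center
    return lam * d2.min(axis=-1).sum()         # nearest-center penalty
```

Adding this term to the task loss pulls weights toward the ternary grid gradually, which is what makes the final hard quantization step a "smooth transition" rather than an abrupt rounding.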
In recent years, deep neural networks have achieved great success in the field of computer vision. However, it is still a big challenge to deploy these deep models on resource-constrained embedded devices such as mobile robots and smartphones. Network compression for such platforms is therefore a reasonable way to reduce memory consumption and computational complexity. In this paper, a novel channel pruning method based on a genetic algorithm is proposed to compress very deep Convolutional Neural Networks (CNNs). First, a pre-trained CNN model is pruned layer by layer according to the sensitivity of each layer. After that, the pruned model is fine-tuned within a knowledge distillation framework. These two steps significantly decrease model redundancy with little accuracy drop. Channel selection is a combinatorial optimization problem with an exponential solution space. To accelerate the selection process, the proposed method formulates it as a search problem, which can be solved efficiently by a genetic algorithm. Meanwhile, a two-step approximation fitness function is designed to further improve the efficiency of the genetic process. The proposed method has been verified on three benchmark datasets with two popular CNN models: VGGNet and ResNet. On the CIFAR-100 and ImageNet datasets, our approach outperforms several state-of-the-art methods. On the CIFAR-10 and SVHN datasets, the pruned VGGNet achieves better performance than the original model with 8× parameter compression and 3× FLOPs reduction.
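The genetic search over channel selections can be sketched as evolving binary masks (1 = keep a channel). The sketch below uses generic GA operators (elitism, one-point crossover, bit-flip mutation); the paper's actual operators and its two-step approximation fitness are stood in for by a caller-supplied `fitness_fn`, and all hyperparameters are illustrative assumptions.

```python
import random

def genetic_channel_search(n_channels, fitness_fn, pop_size=20,
                           generations=30, p_mut=0.05, seed=0):
    """Toy genetic search over binary channel masks (1 = keep channel).
    fitness_fn(mask) -> float stands in for the paper's two-step
    approximation fitness (e.g. proxy accuracy minus a FLOPs penalty).
    Generic GA sketch, not the paper's exact procedure."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_channels)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness_fn, reverse=True)
        elite = scored[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_channels)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness_fn)
```

Because each fitness evaluation only scores a mask (rather than re-training the network), an approximate fitness makes the exponential channel-selection space tractable to search.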
