This paper investigates integrated sensing and communication (ISAC) in vehicle-to-infrastructure (V2I) networks. Realizing ISAC requires an effective beamforming design, which, however, depends heavily on accurate channel tracking that incurs a large training overhead and computational complexity. Motivated by this, we adopt a deep learning (DL) approach to implicitly learn the features of historical channels and directly predict the beamforming matrix for the next time slot so as to maximize the average achievable sum-rate of an ISAC system. The proposed method bypasses the explicit channel-tracking process and significantly reduces the signaling overhead. To this end, a general sum-rate maximization problem with Cramér-Rao lower bound (CRLB)-based sensing constraints is first formulated for the considered ISAC system. Then, by exploiting the penalty method, a versatile unsupervised DL-based predictive beamforming design framework is developed to address the formulated design problem. As a realization of the developed framework, a historical-channels-based convolutional long short-term memory (LSTM) network (HCL-Net) is devised for predictive beamforming in the ISAC-based V2I network. Specifically, convolution and LSTM modules are successively adopted in the proposed HCL-Net to exploit the spatial and temporal dependencies of the communication channels and further improve the learning performance. Finally, simulation results show that the proposed predictive method not only guarantees the required sensing performance, but also achieves a sum-rate that approaches the upper bound obtained by a genie-aided scheme with perfect instantaneous channel state information.
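To make the architecture concrete, the sketch below shows a convolution-plus-LSTM predictor of the kind the abstract describes, written in PyTorch. The layer sizes, tensor layout, and the HCLNetSketch name are illustrative assumptions, not the authors' HCL-Net implementation.

```python
# Minimal sketch (not the authors' code): a Conv + LSTM predictor that maps a
# window of historical channel matrices to a beamforming matrix for the next slot.
import torch
import torch.nn as nn

class HCLNetSketch(nn.Module):
    def __init__(self, n_antennas=16, n_users=4, hidden=128):
        super().__init__()
        # Treat the real/imag parts of each (n_users x n_antennas) channel matrix
        # as a 2-channel "image" and extract spatial features per time slot.
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64 * n_users * n_antennas, hidden, batch_first=True)
        # Output the real/imag parts of the (n_antennas x n_users) beamformer.
        self.head = nn.Linear(hidden, 2 * n_antennas * n_users)
        self.n_antennas, self.n_users = n_antennas, n_users

    def forward(self, h_hist):
        # h_hist: (batch, T, 2, n_users, n_antennas) -- historical channels
        b, t = h_hist.shape[:2]
        feats = self.conv(h_hist.flatten(0, 1)).flatten(1)   # per-slot spatial features
        out, _ = self.lstm(feats.view(b, t, -1))              # temporal dependencies
        w = self.head(out[:, -1])                              # next-slot beamformer
        w = w.view(b, 2, self.n_antennas, self.n_users)
        return torch.complex(w[:, 0], w[:, 1])

# Unsupervised training would then maximize the predicted sum-rate with a penalty
# term for the CRLB sensing constraint, e.g. (hypothetical helper names):
#   loss = -sum_rate(W, H_next) + mu * crlb_penalty(W)
```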
97 - Mingchang Liu, Hao Wu, 2021
We consider the uniform spanning tree (UST) in topological polygons with $2N$ marked points on the boundary with alternating boundary conditions. In [LPW21], the authors derived the scaling limit of the Peano curve in the UST; the limits are variants of SLE$_8$. In this article, we derive the scaling limit of the loop-erased random walk (LERW) branch in the UST; the limits are variants of SLE$_2$. This conclusion generalizes [HLW20, Theorem 1.6], where the authors derive the scaling limit of the LERW branch of the UST when $N=2$. When $N=2$, the limiting law is SLE$_2(-1,-1;-1,-1)$. However, the limiting law is no longer in the family of SLE$_2(\rho)$ processes as soon as $N\ge 3$.
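As a reminder of the notation (the standard chordal definition, not a statement taken from [LPW21] or this article), an SLE$_\kappa(\rho_1,\dots,\rho_n)$ process with force points $V_0^1,\dots,V_0^n$ is the Loewner chain whose driving function solves

```latex
% Standard chordal SLE_kappa(rho) definition (notation reminder only):
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad
dW_t = \sqrt{\kappa}\, dB_t + \sum_{i=1}^{n} \frac{\rho_i}{W_t - V_t^i}\, dt, \qquad
dV_t^i = \frac{2}{V_t^i - W_t}\, dt .
```

In this notation the $N=2$ result above corresponds to $\kappa=2$ with four force points, each of weight $-1$.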
126 - Chang Liu, Han Yu, Boyang Li, 2021
Noisy labels are commonly found in real-world data, and they cause performance degradation of deep neural networks. Cleaning data manually is labour-intensive and time-consuming. Previous research has mostly focused on enhancing classification models against noisy labels, while the robustness of deep metric learning (DML) against noisy labels remains less well explored. In this paper, we bridge this important gap by proposing a Probabilistic Ranking-based Instance Selection with Memory (PRISM) approach for DML. PRISM calculates the probability of a label being clean and filters out potentially noisy samples. Specifically, we propose three methods to calculate this probability: 1) the Average Similarity Method (AvgSim), which calculates the average similarity between potentially noisy data and clean data; 2) the Proxy Similarity Method (ProxySim), which replaces the centers maintained by AvgSim with proxies trained by a proxy-based method; and 3) von Mises-Fisher Distribution Similarity (vMF-Sim), which estimates a von Mises-Fisher distribution for each data class. With such a design, the proposed approach can handle challenging DML situations in which the majority of the samples are noisy. Extensive experiments on both synthetic and real-world noisy datasets show that the proposed approach achieves up to 8.37% higher Precision@1 compared with the best-performing state-of-the-art baseline approaches, within reasonable training time.
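The toy sketch below illustrates the AvgSim idea of scoring each sample by its average similarity to the examples currently held as clean for its labelled class and keeping only the highest-scoring fraction; the memory layout, thresholding rule, and keep_ratio parameter are assumptions for illustration, not the paper's PRISM implementation.

```python
# Toy sketch of the AvgSim idea (illustrative, not the paper's implementation):
# keep a sample if its embedding is, on average, similar to the clean examples
# memorised for its labelled class.
import numpy as np

def avgsim_filter(embeddings, labels, clean_memory, keep_ratio=0.8):
    """embeddings: (N, d), L2-normalised; clean_memory: dict label -> (M, d) clean features."""
    scores = np.array([
        float(embeddings[i] @ clean_memory[labels[i]].mean(axis=0))  # avg similarity to clean set
        for i in range(len(labels))
    ])
    # Rank-based cutoff: treat the top `keep_ratio` fraction of the batch as clean.
    threshold = np.quantile(scores, 1.0 - keep_ratio)
    return scores >= threshold          # boolean mask of samples treated as clean
```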
The hybrid method combining particle-in-cell and magnetohydrodynamics can be used to study the interaction between energetic particles and global plasma modes. In this paper we introduce the M3D-C1-K code, which is developed based on the M3D-C1 finite-element code solving the magnetohydrodynamic equations, with a newly developed kinetic module simulating energetic particles. Particle pushing is done with a new algorithm that applies the Boris pusher to classical Pauli particles to simulate the slow manifold of particle orbits with long-term accuracy and fidelity. The particle pushing can be significantly accelerated using GPUs. The moments of the particles are calculated using the $\delta f$ method and are coupled into the magnetohydrodynamic simulation through pressure- or current-coupling schemes. Several linear simulations of magnetohydrodynamic modes driven by energetic particles have been conducted using M3D-C1-K, including fishbone modes, toroidal Alfvén eigenmodes, and reversed-shear Alfvén eigenmodes. Good agreement with previous results from other eigenvalue, kinetic, and hybrid codes has been achieved.
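For reference, a minimal non-relativistic Boris push for a single particle is sketched below to show the kind of pusher the kinetic module builds on; the Pauli-particle, slow-manifold variant used in M3D-C1-K is more involved, and the field values here are simply given inputs.

```python
# Minimal non-relativistic Boris push for one particle (illustration only;
# M3D-C1-K applies a Boris-type update to Pauli particles on the slow manifold).
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """Advance position x and velocity v (3-vectors) by one step dt in fields E, B."""
    v_minus = v + 0.5 * dt * q_over_m * E          # first half electric kick
    t = 0.5 * dt * q_over_m * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
    v_new = v_plus + 0.5 * dt * q_over_m * E       # second half electric kick
    return x + dt * v_new, v_new
```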
203 - Chang Liu, Xianqi Song, Quan Li, 2021
Semiconductivity and superconductivity are remarkable quantum phenomena with immense impact on science and technology, and materials that can be tuned, usually by pressure or doping, to host both types of quantum states are of great fundamental and practical significance. Here we show by first-principles calculations a distinct route for tuning semiconductors into superconductors via diverse, large-range elastic shear strains, as demonstrated in the exemplary cases of silicon and silicon carbide. Analysis of the strain-driven evolution of the bonding structure, electronic states, lattice vibrations, and electron-phonon coupling reveals robust, pervasive deformation-induced mechanisms favorable for modulating semiconducting and superconducting states under versatile material conditions. This finding opens vast untapped structural configurations for the rational exploration of the tunable emergence and transition of these intricate quantum phenomena in a broad range of materials.
462 - Chang Liu, Haoyue Tang, Tao Qin, 2021
We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models $p(x|z)$ and $q(z|x)$ that form a cycle. This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for data representation, but rely on a usually uninformative prior distribution $p(z)$ to define the joint distribution, which can cause problems such as posterior collapse and manifold mismatch. To explore the possibility of modeling a joint distribution using only $p(x|z)$ and $q(z|x)$, we study their compatibility and determinacy, corresponding to the existence and uniqueness of a joint distribution whose conditional distributions coincide with them. We develop a general theory of novel and operable equivalence criteria for compatibility, and sufficient conditions for determinacy. Based on this theory, we propose the CyGen framework for cyclic-conditional generative modeling, including methods to enforce compatibility and to use the determined distribution to fit and generate data. With the prior constraint removed, CyGen better fits the data and captures more representative features, as supported by experiments showing better generation and downstream classification performance.
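As a heavily simplified illustration of working with only the two conditionals (and no prior), the toy below trains a decoder $p(x|z)$ and an encoder $q(z|x)$ jointly and generates by alternating them in a Gibbs-style chain. It omits CyGen's compatibility enforcement entirely, and all network sizes and noise scales are arbitrary assumptions.

```python
# Heavily simplified toy (not the paper's objective): fit p(x|z) and q(z|x)
# without any prior p(z), then generate by alternating the two conditionals.
import torch
import torch.nn as nn

dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # mean of p(x|z)
enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 8))   # mean of q(z|x)
opt = torch.optim.Adam(list(dec.parameters()) + list(enc.parameters()), lr=1e-3)

def fit_step(x):
    z = enc(x) + 0.1 * torch.randn(x.size(0), 8)   # sample z ~ q(z|x)
    loss = ((dec(z) - x) ** 2).mean()              # Gaussian log-likelihood of p(x|z)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def generate(n_steps=50):
    x = torch.randn(1, 2)
    for _ in range(n_steps):                       # Gibbs-style alternation of the conditionals
        z = enc(x) + 0.1 * torch.randn(1, 8)
        x = dec(z) + 0.1 * torch.randn(1, 2)
    return x
```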
Large machine learning models achieve unprecedented performance on various tasks and have evolved into the go-to technique. However, deploying these compute- and memory-hungry models in resource-constrained environments poses new challenges. In this work, we propose the mathematically provable Representer Sketch, a concise set of count arrays that can approximate the inference procedure with simple hashing computations and aggregations. Representer Sketch builds upon the popular representer theorem from the kernel literature, hence the name, providing a generic and fundamental alternative for efficient inference that goes beyond popular approaches such as quantization, iterative pruning, and knowledge distillation. A neural network function is transformed into its weighted kernel density representation, which can be estimated very efficiently with our sketching algorithm. Empirically, we show that Representer Sketch achieves up to a 114x reduction in storage requirements and a 59x reduction in computational complexity without any drop in accuracy.
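The sketch below shows a count-array kernel-density estimator in the spirit of the abstract (a RACE-style construction with signed-random-projection hashes); the hashing scheme, kernel, and class name are assumptions for illustration and do not reproduce the paper's Representer Sketch construction.

```python
# Rough sketch of a count-array KDE estimator (illustrative, RACE-style):
# ingest weighted samples into small count arrays via LSH, then answer
# queries of the form  sum_i w_i k(x, x_i)  by averaging the hit counters.
import numpy as np

class CountArrayKDE:
    def __init__(self, dim, n_arrays=50, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(size=(n_arrays, n_bits, dim))     # LSH projections
        self.counts = np.zeros((n_arrays, 2 ** n_bits))

    def _bucket(self, x):
        bits = (np.einsum('rbd,d->rb', self.proj, x) > 0).astype(int)
        return bits @ (1 << np.arange(bits.shape[1]))            # per-array bucket id

    def add(self, x, weight=1.0):                                 # ingest one sample x_i
        self.counts[np.arange(len(self.counts)), self._bucket(x)] += weight

    def query(self, x):                                           # ~ weighted kernel density at x
        return self.counts[np.arange(len(self.counts)), self._bucket(x)].mean()
```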
241 - Chang Liu, Xiaolin Wu, 2021
Nighttime photographers are often troubled by light pollution from unwanted artificial lights. Artificial light, after being scattered by aerosols in the atmosphere, can inundate the starlight and degrade the quality of nighttime images by reducing contrast and dynamic range and causing haze. In this paper we develop a physically based light pollution reduction (LPR) algorithm that can substantially alleviate these degradations of perceptual quality and restore the pristine state of the night sky. The key to the success of the proposed LPR algorithm is an inverse method for estimating the spatial radiance distribution and spectral signature of ground artificial lights. Extensive experiments are carried out to evaluate the efficacy and limitations of the LPR algorithm.
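As a rough illustration of the inverse-problem flavour of such an approach, the toy below assumes the observed image is the clean scene plus a Gaussian-blurred glow from an unknown artificial-light radiance map, recovers the radiance by regularized projected gradient descent, and subtracts its scattered contribution. The forward model, parameters, and function name are invented for illustration and are not the paper's LPR algorithm.

```python
# Illustrative toy of an "inverse" glow-removal step (not the paper's model).
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_glow(observed, sigma=25.0, lam=0.1, n_iter=200, step=0.5):
    """Minimize ||blur(r) - observed||^2 + lam*||r||^2 over nonnegative radiance r."""
    radiance = np.zeros_like(observed)
    for _ in range(n_iter):                          # projected gradient descent
        glow = gaussian_filter(radiance, sigma)      # forward scattering model
        grad = gaussian_filter(glow - observed, sigma) + lam * radiance
        radiance = np.maximum(radiance - step * grad, 0.0)
    return observed - gaussian_filter(radiance, sigma)
```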
With the rapid growth in mobile computing, massive amounts of data and computing resources are now located at the edge. To this end, federated learning (FL) is becoming a widely adopted distributed machine learning (ML) paradigm, which aims to harness this expanding, skewed data locally in order to develop rich and informative models. In centralized FL, a collection of devices collaboratively solves an ML task under the coordination of a central server. However, existing FL frameworks make an over-simplistic assumption about network connectivity and ignore the communication bandwidth of the different links in the network. In this paper, we present and study a novel FL algorithm in which devices mostly collaborate with other devices in a pairwise manner. Our nonparametric approach is able to exploit the network topology to reduce communication bottlenecks. We evaluate our approach on various FL benchmarks and demonstrate that our method achieves 10x better communication efficiency and around an 8% increase in accuracy compared to the centralized approach.
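A minimal sketch of pairwise (gossip-style) collaboration between devices is given below; the neighbour selection, local update rule, and equal averaging weights are placeholder assumptions rather than the paper's exact algorithm.

```python
# Minimal sketch of pairwise model averaging between neighbouring devices
# (illustrative only; scheduling and weighting are placeholders).
import random
import torch

def local_sgd(model, data_loader, epochs=1, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()

def pairwise_round(models, neighbours, loaders):
    for i, model in enumerate(models):              # local training on each device
        local_sgd(model, loaders[i])
    for i, model in enumerate(models):              # each device averages with one random peer
        j = random.choice(neighbours[i])
        with torch.no_grad():
            for p_i, p_j in zip(model.parameters(), models[j].parameters()):
                avg = 0.5 * (p_i + p_j)
                p_i.copy_(avg); p_j.copy_(avg)
```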
As an in situ combustion diagnostic tool, Tunable Diode Laser Absorption Spectroscopy (TDLAS) tomography has been widely used for imaging two-dimensional temperature distributions in reactive flows. Compared with computational tomographic algorithms, Convolutional Neural Networks (CNNs) have proven to be more robust and accurate for image reconstruction, particularly in the case of limited laser-beam access to the Region of Interest (RoI). In practice, the flame in the RoI, which needs to be reconstructed with good spatial resolution, is commonly surrounded by a low-temperature background. Although the background is not of high interest, spectroscopic absorption still occurs there due to heat dissipation and gas convection. Therefore, we propose a Pseudo-Inversed CNN (PI-CNN) for hierarchical temperature imaging that (a) uses the training and learning resources efficiently for temperature imaging in the RoI with good spatial resolution, and (b) reconstructs the less spatially resolved background temperature while adequately addressing the integrity of the spectroscopic absorption model. In comparison with a traditional CNN, the newly introduced pseudo-inversion of the RoI sensitivity matrix is more effective at revealing the inherent correlation between the projection data and the RoI to be reconstructed, thus prioritising temperature imaging in the RoI with high accuracy and high computational efficiency. The proposed algorithm was validated by both numerical simulation and a lab-scale experiment, showing good agreement between the phantoms and the high-fidelity reconstructions.
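The sketch below illustrates the pseudo-inversion step suggested by the abstract: back-project the projection data through the Moore-Penrose pseudo-inverse of the RoI sensitivity matrix and pass the coarse estimate to a CNN for refinement. The matrix shapes, square-grid assumption, and refinement network are illustrative placeholders, not the PI-CNN architecture.

```python
# Sketch of a pseudo-inverse back-projection feeding a CNN (illustrative only).
import numpy as np
import torch
import torch.nn as nn

def roi_backprojection(A_roi, projections):
    """A_roi: (n_beams, n_pixels) sensitivity matrix; projections: (n_beams,).
    Assumes the RoI is a square n x n pixel grid."""
    A_pinv = np.linalg.pinv(A_roi)                 # precompute once per beam layout
    img = A_pinv @ projections                     # coarse RoI estimate
    n = int(np.sqrt(img.size))
    return torch.tensor(img, dtype=torch.float32).reshape(1, 1, n, n)

refine_cnn = nn.Sequential(                         # placeholder refinement network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
# temperature_map = refine_cnn(roi_backprojection(A_roi, projections))
```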