82 - Guannan Lou, Yao Deng, Xi Zheng 2021
Autonomous driving shows great potential to reform modern transportation, and its safety is attracting much attention from the public. Autonomous driving systems generally include deep neural networks (DNNs) to gain better performance (e.g., accuracy on object detection and trajectory prediction). However, compared with traditional software systems, this new paradigm (i.e., program + DNNs) makes software testing more difficult. Recently, the software engineering community has spent significant effort developing new testing methods for autonomous driving systems. However, it is not clear to what extent those testing methods address the needs of industrial practitioners of autonomous driving. To fill this gap, in this paper we present the first comprehensive study to identify the current practices and needs of testing autonomous driving systems in industry. We conducted semi-structured interviews with developers from 10 autonomous driving companies and surveyed 100 developers who have worked on autonomous driving systems. Through thematic analysis of the interview and questionnaire data, we identified five urgent industrial needs for testing autonomous driving systems. We further analyzed the limitations of existing testing methods in addressing those needs and propose several future directions for software testing researchers.
Deep generative models of 3D shapes have received a great deal of research interest. Yet, almost all of them generate discrete shape representations, such as voxels, point clouds, and polygon meshes. We present the first 3D generative model for a drastically different shape representation: describing a shape as a sequence of computer-aided design (CAD) operations. Unlike meshes and point clouds, CAD models encode the user's creation process of a 3D shape and are widely used in numerous industrial and engineering design tasks. However, the sequential and irregular structure of CAD operations poses significant challenges for existing 3D generative models. Drawing an analogy between CAD operations and natural language, we propose a CAD generative network based on the Transformer. We demonstrate the performance of our model on both shape autoencoding and random shape generation. To train our network, we create a new CAD dataset consisting of 178,238 models and their CAD construction sequences. We have made this dataset publicly available to promote future research on this topic.
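To make the "CAD operations as language" analogy concrete, the sketch below shows one plausible way to serialize a CAD construction sequence into discrete tokens that a Transformer can consume. The command vocabulary, parameter count, and quantization scheme are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch: serializing CAD operations into tokens for a
# Transformer-based generative model. Command set and quantization
# are assumptions for illustration only.

from dataclasses import dataclass

# A tiny illustrative command vocabulary: sketch primitives plus an extrusion.
COMMANDS = ["<SOS>", "<EOS>", "LINE", "ARC", "CIRCLE", "EXTRUDE"]
CMD_TO_ID = {c: i for i, c in enumerate(COMMANDS)}
N_BINS = 256  # continuous parameters quantized into 8-bit bins

@dataclass
class CADOp:
    command: str          # e.g. "LINE"
    params: list          # continuous parameters normalized to [-1, 1]

def quantize(x: float) -> int:
    """Map a normalized parameter in [-1, 1] to a discrete bin."""
    x = max(-1.0, min(1.0, x))
    return int(round((x + 1.0) / 2.0 * (N_BINS - 1)))

def encode_sequence(ops, max_params=4):
    """Turn a CAD construction sequence into (command-id, param-bins)
    tokens, padding each op to a fixed parameter slot count so the
    Transformer sees a regular structure despite irregular operations."""
    tokens = [(CMD_TO_ID["<SOS>"], [0] * max_params)]
    for op in ops:
        bins = [quantize(p) for p in op.params]
        bins += [0] * (max_params - len(bins))  # pad unused slots
        tokens.append((CMD_TO_ID[op.command], bins))
    tokens.append((CMD_TO_ID["<EOS>"], [0] * max_params))
    return tokens

# Example: a square profile extruded into a box-like solid.
seq = [CADOp("LINE", [0.5, 0.0]), CADOp("LINE", [0.5, 0.5]),
       CADOp("LINE", [0.0, 0.5]), CADOp("LINE", [0.0, 0.0]),
       CADOp("EXTRUDE", [0.5])]
print(encode_sequence(seq))
```

Discretizing continuous parameters is what lets a sequence model treat shape generation like next-token prediction over a finite vocabulary.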
65 - Jiwei Guan, Xi Zheng, Chen Wang 2021
With recent advances in autonomous driving, Voice Control Systems have become increasingly adopted as human-vehicle interaction methods. This technology enables drivers to use voice commands to control the vehicle and will soon be available in Advanced Driver Assistance Systems (ADAS). Prior work has shown that Siri, Alexa, and Cortana are highly vulnerable to inaudible command attacks. Such attacks could extend to ADAS in real-world applications, and this inaudible command threat is difficult to detect due to microphone nonlinearities. In this paper, we aim to develop a more practical solution that uses camera views to defend against inaudible command attacks, since ADAS are capable of sensing their environment via multiple sensors. To this end, we propose a novel multimodal deep learning classification system to defend against inaudible command attacks. Our experimental results confirm the feasibility of the proposed defense methods, with the best classification accuracy reaching 89.2%. Code is available at https://github.com/ITSEG-MQ/Sensor-Fusion-Against-VoiceCommand-Attacks.
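The sketch below shows the general shape of a late-fusion multimodal classifier of the kind the abstract describes: audio features and camera-view features are embedded separately, concatenated, and classified. The architecture, feature dimensions, and fusion strategy are assumptions for illustration; the authors' repository holds the actual implementation.

```python
# Minimal PyTorch sketch of late-fusion audio+camera classification.
# Dimensions and layers are illustrative assumptions.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, image_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.image_branch = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # Fused features decide: legitimate command vs. inaudible attack.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio_feat, image_feat):
        a = self.audio_branch(audio_feat)   # e.g. spectrogram embeddings
        v = self.image_branch(image_feat)   # e.g. CNN features of the camera view
        return self.head(torch.cat([a, v], dim=-1))

model = FusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```

The intuition is that an inaudible command arriving while the camera shows no plausible speaker (or a contradictory scene) is evidence of an attack, which a fused representation can exploit while either modality alone cannot.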
Generative Adversarial Networks (GANs) are able to generate high-quality images, but it remains difficult to explicitly specify the semantics of synthesized images. In this work, we aim to better understand the semantic representation of GANs and thereby enable semantic control in the GAN generation process. Interestingly, we find that a well-trained GAN encodes image semantics in its internal feature maps in a surprisingly simple way: a linear transformation of the feature maps suffices to extract the generated image semantics. To verify this simplicity, we conduct extensive experiments on various GANs and datasets; and thanks to this simplicity, we are able to learn a semantic segmentation model for a trained GAN from a small number (e.g., 8) of labeled images. Last but not least, leveraging our findings, we propose two few-shot image editing approaches, namely Semantic-Conditional Sampling and Semantic Image Editing. Given a trained GAN and as few as eight semantic annotations, the user is able to generate diverse images subject to a user-provided semantic layout and to control the semantics of the synthesized images. We have made the code publicly available.
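The core finding, that a linear transformation of feature maps extracts semantics, amounts to a per-pixel linear probe. A 1x1 convolution over upsampled, concatenated generator features is exactly such a linear map. The sketch below illustrates this; the layer choices, shapes, and feature-extraction mechanics are assumptions, not the paper's code.

```python
# Sketch of a linear semantic probe over GAN feature maps.
# Shapes and the feature source are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_semantic_probe(feature_maps, n_classes):
    """A 1x1 convolution is a per-pixel linear transformation of the
    concatenated feature maps: the 'surprisingly simple' extractor."""
    c = sum(f.shape[1] for f in feature_maps)
    return nn.Conv2d(c, n_classes, kernel_size=1)

def predict_semantics(probe, feature_maps, out_size):
    # Upsample every layer's features to a common resolution, concatenate,
    # then apply the linear map and take the per-pixel argmax.
    feats = torch.cat(
        [F.interpolate(f, size=out_size, mode="bilinear", align_corners=False)
         for f in feature_maps], dim=1)
    return probe(feats).argmax(dim=1)

# Toy example: two fake "generator layers" for a batch of one image.
fmaps = [torch.randn(1, 64, 32, 32), torch.randn(1, 32, 64, 64)]
probe = linear_semantic_probe(fmaps, n_classes=8)
labels = predict_semantics(probe, fmaps, out_size=(128, 128))
print(labels.shape)  # torch.Size([1, 128, 128])
```

Because the probe has so few parameters (one weight row per class), it is plausible to fit it from a handful of labeled images, which is what makes the few-shot segmentation and editing applications possible.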
58 - Sangsu Lee, Xi Zheng, Jie Hua 2021
Pervasive computing applications commonly involve users' personal smartphones collecting data to influence application behavior. Applications are often backed by models that learn from the users' experiences to provide personalized and responsive behavior. While models are often pre-trained on massive datasets, federated learning has gained attention for its ability to train globally shared models on users' private data without requiring the users to share their data directly. However, federated learning requires devices to collaborate via a central server, under the assumption that all users desire to learn the same model. We define a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models that are personalized to their users' own experiences. Instead of learning in isolation, however, these models opportunistically incorporate the learned experiences of other devices they encounter. In this paper, we explore the feasibility and limits of such an approach, culminating in a framework that supports encounter-based pairwise collaborative learning. The use of our opportunistic encounter-based learning amplifies the performance of personalized learning while resisting overfitting to encountered data.
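A minimal way to picture encounter-based pairwise collaboration is parameter interpolation: when two devices meet, each blends a fraction of the peer's weights into its own model, staying personalized while still learning from the encounter. The blending rule below is an illustrative assumption, not the paper's actual aggregation strategy.

```python
# Hedged sketch of encounter-based pairwise collaboration in PyTorch.
# The interpolation rule is an assumption for illustration.

import torch
import torch.nn as nn

@torch.no_grad()
def incorporate_peer(local: nn.Module, peer: nn.Module, alpha: float = 0.2):
    """Move the local model a fraction `alpha` toward the encountered
    peer's weights; a small alpha keeps the result personalized and
    resists overfitting to any single encountered model."""
    for p_local, p_peer in zip(local.parameters(), peer.parameters()):
        p_local.mul_(1 - alpha).add_(alpha * p_peer)

# Two devices with identically shaped personal models.
device_a, device_b = nn.Linear(10, 2), nn.Linear(10, 2)

# On an encounter, each device opportunistically learns from the other;
# no central server is involved.
incorporate_peer(device_a, peer=device_b, alpha=0.2)
incorporate_peer(device_b, peer=device_a, alpha=0.2)
```

The contrast with standard federated learning is the absence of a central aggregator: collaboration happens pairwise, only between devices that physically encounter one another.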
In this paper, we propose a direct Eulerian generalized Riemann problem (GRP) scheme for a blood flow model in arteries. It is an extension of the Eulerian GRP scheme developed by Ben-Artzi et al. in J. Comput. Phys., 218 (2006). Using the Riemann invariants, we diagonalize the blood flow system into a weakly coupled system, which is used to resolve rarefaction waves. We also use the Rankine-Hugoniot condition to resolve the local GRP formulation. We pay special attention to the acoustic case as well as the sonic case. The extension to the two-dimensional case is carefully obtained using the dimensional splitting technique. Numerical tests show that the derived GRP scheme is second-order accurate.
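For context, the following is a commonly used form of the 1D blood flow system and its Riemann invariants, assuming the standard tube-law pressure closure; the paper's exact closure may differ.

```latex
% 1D blood flow in an artery: cross-sectional area A(x,t), velocity u(x,t),
% blood density \rho, tube law p(A) = K(\sqrt{A} - \sqrt{A_0}).
\begin{align*}
  &\partial_t A + \partial_x (Au) = 0, \\
  &\partial_t u + \partial_x\!\Big( \tfrac{u^2}{2} + \tfrac{p(A)}{\rho} \Big) = 0,
  \qquad p(A) = K\big(\sqrt{A} - \sqrt{A_0}\big).
\end{align*}
% With wave speed c = \sqrt{K\sqrt{A}/(2\rho)} (so that c^2 = (A/\rho)\,p'(A)),
% the Riemann invariants
%   W_{\pm} = u \pm 4c
% are constant along the characteristics dx/dt = u \pm c. Rewriting the
% system in (W_+, W_-) diagonalizes it into the weakly coupled form used
% to resolve rarefaction waves in the GRP scheme.
```

The "sonic case" the abstract mentions is the delicate situation where a characteristic speed u ± c vanishes at a cell interface, so the wave fan straddles the interface and the usual one-sided resolution must be adapted.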
Supervised learning (SL) has achieved remarkable success in numerous artificial intelligence applications. In the current literature, by referring to the properties of the ground-truth labels prepared for a training data set, SL is roughly categorized into fully supervised learning (FSL) and weakly supervised learning (WSL). However, solutions for various FSL tasks have shown that the given ground-truth labels are not always learnable, and that the transformation from the given ground-truth labels to learnable targets can significantly affect the performance of the final FSL solution. Without considering the properties of this target transformation, the coarseness of the FSL category conceals details that can be critical to building optimal solutions for specific FSL tasks. Thus, it is desirable to reveal these details. This article attempts to achieve this goal by expanding the categorization of FSL and investigating the subtype that plays the central role in FSL. Taking the properties of the target transformation into consideration, we first categorize FSL into three narrower subtypes. We then focus on the subtype of moderately supervised learning (MSL). MSL concerns the situation where the given ground-truth labels are ideal but, owing to the simplicity of their annotation, careful designs are required to transform them into learnable targets. From the perspectives of definition and framework, we comprehensively illustrate MSL to reveal what details are concealed by the coarseness of the FSL category. Finally, our discussion of the revealed details suggests that MSL should be given more attention.
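As a concrete (and entirely hypothetical) illustration of a "target transformation": point annotations of object centers are cheap to label but not directly learnable as dense prediction targets, so they are commonly rendered into Gaussian heatmaps for training. This example is ours, not the article's.

```python
# Hypothetical example of transforming simple ground-truth labels
# (clicked center points) into learnable dense targets (heatmaps).

import numpy as np

def points_to_heatmap(points, shape, sigma=3.0):
    """Render center-point labels as a Gaussian heatmap target."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for (py, px) in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the strongest response
    return heatmap

# The raw labels (two clicked centers) become a dense, learnable target;
# the choice of sigma is exactly the kind of design detail MSL studies.
target = points_to_heatmap([(12, 20), (40, 45)], shape=(64, 64))
print(target.shape, target.max())  # (64, 64) 1.0
```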
150 - Shi Qiu, Changxi Zheng, Qi Zhou 2020
Understanding the structure and chemical composition at the liquid-nanoparticle (NP) interface is crucial for a wide range of physical, chemical, and biological processes. In this study, direct imaging of the liquid-NP interface by atom probe tomography (APT) is reported for the first time, revealing the distributions and interactions of key atoms and molecules in this critical domain. The APT specimen is prepared by controlled graphene encapsulation of the nanoparticle-containing solution on a metal tip, with an end radius on the order of 50 nm to allow field ionization and evaporation. Using Au nanoparticles (AuNPs) in suspension as an example, analysis of the mass spectrum and three-dimensional (3D) chemical maps from APT provides a detailed image of the water-gold interface with near-atomic resolution. At the water-gold interface, the formation of an electrical double layer (EDL) rich in water (H2O) molecules is observed, resulting from the charge induced by the binding between the trisodium-citrate layer and the AuNP. In the bulk water region, the density of the reconstructed H2O is consistent, reflecting densely packed H2O molecules after graphene encapsulation. This study is the first demonstration of direct imaging of a liquid-NP interface using APT, with results providing an atom-by-atom 3D dissection of the liquid-NP interface.
As 5G and the Internet of Things (IoT) are deeply integrated into vertical industries such as autonomous driving and industrial robotics, timely status updates are crucial for remote monitoring and control. In this regard, Age of Information (AoI) has been proposed to measure the freshness of status updates. However, AoI is merely a metric that changes linearly with time, irrespective of context. We propose a context-based metric, named Urgency of Information (UoI), to measure the nonlinear time-varying importance and the non-uniform context-dependence of status information. This paper first establishes a theoretical framework for UoI characterization and then provides UoI-optimal status updating and user scheduling schemes in both single-terminal and multi-terminal cases. Specifically, an update-index-based scheme is proposed for a single-terminal system, where the terminal updates and transmits whenever its update index exceeds a threshold. For the multi-terminal case, the UoI of the proposed scheduling scheme is proven to be upper-bounded, and a decentralized implementation via Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is also provided. In simulations, the proposed updating and scheduling schemes notably outperform existing ones such as round-robin and AoI-optimal schemes in terms of UoI, error-bound violation, and control system stability.
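One way to formalize the contrast between AoI and UoI is sketched below; this is our illustrative formalization, and the paper's exact definition may differ.

```latex
% u(t): generation time of the freshest received update;
% \omega(t): time-varying, context-dependent importance weight;
% \phi: nonlinear staleness cost.
\begin{align*}
  \Delta(t) &= t - u(t)
    && \text{(AoI: grows linearly, context-free)} \\
  U(t) &= \omega(t)\,\phi\big(\Delta(t)\big)
    && \text{(UoI: nonlinear, context-aware)}
\end{align*}
% The same staleness \Delta(t) is more urgent in critical contexts,
% e.g. \omega(t) is large while an autonomous vehicle approaches an
% intersection and small while it idles.
```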
Timely status updating is crucial for future applications that involve remote monitoring and control, such as autonomous driving and the Industrial Internet of Things (IIoT). Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it is incapable of capturing critical systematic context information that indicates the time-varying importance of status information and the dynamic evolution of status. In this paper, we propose a context-based metric, namely the Urgency of Information (UoI), to evaluate the timeliness of status updates. Compared to AoI, the new metric incorporates both time-varying context information and dynamic status evolution, which enables the analysis of context-based adaptive status update schemes as well as more effective remote monitoring and control. We investigate the minimization of the average UoI for a status update terminal with an updating frequency constraint and propose an update-index-based adaptive scheme. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity.
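The sketch below simulates the flavor of such an update-index-based scheme: the terminal transmits only when a context-weighted urgency index crosses a threshold, and the threshold implicitly controls the average updating frequency. The index form and threshold are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of threshold-based status updating under a
# context-weighted urgency index.

import random

def simulate(T=10_000, threshold=4.0, seed=0):
    rng = random.Random(seed)
    age, total_uoi, updates = 0, 0.0, 0
    for _ in range(T):
        omega = rng.choice([0.5, 1.0, 5.0])  # time-varying context weight
        uoi = omega * age ** 2               # nonlinear, context-weighted cost
        total_uoi += uoi
        if uoi > threshold:                  # update index crosses threshold
            age, updates = 0, updates + 1    # transmit: status becomes fresh
        else:
            age += 1
    return total_uoi / T, updates / T

avg_uoi, freq = simulate()
print(f"average UoI {avg_uoi:.2f}, update frequency {freq:.3f}")
```

Sweeping the threshold trades average UoI against update frequency, which is how a frequency constraint can be met while staying near-optimal in urgency.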