The event camera is an emerging imaging sensor for capturing the dynamics of moving objects as events, which motivates our work on estimating 3D human pose and shape from event signals. Events, on the other hand, come with their own challenges: rather than capturing static body postures, event signals are best at capturing local motions. This leads us to propose a two-stage deep learning approach, called EventHPE. The first stage, FlowNet, is trained by unsupervised learning to infer optical flow from events. Both events and optical flow are closely related to human body dynamics, and are fed as input to ShapeNet in the second stage to estimate 3D human shapes. To mitigate the discrepancy between image-based flow (optical flow) and shape-based flow (vertex movement of the human body shape), a novel flow coherence loss is introduced by exploiting the fact that both flows originate from the same human motion. We also curate an in-house event-based 3D human dataset with 3D pose and shape annotations, which is by far the largest of its kind to our knowledge. Empirical evaluations on the DHP19 dataset and our in-house dataset demonstrate the effectiveness of our approach.
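The abstract does not spell out the flow coherence loss, but its spirit admits a short sketch: sample the image-based optical flow at the projected mesh vertices and penalize disagreement with the shape-based vertex flow, since both should describe the same motion. Below is a minimal PyTorch sketch under assumed conventions (vertices in normalized grid_sample coordinates); the function name and tensor shapes are illustrative, not the authors' implementation.

import torch.nn.functional as F

def flow_coherence_loss(optical_flow, verts_2d_t0, verts_2d_t1):
    # optical_flow: (B, 2, H, W) dense flow predicted by FlowNet
    # verts_2d_t0/t1: (B, V, 2) projected mesh vertices at frames t and t+1,
    #                 in normalized [-1, 1] image coordinates
    shape_flow = verts_2d_t1 - verts_2d_t0                    # shape-based flow, (B, V, 2)
    grid = verts_2d_t0.unsqueeze(2)                           # (B, V, 1, 2)
    image_flow = F.grid_sample(optical_flow, grid, align_corners=True)
    image_flow = image_flow.squeeze(-1).permute(0, 2, 1)      # (B, V, 2)
    # Penalize directional disagreement between the two flows.
    return (1.0 - F.cosine_similarity(image_flow, shape_flow, dim=-1)).mean()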
93 - Ruihan Wu, Chuan Guo, Yi Su (2021)
Machine learning models often encounter distribution shifts when deployed in the real world. In this paper, we focus on adaptation to label distribution shift in the online setting, where the test-time label distribution is continually changing and the model must dynamically adapt to it without observing the true labels. Leveraging a novel analysis, we show that the lack of true labels does not hinder estimation of the expected test loss, which enables the reduction of online label shift adaptation to conventional online learning. Informed by this observation, we propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD) and derive their regret bounds. We empirically verify our findings under both simulated and real-world label distribution shifts and show that OGD is particularly effective and robust across a variety of challenging label shift scenarios.
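The reduction to online learning rests on estimating the test loss without labels; one standard way to do this (an assumption here, since the abstract gives no details) is confusion-matrix inversion in the style of black-box shift estimation, after which OGD runs on the estimated losses. A minimal NumPy sketch; bbse_estimate, ogd_step, and the crude simplex projection are illustrative names and choices, not the paper's exact algorithm.

import numpy as np

def bbse_estimate(pred_probs, confusion_matrix):
    # confusion_matrix[i, j] = P(predict i | true label j), from holdout data.
    # The unlabeled test label marginal q solves C q = mu, where mu is the
    # average predicted label distribution over the test stream.
    mu = pred_probs.mean(axis=0)
    q = np.clip(np.linalg.solve(confusion_matrix, mu), 1e-12, None)
    return q / q.sum()

def ogd_step(weights, grad, lr=0.1):
    # One Online Gradient Descent step on the class-reweighting vector,
    # followed by a simple projection back onto the probability simplex.
    w = np.clip(weights - lr * grad, 1e-8, None)
    return w / w.sum()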
Blockchain creates a secure environment on top of strict cryptographic assumptions and rigorous security proofs. It permits on-chain interactions to achieve trustworthy properties such as traceability, transparency, and accountability. However, current blockchain trustworthiness is confined to on-chain activities, creating a trust gap with the physical, off-chain environment. This is due to the lack of a scheme that can truthfully reflect the physical world in a real-time and consistent manner. Such an absence hinders further real-world blockchain applications, especially security-sensitive ones. In this paper, we propose a scheme to extend blockchain trust from on-chain to off-chain, taking trustworthy vaccine transportation as an example. Our scheme consists of 1) a Trusted Execution Environment (TEE)-enabled trusted environment monitoring system, built with the Arm Cortex-M33 microcontroller, that continuously senses the inside of a vaccine box through trusted sensors and generates anti-forgery data; and 2) a consistency protocol to upload the environment status data from the TEE system to the blockchain in a truthful, real-time consistent, continuous, and fault-tolerant fashion. Our security analysis indicates that no adversary can tamper with the vaccine in any way without being detected. We carry out an experiment recording the internal status of a vaccine shipping box during transportation; the results indicate that the proposed system incurs an average latency of 84 ms for local sensing and processing, followed by an average latency of 130 ms to have the sensed data transmitted to and available on the blockchain.
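As one concrete illustration of what anti-forgery sensing data could look like, the sketch below binds each reading to a monotonic counter and a MAC keyed inside the TEE, so dropped, reordered, or forged readings become detectable once the records reach the chain. The fields and keying scheme are assumptions for illustration, not the paper's wire format.

import hashlib, hmac, json, time

def sign_reading(secret_key: bytes, counter: int, temp_c: float) -> dict:
    # counter is monotonic inside the TEE: a gap or repeat reveals tampering.
    payload = {"counter": counter, "temp_c": temp_c, "ts": time.time()}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["mac"] = hmac.new(secret_key, body, hashlib.sha256).hexdigest()
    return payload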
Federated learning has emerged as a popular paradigm for collaboratively training a model from data distributed among a set of clients. This learning setting presents, among others, two unique challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model. We propose a two-pronged solution that aims to address both challenges under a single framework. First, we propose to create secure enclaves using a trusted execution environment (TEE) within the server. Each client can then encrypt their gradients and send them to verifiable enclaves. The gradients are decrypted within the enclave without the fear of privacy breaches. However, robustness check computations in a TEE are computationally prohibitive. Hence, in the second step, we perform a novel gradient encoding that enables TEEs to encode the gradients and then offload Byzantine check computations to accelerators such as GPUs. Our proposed approach provides theoretical bounds on information leakage and offers a significant speed-up over the baseline in empirical evaluation.
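One way such an encoding can work (an assumption for illustration, not necessarily the paper's construction) is a secret distance-preserving transform: the TEE multiplies every client gradient by a random orthogonal matrix, which hides gradient directions yet preserves all pairwise Euclidean distances, so distance-based Byzantine checks such as Krum-style scoring run unchanged on an untrusted GPU.

import numpy as np

def encode_gradients(grads, seed):
    # grads: (n_clients, d) raw client gradients, visible only inside the TEE.
    rng = np.random.default_rng(seed)                 # seed never leaves the TEE
    d = grads.shape[1]
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix
    # ||(x - y) @ q|| == ||x - y||, so pairwise distances are preserved.
    return grads @ q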
We propose the first general-purpose gradient-based attack against transformer models. Instead of searching for a single adversarial example, we search for a distribution of adversarial examples parameterized by a continuous-valued matrix, hence enabling gradient-based optimization. We empirically demonstrate that our white-box attack attains state-of-the-art attack performance on a variety of natural language tasks. Furthermore, we show that a powerful black-box transfer attack, enabled by sampling from the adversarial distribution, matches or exceeds existing methods, while only requiring hard-label outputs.
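The continuous parameterization admits a compact sketch: treat a seq_len-by-vocab matrix as logits over token sequences and draw differentiable samples with Gumbel-softmax, so the attack loss backpropagates into the matrix. The code below is a generic instance of that idea under stated assumptions (sizes, temperature); the paper's full objective, e.g. fluency terms, is omitted.

import torch
import torch.nn.functional as F

# theta parameterizes a distribution over adversarial token sequences.
theta = torch.zeros(8, 50257, requires_grad=True)  # (seq_len, vocab); sizes illustrative

def sample_adversarial_tokens(theta, tau=1.0):
    # Gumbel-softmax yields near-one-hot yet differentiable samples, so a
    # misclassification loss on the model output can be optimized w.r.t. theta.
    return F.gumbel_softmax(theta, tau=tau, hard=False)

# Typical usage: soft = sample_adversarial_tokens(theta); feed
# soft @ embedding_matrix into the transformer and ascend the attack loss.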
Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc. Models in these systems are often developed and trained independently, which raises an obvious concern: Can improving a machine-learning model make the overall system worse? We answer this question affirmatively by showing that improving a model can deteriorate the performance of downstream models, even after those downstream models are retrained. Such self-defeating improvements are the result of entanglement between the models in the system. We perform an error decomposition of systems with multiple machine-learning models, which sheds light on the types of errors that can lead to self-defeating improvements. We also present the results of experiments which show that self-defeating improvements emerge in a realistic stereo-based detection system for cars and pedestrians.
Machine-learning models contain information about the data they were trained on. This information leaks either through the model itself or through predictions made by the model. Consequently, when the training data contains sensitive attributes, assessing the amount of information leakage is paramount. We propose a method to quantify this leakage using the Fisher information of the model about the data. Unlike the worst-case a priori guarantees of differential privacy, Fisher information loss measures leakage with respect to specific examples, attributes, or sub-populations within the dataset. We motivate Fisher information loss through the Cramér-Rao bound and delineate the implied threat model. We provide efficient methods to compute Fisher information loss for output-perturbed generalized linear models. Finally, we empirically validate Fisher information loss as a useful measure of information leakage.
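The Cramér-Rao motivation fits in one line (generic notation, not necessarily the paper's): for any unbiased estimator $\hat{x}$ of a sensitive attribute $x$ built from the released model output $\mathcal{M}(D)$,

\mathrm{Var}(\hat{x}) \;\ge\; \mathcal{I}(x)^{-1},
\qquad
\mathcal{I}(x) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial x} \log p(\mathcal{M}(D) \mid x)\right)^{\!2}\right],

so larger Fisher information means a sharper possible reconstruction of $x$, which motivates using $\mathcal{I}(x)$ as a per-example leakage measure.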
109 - Ruihan Wu, Chuan Guo, Felix Wu (2021)
Most computer science conferences rely on paper bidding to assign reviewers to papers. Although paper bidding enables high-quality assignments in an era of unprecedented submission numbers, it also opens the door for dishonest reviewers to adversarially influence paper reviewing assignments. Anecdotal evidence suggests that some reviewers bid on papers by friends or colluding authors, even though these papers are outside their area of expertise, and recommend them for acceptance without considering the merit of the work. In this paper, we study the efficacy of such bid manipulation attacks and find that, indeed, they can jeopardize the integrity of the review process. We develop a novel approach for paper bidding and assignment that is much more robust against such attacks. We show empirically that our approach provides robustness even when dishonest reviewers collude, have full knowledge of the assignment system's internal workings, and have access to the system's inputs. In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
With the prevalence of Internet of Things (IoT) applications, IoT devices interact closely with our surrounding environments, bringing us unparalleled smartness and convenience. However, the development of secure IoT solutions lags far behind, exposing us to unauthorized accesses that may bring malicious attacks and unprecedented danger to our daily lives. The overprivilege attack, a widely reported phenomenon in IoT in which unauthorized or excessive resources are accessed, is notoriously hard to prevent, trace, and mitigate. To tackle this challenge, we propose Tokoin-Based Access Control (TBAC), an accountable access control model enabled by blockchain and Trusted Execution Environment (TEE) technologies, to offer fine granularity, strong auditability, and access procedure control for IoT. TBAC materializes virtual access power into a definite-amount, secure cryptographic coin termed a tokoin (token + coin), and manages it using atomic and accountable state-transition functions in a blockchain. We also realize access procedure control by attaching to every tokoin a fine-grained access policy defining who is allowed to do what, when, where, and how. The tokoin is peer-to-peer transferable, and can be modified only by the resource owner when necessary. We fully implement TBAC with well-studied cryptographic primitives and blockchain platforms and present a readily available app for regular users. We also present a case study demonstrating how TBAC enables autonomous in-home cargo delivery while guaranteeing access policy compliance and the home owner's physical security by regulating the physical behaviors of the deliveryman.
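The who/what/when/where/how policy lends itself to a small data-structure sketch; the field names and checks below mirror the abstract and are hypothetical, not the paper's on-chain schema.

from dataclasses import dataclass, field

@dataclass
class Tokoin:
    owner: str                     # resource owner; the only party who may modify the policy
    holder: str                    # who: current holder permitted to access
    resource: str                  # what: the resource being accessed
    valid_from: int = 0            # when: access window start (UNIX time)
    valid_until: int = 2**31 - 1   # when: access window end
    location: str = ""             # where: required physical location, if any
    procedure: list = field(default_factory=list)  # how: mandated access steps

    def permits(self, who, what, when, where):
        return (who == self.holder and what == self.resource
                and self.valid_from <= when <= self.valid_until
                and (not self.location or where == self.location))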
103 - Chuan Guo, Xinxin Zuo, Sen Wang (2020)
Action recognition is a relatively established task: given an input sequence of human motion, the goal is to predict its action category. This paper, on the other hand, considers a relatively new problem, which could be thought of as an inverse of action recognition: given a prescribed action type, we aim to generate plausible human motion sequences in 3D. Importantly, the set of generated motions is expected to maintain its diversity in order to explore the entire action-conditioned motion space; meanwhile, each sampled sequence should faithfully resemble natural human body articulation dynamics. Motivated by these objectives, we follow the physics of human kinematics by adopting Lie algebra theory to represent natural human motions; we also propose a temporal Variational Auto-Encoder (VAE) that encourages diverse sampling of the motion space. A new 3D human motion dataset, HumanAct12, is also constructed. Empirical experiments over three distinct human motion datasets (including ours) demonstrate the effectiveness of our approach.
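For concreteness, the per-step objective of such a temporal VAE typically pairs a pose reconstruction term with a KL regularizer toward the prior; this PyTorch sketch is a generic instance under that assumption (the paper operates on Lie algebra pose parameters, and its exact weighting is not stated here).

import torch

def vae_step_loss(recon, target, mu, logvar, kl_weight=1e-3):
    # recon/target: predicted and ground-truth pose parameters for one frame.
    recon_loss = torch.mean((recon - target) ** 2)
    # KL divergence between the diagonal Gaussian posterior and N(0, I).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl_weight * kl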