The emergence of various intelligent mobile applications demands the deployment of powerful deep learning models on resource-constrained mobile devices. The device-edge co-inference framework provides a promising solution by splitting a neural network between a mobile device and an edge computing server. To balance the on-device computation and the communication overhead, the splitting point needs to be carefully chosen, and the intermediate feature needs to be compressed before transmission. Existing studies decouple the design of model splitting, feature compression, and communication, which may lead to excessive resource consumption on the mobile device. In this paper, we introduce an end-to-end architecture, named BottleNet++, that consists of an encoder, a non-trainable channel layer, and a decoder for more efficient feature compression and transmission. The encoder and decoder essentially implement joint source-channel coding via convolutional neural networks (CNNs), while explicitly accounting for channel noise. By exploiting the strong sparsity and the fault-tolerant property of the intermediate feature in a deep neural network (DNN), BottleNet++ achieves a much higher compression ratio than existing methods. Furthermore, by providing the channel condition to the encoder as an input, our method generalizes well across different channel conditions. Compared with transmitting the intermediate data without feature compression, BottleNet++ achieves up to 64x bandwidth reduction over the additive white Gaussian noise (AWGN) channel and up to a 256x bit compression ratio over the binary erasure channel, with less than 2% reduction in accuracy. With a higher compression ratio, BottleNet++ enables splitting a DNN at earlier layers, which leads to up to 3x reduction in on-device computation compared with other compression methods.
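To make the design concrete, the following is a minimal PyTorch-style sketch of the encoder / non-trainable channel / decoder pipeline described above; the layer widths, compression factor, and the way the SNR is handled are illustrative assumptions rather than the exact BottleNet++ configuration.

```python
import torch
import torch.nn as nn

class BottleNetPP(nn.Module):
    """Sketch of an encoder / non-trainable channel / decoder pipeline.

    The encoder compresses an intermediate DNN feature map, the channel
    layer injects AWGN (no trainable parameters), and the decoder
    reconstructs the feature for the remaining on-server layers.
    Channel widths and the reduction factor are illustrative.
    """

    def __init__(self, in_channels=256, compressed_channels=16, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, compressed_channels, kernel_size=3,
                      stride=2, padding=1),
            nn.BatchNorm2d(compressed_channels),
            nn.Sigmoid(),                        # bound transmitted symbols to [0, 1]
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(compressed_channels, in_channels,
                               kernel_size=3, stride=2, padding=1,
                               output_padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(),
        )

    def channel(self, z):
        # Non-trainable AWGN channel: noise power set by the target SNR.
        signal_power = z.pow(2).mean()
        noise_power = signal_power / (10 ** (self.snr_db / 10))
        return z + torch.randn_like(z) * noise_power.sqrt()

    def forward(self, feature):
        z = self.encoder(feature)          # compress intermediate feature
        z_noisy = self.channel(z)          # transmit over noisy channel
        return self.decoder(z_noisy)       # reconstruct for server-side layers

# Example: compress a 256-channel intermediate feature map.
x = torch.randn(1, 256, 14, 14)
out = BottleNetPP()(x)
print(out.shape)  # torch.Size([1, 256, 14, 14])
```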
End-to-end performance estimation and measurement of deep neural network (DNN) systems becomes increasingly important as DNN systems, which consist of hardware and software components, grow in complexity. The methodology proposed in this paper aims at reducing the turn-around time for evaluating different design choices of the hardware and software components of DNN systems. This reduction is achieved by moving the performance estimation from the implementation phase to the concept phase, employing virtual hardware models instead of gathering measurement results from physical prototypes. Deep learning compilers introduce hardware-specific transformations and are therefore considered part of the design flow of virtual system models used to extract end-to-end performance estimations. To validate the run-time accuracy of the proposed methodology, a system processing the DilatedVGG DNN is realized both as a virtual system model and as a hardware implementation. The results show that up to 92% accuracy can be reached in predicting the processing time of the DNN inference.
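As a rough illustration of concept-phase estimation, the sketch below accumulates per-layer latency estimates into an end-to-end prediction; the layer statistics, roofline numbers, and names are hypothetical, whereas the actual methodology derives such figures from the virtual system model after the compiler's hardware-specific transformations.

```python
# Toy illustration of concept-phase performance estimation: per-layer
# latency estimates are accumulated into an end-to-end inference-time
# prediction. All numbers and names below are hypothetical.

# (layer name, estimated MACs, estimated bytes moved) -- illustrative only
LAYERS = [
    ("conv1", 86_704_128, 1_204_224),
    ("conv2_dilated", 924_844_032, 3_211_264),
    ("fc", 4_096_000, 16_384_000),
]

PEAK_MACS_PER_S = 1.0e11   # assumed accelerator compute roofline
PEAK_BYTES_PER_S = 8.0e9   # assumed memory bandwidth

def estimate_layer_time(macs, bytes_moved):
    # Simple roofline-style estimate: bounded by compute or memory traffic.
    return max(macs / PEAK_MACS_PER_S, bytes_moved / PEAK_BYTES_PER_S)

total = sum(estimate_layer_time(m, b) for _, m, b in LAYERS)
print(f"Estimated end-to-end inference time: {total * 1e3:.2f} ms")
```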
Respiratory rate (RR) is a clinical sign representing ventilation. An abnormal change in RR is often the first sign of health deterioration as the body attempts to maintain oxygen delivery to its tissues. There has been growing interest in remote monitoring of RR in everyday settings, which has made wearable photoplethysmography (PPG) monitoring devices an attractive choice. PPG signals are useful sources for RR extraction due to the respiration-induced modulations present in them. Existing PPG-based RR estimation methods mainly rely on hand-crafted rules and manual parameter tuning. An end-to-end deep learning approach was recently proposed; however, despite its automatic nature, its performance is not ideal on real-world data. In this paper, we present an end-to-end and accurate pipeline for RR estimation that uses Cycle Generative Adversarial Networks (CycleGAN) to reconstruct respiratory signals from raw PPG signals. Our results demonstrate up to 2$\times$ higher RR estimation accuracy (mean absolute error of 1.9$\pm$0.3 using five-fold cross-validation) compared with the state-of-the-art on an identical publicly available dataset. Our results suggest that CycleGAN can be a valuable method for RR estimation from raw PPG signals.
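For the downstream step of such a pipeline, the following sketch shows one common way to turn a reconstructed respiratory waveform into an RR value via its dominant spectral peak; the breathing band and sampling rate are assumptions, and the CycleGAN reconstruction itself is not shown.

```python
import numpy as np

def respiratory_rate_from_signal(resp, fs, rr_band=(0.1, 0.6)):
    """Estimate respiratory rate (breaths/min) from a respiratory waveform.

    In the pipeline above, `resp` would be the respiratory signal
    reconstructed from raw PPG by the CycleGAN generator; here it is any
    1-D array sampled at `fs` Hz. The dominant spectral peak inside a
    plausible breathing band (0.1-0.6 Hz, i.e. 6-36 breaths/min) is taken
    as the respiratory frequency. The band limits are assumptions.
    """
    resp = resp - np.mean(resp)
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(resp)) ** 2
    band = (freqs >= rr_band[0]) & (freqs <= rr_band[1])
    dominant = freqs[band][np.argmax(power[band])]
    return dominant * 60.0

# Example with a synthetic 15 breaths/min signal sampled at 30 Hz.
fs, duration = 30, 60
t = np.arange(0, duration, 1.0 / fs)
synthetic_resp = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
print(respiratory_rate_from_signal(synthetic_resp, fs))  # ~15.0
```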
We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation, but instead exploits the natural order in data. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domains. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
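A minimal sketch of the greedy, gradient-isolated training loop is given below, assuming a PyTorch implementation with in-batch negatives and a bilinear scoring function; the original work contrasts temporally or spatially nearby patches, so the module architecture here is purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GreedyModule(nn.Module):
    """One gradient-isolated module trained with an InfoNCE-style loss."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.score = nn.Linear(out_dim, out_dim, bias=False)  # bilinear W

    def forward(self, x):
        return self.net(x)

    def info_nce(self, z_anchor, z_positive):
        # Scores of all anchor/positive pairings; diagonal entries are the
        # true pairs, off-diagonal entries serve as in-batch negatives.
        logits = self.score(z_anchor) @ z_positive.t()
        targets = torch.arange(z_anchor.size(0))
        return F.cross_entropy(logits, targets)

modules = [GreedyModule(128, 128) for _ in range(3)]
opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

x_t, x_tp1 = torch.randn(32, 128), torch.randn(32, 128)  # "ordered" input pair
for m, opt in zip(modules, opts):
    z_t, z_tp1 = m(x_t), m(x_tp1)
    loss = m.info_nce(z_t, z_tp1)          # each module has its own objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Gradient isolation: the next module only sees detached outputs.
    x_t, x_tp1 = z_t.detach(), z_tp1.detach()
```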
PyODDS is an end-to-end Python system for outlier detection with database support. PyODDS provides outlier detection algorithms that meet the demands of users in different fields, with or without a data science or machine learning background. PyODDS makes it possible to execute machine learning algorithms in-database, without moving data out of the database server or over the network. It also provides access to a wide range of outlier detection algorithms, including statistical analysis and more recent deep learning based approaches. PyODDS is released under the MIT open-source license and is currently available at https://github.com/datamllab/pyodds, with official documentation at https://pyodds.github.io/.
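As a generic illustration of the query-then-detect workflow such a system supports (not the PyODDS API itself, which is documented at the links above), the sketch below pulls a table from a database, fits an off-the-shelf detector, and writes the flags back.

```python
# Generic database-backed outlier-detection workflow using sqlite3 and
# scikit-learn; this is NOT the PyODDS API, only an illustration of the
# kind of pipeline PyODDS automates.
import sqlite3
import pandas as pd
from sklearn.ensemble import IsolationForest

conn = sqlite3.connect(":memory:")
pd.DataFrame({"value": [1.0, 1.1, 0.9, 1.05, 12.0]}).to_sql(
    "readings", conn, index=False)

df = pd.read_sql("SELECT value FROM readings", conn)
df["is_outlier"] = IsolationForest(random_state=0).fit_predict(df[["value"]])
df.to_sql("readings_scored", conn, index=False, if_exists="replace")
print(df)  # -1 marks detected outliers, 1 marks inliers
```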
In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, a critical problem for the success of crowdsourcing platforms. Previous works conduct personalized recommendation of tasks to workers via supervised learning methods. However, most of them consider the benefit of either workers or requesters in isolation. In addition, they cannot handle a dynamic environment and may produce sub-optimal results. To address these issues, we utilize a Deep Q-Network (DQN), an RL method that uses a neural network to estimate the expected long-term return of recommending a task. DQN inherently considers immediate and future rewards simultaneously and can be updated in real time to deal with evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn the value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework.
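A minimal sketch of a DQN that scores candidate tasks is shown below, assuming a PyTorch implementation; the state and task feature dimensions, the transition batch, and the reward are illustrative, and the paper's two-DQN design with its dedicated state representations is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Scores a (worker state, candidate task) pair with an estimated Q value."""

    def __init__(self, state_dim=32, action_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),     # Q(s, a): expected long-term return
        )

    def forward(self, state, task_feat):
        return self.net(torch.cat([state, task_feat], dim=-1)).squeeze(-1)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# One temporal-difference update on a synthetic transition batch:
# (state, recommended task, reward, next state, candidate next tasks).
s, a = torch.randn(8, 32), torch.randn(8, 16)
r, s_next = torch.rand(8), torch.randn(8, 32)
next_tasks = torch.randn(8, 5, 16)                 # 5 candidate tasks per worker

with torch.no_grad():
    q_next = target_net(s_next.unsqueeze(1).expand(-1, 5, -1), next_tasks)
    target = r + gamma * q_next.max(dim=1).values  # best next recommendation

loss = F.mse_loss(q_net(s, a), target)
opt.zero_grad()
loss.backward()
opt.step()
```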