
On-Demand Video Dispatch Networks: A Scalable End-to-End Learning Approach

Published by: Sihan Peng
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We design a dispatch system to improve the peak service quality of video on demand (VOD). Our system predicts the hot videos during the peak hours of the next day based on historical requests, and dispatches them to the content delivery networks (CDNs) during the preceding off-peak period. To scale to billions of videos, we build the system from two neural networks, one for video clustering and the other for developing the dispatch policy. The clustering network employs autoencoder layers and reduces the number of videos to a fixed value. The policy network employs fully connected layers and ranks the clustered videos with dispatch probabilities. The two networks are coupled through weight-sharing temporal layers, which analyze the video request sequences with convolutional and recurrent modules, so the clustering and dispatch tasks are trained end-to-end. Real-world results show that our approach achieves an average prediction accuracy of 17%, compared with 3% for the current baseline method, for the same number of dispatches.
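The abstract only describes the coupling of the two networks at a high level. The following is a minimal PyTorch sketch of that structure, assuming hourly request counts as input; the module names, layer sizes, cluster count, and the reconstruction-only loss are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the two coupled networks around a shared temporal encoder.
import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    """Weight-sharing temporal layers: Conv1d over the request sequence, then a GRU."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)

    def forward(self, requests):              # requests: (num_videos, seq_len)
        x = self.conv(requests.unsqueeze(1))  # (num_videos, 32, seq_len)
        _, h = self.gru(x.transpose(1, 2))    # h: (1, num_videos, hidden)
        return h.squeeze(0)                   # (num_videos, hidden)

class ClusterNet(nn.Module):
    """Autoencoder-style layers mapping per-video embeddings to a fixed number of clusters."""
    def __init__(self, hidden=64, num_clusters=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, num_clusters))
        self.decoder = nn.Sequential(nn.Linear(num_clusters, 128), nn.ReLU(), nn.Linear(128, hidden))

    def forward(self, emb):
        assign = torch.softmax(self.encoder(emb), dim=-1)  # soft cluster assignments
        recon = self.decoder(assign)                        # reconstruction for the autoencoder loss
        cluster_emb = assign.t() @ emb                      # (num_clusters, hidden) aggregated features
        return assign, recon, cluster_emb

class PolicyNet(nn.Module):
    """Fully connected layers that rank clusters with dispatch probabilities."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cluster_emb):
        return torch.sigmoid(self.mlp(cluster_emb)).squeeze(-1)  # dispatch probability per cluster

# End-to-end forward pass on toy data: 1000 videos, one week of hourly request counts.
encoder, cluster, policy = TemporalEncoder(), ClusterNet(), PolicyNet()
requests = torch.rand(1000, 168)
emb = encoder(requests)
assign, recon, cluster_emb = cluster(emb)
dispatch_prob = policy(cluster_emb)
loss = nn.functional.mse_loss(recon, emb)   # autoencoder term; a dispatch loss would be added here
loss.backward()                             # gradients reach both networks and the shared encoder
```

The point the sketch illustrates is that a single backward pass updates the policy layers, the autoencoder layers, and the shared temporal encoder together, which is what makes the clustering and dispatch tasks trainable end-to-end.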


Read also

155 - Yuanyuan Shi, Bolun Xu 2021
This paper proposes a novel end-to-end deep learning framework that simultaneously identifies demand baselines and the incentive-based agent demand response model from net demand measurements and incentive signals. The learning framework is modularized into two modules: 1) the decision-making process of a demand response participant is represented as a differentiable optimization layer, which takes the incentive signal as input and predicts the user's response; 2) the baseline demand forecast is represented as a standard neural network model, which takes relevant features and predicts the user's baseline demand. These two intermediate predictions are integrated to form the net demand forecast. We then propose a gradient-descent approach that backpropagates the net demand forecast errors to jointly update the weights of the agent model and the weights of the baseline demand forecast. We demonstrate the effectiveness of our approach through computational experiments with synthetic demand response traces and a large-scale real-world demand response dataset. Our results show that the approach accurately identifies the demand response model, even without any prior knowledge about the baseline demand.
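As a rough illustration of the joint identification idea, the sketch below replaces the paper's general differentiable optimization layer with a closed-form quadratic-cost response, so the whole model stays a few lines of PyTorch; the feature dimension, the synthetic traces, and the true price-sensitivity value are invented for the example.

```python
# Joint training of a baseline-demand network and a parametric response model
# from net-demand measurements only (simplified, closed-form agent model).
import torch
import torch.nn as nn

class BaselineNet(nn.Module):
    """Standard NN that predicts baseline demand from features (e.g. weather, hour)."""
    def __init__(self, n_features=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.mlp(x).squeeze(-1)

class ResponseModel(nn.Module):
    """Agent model: demand reduction r*(p) = argmin_r 0.5*alpha*r^2 - p*r = p / alpha."""
    def __init__(self):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(()))  # learn alpha > 0 in log space
    def forward(self, incentive):
        return incentive / torch.exp(self.log_alpha)

baseline_net, response = BaselineNet(), ResponseModel()
opt = torch.optim.Adam(list(baseline_net.parameters()) + list(response.parameters()), lr=1e-2)

# Synthetic traces: features, incentive signals, and observed net demand.
features = torch.randn(512, 8)
incentive = torch.rand(512)
true_baseline = 5.0 + features[:, 0]
net_demand_obs = true_baseline - incentive / 2.0   # ground-truth alpha = 2

for step in range(200):
    opt.zero_grad()
    net_demand_pred = baseline_net(features) - response(incentive)
    loss = nn.functional.mse_loss(net_demand_pred, net_demand_obs)
    loss.backward()                                 # errors backpropagate into both modules jointly
    opt.step()
```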
This paper presents end-to-end learning from spectrum data - an umbrella term for new sophisticated wireless signal identification approaches in spectrum monitoring applications based on deep neural networks. End-to-end learning makes it possible to (i) automatically learn features directly from simple wireless signal representations, without requiring the design of hand-crafted expert features such as higher-order cyclic moments, and (ii) train wireless signal classifiers in one end-to-end step, which eliminates the need for complex multi-stage machine learning processing pipelines. The purpose of this article is to present the conceptual framework of end-to-end learning for spectrum monitoring and to systematically introduce a generic methodology for easily designing and implementing wireless signal classifiers. Furthermore, we investigate how the choice of wireless data representation affects various spectrum monitoring tasks. In particular, two case studies are elaborated: (i) modulation recognition and (ii) wireless technology interference detection. For each case study, three convolutional neural networks are evaluated on the following wireless signal representations: temporal IQ data, the amplitude/phase representation, and the frequency-domain representation. Our analysis shows that the wireless data representation impacts accuracy depending on the specifics and similarities of the wireless signals that need to be differentiated, with different data representations resulting in accuracy variations of up to 29%. Experimental results show that using the amplitude/phase representation for recognizing modulation formats can lead to performance improvements of up to 2% and 12% for medium to high SNR compared to IQ and frequency-domain data, respectively. For the task of detecting interference, the frequency-domain representation outperformed the amplitude/phase and IQ data representations by up to 20%.
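The sketch below illustrates, under assumed shapes and an arbitrary class count, how the three signal representations mentioned above can be derived from the same raw I/Q capture and fed to one small CNN; it is not the architecture evaluated in the case studies.

```python
# Three views of the same raw I/Q capture, each usable as CNN input.
import torch
import torch.nn as nn

def iq_to_representation(iq, mode):
    """iq: complex tensor of shape (batch, n_samples)."""
    if mode == "iq":                 # temporal I/Q: stack real and imaginary parts
        return torch.stack([iq.real, iq.imag], dim=1)
    if mode == "amp_phase":          # amplitude/phase representation
        return torch.stack([iq.abs(), iq.angle()], dim=1)
    if mode == "freq":               # frequency-domain representation (FFT magnitude/phase)
        spec = torch.fft.fft(iq, dim=-1)
        return torch.stack([spec.abs(), spec.angle()], dim=1)
    raise ValueError(mode)

class SignalCNN(nn.Module):
    def __init__(self, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

iq = torch.randn(16, 1024, dtype=torch.complex64)        # a batch of raw I/Q captures
model = SignalCNN()
for mode in ("iq", "amp_phase", "freq"):
    logits = model(iq_to_representation(iq, mode))        # same network, different input view
    print(mode, logits.shape)                             # torch.Size([16, 11])
```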
368 - Yankun Xu, Jie Yang, Shiqi Zhao 2021
An accurate seizure prediction system enables early warnings before seizure onset in epileptic patients, which is extremely important for drug-refractory patients. Conventional seizure prediction works usually rely on features extracted from electroencephalography (EEG) recordings and classification algorithms such as regression or support vector machines (SVMs) to locate the short period before seizure onset. However, such methods cannot achieve high-accuracy prediction due to information loss in the hand-crafted features and the limited classification ability of regression and SVM algorithms. We propose an end-to-end deep learning solution using a convolutional neural network (CNN) in this paper. One- and two-dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on the Kaggle intracranial and CHB-MIT scalp EEG datasets. Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, and 0.981 on the first dataset and 98.8%, 0.074/h, and 0.988 on the second, respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.
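One way to read the "1D kernels early, 2D kernels late" design is sketched below in PyTorch: the early layers convolve along time within each EEG channel, and the later layers mix channels as well. Channel count, window length, and layer sizes are assumptions for illustration, not the evaluated configuration.

```python
# Early time-only (1D) kernels followed by late cross-channel (2D) kernels on EEG segments.
import torch
import torch.nn as nn

class SeizurePredictionCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.early = nn.Sequential(                        # 1D (time-only) convolutions
            nn.Conv2d(1, 16, kernel_size=(1, 7), padding=(0, 3)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
            nn.Conv2d(16, 32, kernel_size=(1, 5), padding=(0, 2)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
        )
        self.late = nn.Sequential(                         # 2D convolutions across channels and time
            nn.Conv2d(32, 64, kernel_size=(3, 3), padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2)),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)               # e.g. preictal vs. interictal

    def forward(self, eeg):                                # eeg: (batch, channels, time)
        x = self.early(eeg.unsqueeze(1))                   # treat EEG as a 1 x channels x time image
        x = self.late(x).flatten(1)
        return self.head(x)

model = SeizurePredictionCNN()
segment = torch.randn(8, 16, 1280)                         # 8 segments, 16 channels, 5 s at 256 Hz
print(model(segment).shape)                                # torch.Size([8, 2])
```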
In this paper we present a system for monitoring and controlling dynamic network circuits inside the USLHCNet network. This distributed service system provides, in near real time, complete topological information for all the circuits, resource allocation and usage, and accounting; it automatically detects failures in the links and network equipment, generates alarms, and can take automatic actions. The system is built on the MonALISA framework, which provides a robust monitoring and controlling service-oriented architecture with no single point of failure.
Video-to-speech is the process of reconstructing the audio speech from a video of a spoken utterance. Previous approaches to this task have relied on a two-step process where an intermediate representation is inferred from the video and is then decoded into waveform audio using a vocoder or a waveform reconstruction algorithm. In this work, we propose a new end-to-end video-to-speech model based on Generative Adversarial Networks (GANs) which translates spoken video to waveform end-to-end without using any intermediate representation or separate waveform synthesis algorithm. Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech, which is then fed to a waveform critic and a power critic. The use of an adversarial loss based on these two critics enables the direct synthesis of the raw audio waveform and ensures its realism. In addition, the use of our three comparative losses helps establish direct correspondence between the generated audio and the input video. We show that this model is able to reconstruct speech with remarkable realism for constrained datasets such as GRID, and that it is the first end-to-end model to produce intelligible speech for LRW (Lip Reading in the Wild), featuring hundreds of speakers recorded entirely 'in the wild'. We evaluate the generated samples in two different scenarios -- seen and unseen speakers -- using four objective metrics which measure the quality and intelligibility of artificial speech. We demonstrate that the proposed approach outperforms all previous works in most metrics on GRID and LRW.
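A compact sketch of this adversarial setup is given below: an encoder-decoder generator maps raw video to a waveform, which is scored by a waveform critic and a power (spectrogram) critic. The layer sizes, frame rate, and sampling rate are assumptions, and the three comparative losses are omitted for brevity.

```python
# Generator (video -> waveform) plus waveform and power critics for the adversarial loss.
import torch
import torch.nn as nn

class VideoToWaveform(nn.Module):
    def __init__(self, samples_per_frame=640):              # e.g. 16 kHz audio at 25 fps
        super().__init__()
        self.encoder = nn.Sequential(                        # per-frame visual features
            nn.Conv3d(3, 32, kernel_size=(1, 5, 5), stride=(1, 2, 2), padding=(0, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),
        )
        self.decoder = nn.GRU(32, 256, batch_first=True)
        self.to_audio = nn.Linear(256, samples_per_frame)

    def forward(self, video):                                # video: (batch, 3, frames, H, W)
        feats = self.encoder(video).squeeze(-1).squeeze(-1).transpose(1, 2)  # (batch, frames, 32)
        h, _ = self.decoder(feats)
        return torch.tanh(self.to_audio(h)).flatten(1)       # waveform: (batch, frames * samples_per_frame)

class WaveformCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(1, 16, 15, stride=4), nn.LeakyReLU(0.2),
                                 nn.Conv1d(16, 32, 15, stride=4), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, wav):
        return self.net(wav.unsqueeze(1))

class PowerCritic(nn.Module):
    def __init__(self, n_fft=512):
        super().__init__()
        self.n_fft = n_fft
        self.register_buffer("window", torch.hann_window(n_fft))
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, wav):
        spec = torch.stft(wav, self.n_fft, window=self.window, return_complex=True).abs()
        return self.net(spec.unsqueeze(1))                   # score the magnitude spectrogram

gen, d_wav, d_pow = VideoToWaveform(), WaveformCritic(), PowerCritic()
video = torch.randn(2, 3, 25, 96, 96)                        # two 1-second clips at 25 fps
fake_audio = gen(video)                                      # (2, 16000)
adv_loss = -(d_wav(fake_audio).mean() + d_pow(fake_audio).mean())  # generator's adversarial loss
```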
