
DeepCC: Bridging the Gap Between Congestion Control and Applications via Multi-Objective Optimization

 Added by Lei Zhang
Publication date: 2021
Language: English





Increasingly complicated and diverse applications have distinct network performance demands, e.g., some desire high throughput while others require low latency. Traditional congestion control (CC) algorithms have no perception of these demands. Consequently, the literature has explored objective-specific algorithms, based on either offline training or online learning, that adapt to particular application demands. However, once generated, such algorithms are tailored to a specific performance objective function. Newly emerged performance demands in a changeable network environment require either expensive retraining (in the case of offline training) or manually redesigning a new objective function (in the case of online learning). To address this problem, we propose a novel architecture, DeepCC. It generates a CC agent that is generically applicable to a wide range of application requirements and network conditions. The key idea of DeepCC is to leverage both offline deep reinforcement learning and online fine-tuning. In the offline phase, instead of training towards a specific objective function, DeepCC trains its deep neural network model using multi-objective optimization. With the trained model, DeepCC offers near-Pareto-optimal policies with respect to different user-specified trade-offs between throughput, delay, and loss rate without any redesigning or retraining. In addition, a quick online fine-tuning phase further helps DeepCC meet application-specific demands under dynamic network conditions. Simulation and real-world experiments show that DeepCC outperforms state-of-the-art schemes in a wide range of settings. DeepCC achieves a target completion ratio of application requirements up to 67.4% higher than that of other schemes, even in an untrained environment.
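
To make the "user-specified trade-off between throughput, delay, and loss rate" concrete, the sketch below scalarizes the three measurements with a preference vector, as one commonly does in multi-objective reinforcement learning. The function name, weights, units, and signs are illustrative assumptions, not the reward actually used by DeepCC.

```python
def scalarized_reward(throughput, delay, loss_rate, preference):
    """Combine per-interval CC measurements into one reward using a
    user-specified preference vector (w_throughput, w_delay, w_loss).
    Signs encode the usual goals: reward throughput, penalize delay and
    loss. Units and normalization are illustrative placeholders.
    """
    w_tput, w_delay, w_loss = preference
    return (w_tput * throughput      # e.g. Mbps, higher is better
            - w_delay * delay        # e.g. ms, lower is better
            - w_loss * loss_rate)    # fraction in [0, 1], lower is better

# A delay-sensitive application weights delay heavily ...
r_voip = scalarized_reward(throughput=2.0, delay=40.0, loss_rate=0.01,
                           preference=(0.1, 0.8, 0.1))
# ... while a bulk-transfer application weights throughput heavily.
r_bulk = scalarized_reward(throughput=50.0, delay=120.0, loss_rate=0.02,
                           preference=(0.8, 0.1, 0.1))
print(r_voip, r_bulk)
```

Under DeepCC's design, the trained model covers a range of such trade-offs at once, so changing the preference does not require retraining.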



Related Research

Decades of research on Internet congestion control (CC) have produced a plethora of algorithms that optimize for different performance objectives. Applications face the challenge of choosing the most suitable algorithm based on their needs, and it takes tremendous effort and expertise to customize CC algorithms when new demands emerge. In this paper, we explore a basic question: can we design a single CC algorithm to satisfy different objectives? We propose MOCC, the first multi-objective congestion control algorithm that attempts to address this challenge. The core of MOCC is a novel multi-objective reinforcement learning framework for CC that can automatically learn the correlations between different application requirements and the corresponding optimal control policies. Under this framework, MOCC further applies transfer learning to carry knowledge from past experience to new applications, quickly adapting itself to a new objective even if it is unforeseen. We provide both user-space and kernel-space implementations of MOCC. Real-world experiments and extensive simulations show that MOCC supports multiple objectives well, competing with or outperforming the best existing CC algorithms on individual objectives, and quickly adapts to new applications (e.g., 14.2x faster than prior work) without compromising old ones.
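
One minimal way a single policy can serve different objectives is to condition its observation on the application's objective weights, so switching objectives only changes that vector rather than the model. The feature set and encoding below are assumptions for illustration, not MOCC's actual input design.

```python
import numpy as np

def policy_input(net_state, preference):
    """Build the observation for one objective-conditioned policy.

    net_state: measured network statistics (here: RTT in seconds,
    throughput in Mbps, loss rate); preference: the application's
    objective weights. Concatenating the two lets a single trained
    policy act differently for different objectives without retraining.
    """
    return np.concatenate([np.asarray(net_state, dtype=float),
                           np.asarray(preference, dtype=float)])

# Same network conditions, two different application objectives.
obs_low_latency = policy_input(net_state=[0.040, 2.0, 0.01],
                               preference=[0.1, 0.8, 0.1])
obs_high_tput = policy_input(net_state=[0.040, 2.0, 0.01],
                             preference=[0.8, 0.1, 0.1])
print(obs_low_latency, obs_high_tput)
```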
We demonstrate that the choice of optimizer, neural network architecture, and regularizer significantly affect the adversarial robustness of linear neural networks, providing guarantees without the need for adversarial training. To this end, we revisit a known result linking maximally robust classifiers and minimum norm solutions, and combine it with recent results on the implicit bias of optimizers. First, we show that, under certain conditions, it is possible to achieve both perfect standard accuracy and a certain degree of robustness, simply by training an overparametrized model using the implicit bias of the optimization. In that regime, there is a direct relationship between the type of the optimizer and the attack to which the model is robust. To the best of our knowledge, this work is the first to study the impact of optimization methods such as sign gradient descent and proximal methods on adversarial robustness. Second, we characterize the robustness of linear convolutional models, showing that they resist attacks subject to a constraint on the Fourier-$\ell_\infty$ norm. To illustrate these findings we design a novel Fourier-$\ell_\infty$ attack that finds adversarial examples with controllable frequencies. We evaluate Fourier-$\ell_\infty$ robustness of adversarially-trained deep CIFAR-10 models from the standard RobustBench benchmark and visualize adversarial perturbations.
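
A Fourier-$\ell_\infty$ budget bounds the magnitude of every DFT coefficient of the perturbation. The sketch below only illustrates that constraint by synthesizing a random real-valued perturbation within the budget at chosen frequencies; it is not the optimization-based attack proposed in the paper, and the image size, budget, and frequency set are made up.

```python
import numpy as np

def fourier_linf_perturbation(shape, eps, freqs, seed=0):
    """Construct a real-valued perturbation whose 2-D DFT coefficients
    all have magnitude <= eps, concentrated on the chosen frequencies.
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    spectrum = np.zeros((h, w), dtype=complex)
    for (u, v) in freqs:
        phase = rng.uniform(0.0, 2.0 * np.pi)
        spectrum[u % h, v % w] = eps * np.exp(1j * phase)
        # Mirror coefficient keeps the inverse transform real-valued.
        spectrum[(-u) % h, (-v) % w] = np.conj(spectrum[u % h, v % w])
    # ifft2 of a conjugate-symmetric spectrum is real up to round-off.
    return np.real(np.fft.ifft2(spectrum))

delta = fourier_linf_perturbation((32, 32), eps=8.0,
                                  freqs=[(1, 0), (0, 1), (1, 1)])
# ~eps: every DFT coefficient stays within the budget (up to round-off).
print(np.abs(np.fft.fft2(delta)).max())
```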
When interacting with objects through cameras or pictures, users often have a specific intent. For example, they may want to perform a visual search. However, most object detection models ignore the user intent, relying on image pixels as their only input. This often leads to incorrect results, such as the lack of a high-confidence detection on the object of interest, or a detection with the wrong class label. In this paper we investigate techniques to modulate standard object detectors to explicitly account for the user intent, expressed as an embedding of a simple query. Compared to standard object detectors, query-modulated detectors show superior performance at detecting objects for a given label of interest. Thanks to large-scale training data synthesized from standard object detection annotations, query-modulated detectors can also outperform specialized referring expression recognition systems. Furthermore, they can be simultaneously trained to solve for both query-modulated detection and standard object detection.
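
The abstract does not specify how the detector is modulated by the query embedding; one plausible mechanism is FiLM-style per-channel scaling and shifting of the detector's feature map. The projection matrices, shapes, and function name below are hypothetical.

```python
import numpy as np

def query_modulate(features, query_embedding, w_gamma, w_beta):
    """FiLM-style modulation: project the query embedding to a per-channel
    scale (gamma) and shift (beta) and apply them to a (C, H, W) feature
    map, biasing the same backbone toward the queried label.
    """
    gamma = query_embedding @ w_gamma        # shape (C,)
    beta = query_embedding @ w_beta          # shape (C,)
    return features * gamma[:, None, None] + beta[:, None, None]

rng = np.random.default_rng(0)
C, H, W, D = 8, 4, 4, 16                     # channels, height, width, query dim
modulated = query_modulate(rng.standard_normal((C, H, W)),
                           rng.standard_normal(D),
                           rng.standard_normal((D, C)),
                           rng.standard_normal((D, C)))
print(modulated.shape)                       # (8, 4, 4)
```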
Optimization of materials performance for specific applications often requires balancing multiple aspects of materials functionality. Even in cases where a generative physical model of material behavior is known and reliable, this often requires a search over a multidimensional parameter space to identify the low-dimensional manifold corresponding to the required Pareto front. Here we introduce a multi-objective Bayesian Optimization (MOBO) workflow for ferroelectric/antiferroelectric performance optimization for memory and energy storage applications, based on the numerical solution of the Ginzburg-Landau equation with electrochemical or semiconducting boundary conditions. MOBO is a low-computational-cost optimization tool for expensive multi-objective functions: posterior surrogate Gaussian process models are updated from prior evaluations, and future evaluations are selected by maximizing an acquisition function. Using the parameters for a prototype bulk antiferroelectric (PbZrO3), we first develop a physics-driven decision tree of target functions from the loop structures. We further develop a physics-driven MOBO architecture to explore the multidimensional parameter space and build Pareto frontiers by jointly maximizing two target functions: energy storage and loss. This approach allows for rapid initial materials and device parameter selection for a given application and can be further expanded towards the active experiment setting. The associated notebooks provide a tutorial on MOBO and allow readers to reproduce the reported analyses and apply them to other systems (https://github.com/arpanbiswas52/MOBO_AFI_Supplements).
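
Once candidate parameter sets have been evaluated, building the Pareto frontier amounts to keeping the non-dominated points. A minimal sketch follows, treating both target values as quantities to maximize as stated in the abstract; the candidate numbers are made up and do not come from the Ginzburg-Landau model.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated candidates, where every column of
    `objectives` is a quantity to maximize. Candidate i is dominated if
    some other candidate is at least as good on every objective and
    strictly better on at least one.
    """
    objs = np.asarray(objectives, dtype=float)
    front = []
    for i in range(len(objs)):
        better_or_equal = np.all(objs >= objs[i], axis=1)
        strictly_better = np.any(objs > objs[i], axis=1)
        if not np.any(better_or_equal & strictly_better):
            front.append(i)
    return front

# Hypothetical evaluations at five parameter sets; columns are the two
# targets named in the abstract (energy storage, loss).
candidates = [[1.0, 0.9], [2.5, 0.5], [2.0, 0.4], [3.0, 0.1], [1.5, 0.8]]
print(pareto_front(candidates))   # indices of non-dominated parameter sets
```

In the MOBO workflow described above, this non-dominated set would be computed over surrogate-model predictions rather than exhaustive simulations, keeping the number of expensive evaluations low.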
The fact that there exists a gap between low-level features and the semantic meanings of images, called the semantic gap, has been known for decades. Resolving the semantic gap is a long-standing problem. In this work, the semantic gap problem is reviewed and a survey of recent efforts to bridge the gap is presented. Most importantly, we claim that the semantic gap is primarily bridged through supervised learning today. Experiences are drawn from two application domains to illustrate this point: 1) object detection and 2) metric learning for content-based image retrieval (CBIR). To begin with, this paper offers a historical retrospective on supervision, makes a gradual transition to the modern data-driven methodology, and introduces commonly used datasets. Then, it summarizes various supervision methods to bridge the semantic gap in the context of object detection and metric learning.