
Learning Fast Adaptation with Meta Strategy Optimization

Posted by: Wenhao Yu
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





The ability to walk in new scenarios is a key milestone on the path toward real-world applications of legged robots. In this work, we introduce Meta Strategy Optimization (MSO), a meta-learning algorithm for training policies with latent variable inputs that can quickly adapt to new scenarios with a handful of trials in the target environment. The key idea behind MSO is to expose the same adaptation process, Strategy Optimization (SO), to both the training and testing phases. This allows MSO to effectively learn locomotion skills as well as a latent space that is suitable for fast adaptation. We evaluate our method on a real quadruped robot and demonstrate successful adaptation in various scenarios, including sim-to-real transfer, walking with a weakened motor, and climbing up a slope. Furthermore, we quantitatively analyze the generalization capability of the trained policy in simulated environments. Both real and simulated experiments show that our method outperforms previous methods in adaptation to novel tasks.
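To make the adaptation procedure concrete, the sketch below shows one possible instantiation of Strategy Optimization as a small population-based search over the policy's latent input, assuming a gym-style environment (reset returns an observation; step returns obs, reward, done, info) and a pre-trained policy callable policy(obs, z). The search method (a cross-entropy-style update), the hyperparameters, and all names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def rollout_return(env, policy, z, horizon=500):
        """Run one episode with the policy conditioned on latent strategy z; return total reward."""
        obs, total = env.reset(), 0.0
        for _ in range(horizon):
            action = policy(obs, z)              # policy is conditioned on the latent strategy z
            obs, reward, done, _ = env.step(action)
            total += reward
            if done:
                break
        return total

    def strategy_optimization(env, policy, latent_dim, iters=5, pop=8, elite=2, sigma=0.5):
        """Search the latent space for the strategy that maximizes return in `env`.

        A simple cross-entropy-style search; the point of MSO is that this same
        procedure is exposed during both training and deployment, so the latent
        space becomes easy to search with only a handful of rollouts."""
        mu = np.zeros(latent_dim)
        for _ in range(iters):
            candidates = mu + sigma * np.random.randn(pop, latent_dim)
            returns = np.array([rollout_return(env, policy, z) for z in candidates])
            elites = candidates[np.argsort(returns)[-elite:]]
            mu, sigma = elites.mean(axis=0), elites.std(axis=0).mean() + 1e-3
        return mu  # best latent strategy found for this environment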




Read also

Animals have remarkable abilities to adapt locomotion to different terrains and tasks. However, robots trained by means of reinforcement learning are typically able to solve only a single task, and a transferred policy is usually inferior to one trained from scratch. In this work, we demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks. The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
Meta-learning algorithms can accelerate model-based reinforcement learning (MBRL) algorithms by finding an initial set of parameters for the dynamics model such that the model can be trained to match the actual dynamics of the system with only a few data points. However, in the real world a robot might encounter any situation, ranging from motor failures to finding itself on rocky terrain, where the dynamics of the robot can differ significantly from one situation to another. In this paper, we first show that when meta-training situations (the prior situations) have such diverse dynamics, using a single set of meta-trained parameters as a starting point still requires a large number of observations from the real system to learn a useful model of the dynamics. Second, we propose an algorithm called FAMLE that mitigates this limitation by meta-training several starting points (i.e., initial parameters) for training the model, allowing the robot to select the most suitable starting point and adapt the model to the current situation with only a few gradient steps. We compare FAMLE to MBRL, MBRL with a model meta-trained with MAML, and the model-free policy search algorithm PPO on various simulated and real robotic tasks, and show that FAMLE allows the robots to adapt to novel damages in significantly fewer time-steps than the baselines.
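As a rough illustration of the select-then-adapt idea described above, the PyTorch fragment below assumes a list of dynamics models already meta-trained from different starting points, picks the one whose one-step predictions best explain the robot's most recent transitions, and then takes a few gradient steps from it. The function names, the squared-error criterion, and the optimizer choice are assumptions for illustration, not the FAMLE code.

    import torch

    def select_initialization(models, recent_obs, recent_act, recent_next):
        """Pick the meta-trained initialization whose one-step predictions
        best explain the transitions observed in the current situation."""
        errors = []
        for model in models:
            with torch.no_grad():
                pred = model(torch.cat([recent_obs, recent_act], dim=-1))
                errors.append(((pred - recent_next) ** 2).mean().item())
        return models[int(torch.tensor(errors).argmin())]

    def adapt(model, obs, act, next_obs, steps=5, lr=1e-3):
        """A few gradient steps from the selected starting point."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            pred = model(torch.cat([obs, act], dim=-1))
            loss = ((pred - next_obs) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model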
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at https://github.com/kjunelee/MetaOptNet.
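MetaOptNet's base learner is a differentiable SVM solved through its dual; as a simpler stand-in that shares the key property (the base learner's solution is differentiable with respect to the embedding, so the feature extractor can be meta-trained end to end), the sketch below fits a closed-form ridge-regression head on the support set and scores the query set. Shapes, names, and the regularizer are illustrative assumptions.

    import torch

    def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
        """Closed-form linear base learner: W = (X^T X + lam I)^{-1} X^T Y.
        The solve is differentiable, so gradients flow back into the features."""
        d = support_feats.shape[1]
        A = support_feats.t() @ support_feats + lam * torch.eye(d)
        W = torch.linalg.solve(A, support_feats.t() @ support_onehot)
        return query_feats @ W  # query logits under the fitted linear classifier

    # Meta-training step (sketch): embed support/query images with an `encoder`,
    # fit the linear head on the support set, and backpropagate the query loss
    # through the closed-form solve into the encoder's parameters, e.g.
    #   logits = ridge_head(encoder(x_support), y_support_onehot, encoder(x_query))
    #   loss = torch.nn.functional.cross_entropy(logits, y_query)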
Power control in decentralized wireless networks poses a complex stochastic optimization problem when formulated as the maximization of the average sum rate for arbitrary interference graphs. Recent work has introduced data-driven design methods that leverage graph neural networks (GNNs) to efficiently parametrize the power control policy mapping channel state information (CSI) to the power vector. The specific GNN architecture, known as the random edge GNN (REGNN), defines a non-linear graph convolutional architecture whose spatial weights are tied to the channel coefficients, enabling direct adaptation to channel conditions. This paper studies the higher-level problem of enabling fast adaptation of the power control policy to time-varying topologies. To this end, we apply first-order meta-learning on data from multiple topologies with the aim of optimizing for few-shot adaptation to new network configurations.
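A minimal sketch of a first-order meta-update in the style of Reptile, which is one way to realize the first-order meta-learning over multiple topologies described above: adapt a copy of the shared parameters on one sampled topology for a few gradient steps, then move the meta-parameters toward the adapted ones. The sample_topology and inner_loss callables, the SGD inner loop, and the PyTorch framing are placeholders; in the paper the policy is a REGNN and the loss would be the negative average sum rate.

    import copy
    import torch

    def reptile_step(meta_model, sample_topology, inner_loss, inner_steps=5,
                     inner_lr=1e-2, meta_lr=0.1):
        """One first-order meta-update: adapt on a sampled topology, then
        interpolate the meta-parameters toward the adapted parameters."""
        task = sample_topology()                 # e.g. channel realizations for one graph
        model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = inner_loss(model, task)       # e.g. negative average sum rate
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            for p_meta, p in zip(meta_model.parameters(), model.parameters()):
                p_meta += meta_lr * (p - p_meta)  # move meta-parameters toward adapted ones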
This paper studies fast adaptive beamforming optimization for the signal-to-interference-plus-noise ratio balancing problem in a multiuser multiple-input single-output downlink system. Existing deep learning based approaches to predicting beamforming rely on the assumption that the training and testing channels follow the same distribution, which may not hold in practice. As a result, a trained model may suffer performance deterioration when the testing network environment changes. To deal with this task mismatch issue, we propose two offline adaptive algorithms based on deep transfer learning and meta-learning, which are able to achieve fast adaptation with limited new labelled data when the testing wireless environment changes. Furthermore, we propose an online algorithm to enhance the adaptation capability of the offline meta algorithm in realistic non-stationary environments. Simulation results demonstrate that the proposed adaptive algorithms achieve much better performance than the direct deep learning algorithm without adaptation in new environments. The meta-learning algorithm outperforms the deep transfer learning algorithm and achieves near-optimal performance. In addition, compared to the offline meta-learning algorithm, the proposed online meta-learning algorithm shows superior adaptation performance in changing environments.
