
Some Experiments on the influence of Problem Hardness in Morphological Development based Learning of Neural Controllers

Published by: Martin Naya-Varela
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Natural beings undergo a morphological development process of their bodies while they are learning and adapting to the environments they face from infancy to adulthood. In fact, this is the period when the most important learning processes, those that will support learning as adults, take place. However, in artificial systems this interaction between morphological development and learning, and its possible advantages, has seldom been considered. Along these lines, this paper seeks to provide some insights into how morphological development can be harnessed in order to facilitate learning in embodied systems facing tasks or domains that are hard to learn. In particular, we concentrate on whether morphological development can really provide any advantage when learning complex tasks and whether its relevance to learning increases as tasks become harder. To this end, we present the results of some initial experiments on the application of morphological development to learning to walk in three cases: a quadruped, a hexapod and an octopod. These results seem to confirm that, as task learning difficulty increases, applying morphological development to learning becomes more advantageous.
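As a minimal, illustrative sketch of the idea described above, the following toy loop learns a controller while the body "grows" from an infant to an adult morphology during the first part of learning. All names and numbers here (morphology_at, walk_fitness, the parameter counts) are hypothetical placeholders; the paper's experiments use physics-simulated quadruped, hexapod and octopod robots, not the toy fitness surrogate used below.

```python
# Minimal sketch of morphological development during controller learning.
# walk_fitness is a toy stand-in for a physics rollout; in the real setup it
# would build the robot at the given morphology scale, run the neural
# controller for a fixed time, and measure distance walked.
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 64           # size of the neural controller's weight vector
GENERATIONS = 200
POP_SIZE = 20
GROWTH_FRACTION = 0.5   # fraction of learning spent "growing" the body


def morphology_at(gen: int) -> float:
    """Body scale (e.g. leg length / joint range): grows linearly from an
    'infant' value to the adult value, then stays fixed."""
    growth_gens = int(GROWTH_FRACTION * GENERATIONS)
    return min(1.0, 0.3 + 0.7 * gen / max(growth_gens, 1))


def walk_fitness(weights: np.ndarray, morph_scale: float) -> float:
    """Toy surrogate for distance walked with this controller and body."""
    target = np.linspace(-1, 1, N_PARAMS) * morph_scale
    return -np.sum((weights - target) ** 2)


# Simple (1+lambda)-style search over controller weights while the body grows.
best = rng.normal(0, 0.1, N_PARAMS)
for gen in range(GENERATIONS):
    scale = morphology_at(gen)
    candidates = best + rng.normal(0, 0.05, (POP_SIZE, N_PARAMS))
    scores = [walk_fitness(c, scale) for c in candidates]
    if max(scores) > walk_fitness(best, scale):
        best = candidates[int(np.argmax(scores))]

print("final adult-morphology fitness:", walk_fitness(best, 1.0))
```

The fixed-morphology baseline the paper compares against would correspond to keeping the scale at its adult value (1.0) throughout learning.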




Read also

Morphological development is part of the way any human or animal learns. The learning process starts with the morphology at birth and progresses through changing morphologies until adulthood is reached. Biologically, this seems to facilitate learning and make it more robust. However, when this approach is transferred to robotic systems, the results found in the literature are inconsistent: morphological development does not provide a learning advantage in every case. In fact, it can lead to poorer results than when learning with a fixed morphology. In this paper we analyze some of the issues involved by means of a simple but very informative experiment in quadruped walking. From the results obtained, an initial series of insights on when and under what conditions to apply morphological development for learning is presented.
We present an algorithm for rapidly learning controllers for robotics systems. The algorithm follows the model-based reinforcement learning paradigm, and improves upon existing algorithms, namely Probabilistic learning in Control (PILCO) and a sample-based version of PILCO with neural network dynamics (Deep-PILCO). We propose training a neural network dynamics model using variational dropout with truncated Log-Normal noise. This allows us to obtain a dynamics model with calibrated uncertainty, which can be used to simulate controller executions via rollouts. We also describe a set of techniques, inspired by viewing PILCO as a recurrent neural network model, that are crucial to improve the convergence of the method. We test our method on a variety of benchmark tasks, demonstrating data-efficiency that is competitive with PILCO, while being able to optimize complex neural network controllers. Finally, we assess the performance of the algorithm for learning motor controllers for a six legged autonomous underwater vehicle. This demonstrates the potential of the algorithm for scaling up the dimensionality and dataset sizes in more complex control tasks.
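As a rough, hedged illustration of the kind of loop this abstract describes, the sketch below fits a dropout dynamics network to observed transitions and then optimizes a neural policy on Monte Carlo rollouts through that model, keeping dropout active so that simulated trajectories reflect model uncertainty. The dimensions, the saturating cost and the optimizers are placeholders; the paper's truncated log-normal variational dropout and PILCO-as-RNN convergence tricks are not reproduced, and data collection on the real system is omitted.

```python
# Sketch of a Deep-PILCO-style model-based RL loop with MC-dropout rollouts.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, HORIZON, N_ROLLOUTS = 4, 1, 25, 10

dynamics = nn.Sequential(            # predicts the next-state delta
    nn.Linear(STATE_DIM + ACT_DIM, 64), nn.ReLU(),
    nn.Dropout(p=0.1),               # kept active during rollouts (MC dropout)
    nn.Linear(64, STATE_DIM),
)
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(),
                       nn.Linear(32, ACT_DIM), nn.Tanh())

dyn_opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
pol_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


def fit_dynamics(states, actions, next_states, epochs=200):
    """Supervised fit of the dynamics model on data from the real system."""
    inp = torch.cat([states, actions], dim=-1)
    target = next_states - states
    for _ in range(epochs):
        dyn_opt.zero_grad()
        loss = ((dynamics(inp) - target) ** 2).mean()
        loss.backward()
        dyn_opt.step()


def cost(state):
    """Toy saturating cost: distance of the first state dimension from 0."""
    return 1.0 - torch.exp(-state[:, 0] ** 2)


def improve_policy(init_state, steps=50):
    """Optimize the policy on simulated rollouts through the stochastic model."""
    dynamics.train()                 # keep dropout on: each rollout samples a model
    for _ in range(steps):
        pol_opt.zero_grad()
        s = init_state.repeat(N_ROLLOUTS, 1)
        total = torch.zeros(N_ROLLOUTS)
        for _t in range(HORIZON):
            a = policy(s)
            s = s + dynamics(torch.cat([s, a], dim=-1))
            total = total + cost(s)
        total.mean().backward()      # expected cost over the sampled rollouts
        pol_opt.step()
```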
Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word big-data, we refer to this challenge as micro-data reinforcement learning. We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.
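The second strategy mentioned above, a data-driven surrogate of the expected reward, can be illustrated with a Bayesian-optimization loop over a low-dimensional policy parameterization. In this sketch, evaluate_on_robot is a stand-in for running one real episode, and the Matern kernel and expected-improvement acquisition are just common defaults, not the choices of any particular paper.

```python
# Micro-data policy search sketch: Gaussian-process surrogate of the return,
# with each new real trial chosen by maximizing expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
DIM = 3          # low-dimensional policy parameterization (e.g. DMP gains)
N_TRIALS = 12    # "a dozen" real episodes


def evaluate_on_robot(theta: np.ndarray) -> float:
    """Stand-in for one episode on the physical robot (toy noisy objective)."""
    return -np.sum((theta - 0.3) ** 2) + 0.01 * rng.normal()


def expected_improvement(gp, X_cand, best_y):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)


# A few random trials seed the surrogate; every further trial is picked by
# maximizing expected improvement over random candidate parameter vectors.
X = rng.uniform(-1, 1, (3, DIM))
y = np.array([evaluate_on_robot(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(N_TRIALS - len(X)):
    gp.fit(X, y)
    candidates = rng.uniform(-1, 1, (2000, DIM))
    ei = expected_improvement(gp, candidates, y.max())
    theta_next = candidates[int(np.argmax(ei))]
    X = np.vstack([X, theta_next])
    y = np.append(y, evaluate_on_robot(theta_next))

print("best return after", N_TRIALS, "trials:", y.max())
```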
We show how a distributed flocking controller can be synthesized using deep learning from a centralized controller which generates the trajectories of the flock. Our approach is based on supervised learning, with the centralized controller providing the training data to the learning agent, i.e., the synthesized distributed controller. We use Model Predictive Control (MPC) for the centralized controller, an approach that has been successfully demonstrated on flocking problems. MPC-based flocking controllers are high-performing but also computationally expensive. By learning a symmetric distributed neural flocking controller from a centralized MPC-based flocking controller, we achieve the best of both worlds: the neural controllers have high performance (on par with the MPC controllers) and high efficiency. Our experimental results demonstrate the sophisticated nature of the distributed controllers we learn. In particular, the neural controllers are capable of achieving myriad flocking-oriented control objectives, including flocking formation, collision avoidance, obstacle avoidance, predator avoidance, and target seeking. Moreover, they generalize the behavior seen in the training data in order to achieve these objectives in a significantly broader range of scenarios.
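A minimal sketch of the supervised-learning step described above: a single weight-shared (hence symmetric) network is trained to imitate a centralized expert from each agent's local observation, here the relative positions of its k nearest neighbours. The MPC expert itself is not reproduced; centralized_expert below is a trivial placeholder that steers agents toward the flock centroid.

```python
# Distilling a centralized controller into a shared, local-observation policy.
import torch
import torch.nn as nn

N_AGENTS, K_NEIGHBOURS, STEPS = 10, 3, 500


def centralized_expert(positions):
    """Placeholder for the MPC controller: steer every agent toward the centroid."""
    return 0.1 * (positions.mean(dim=0, keepdim=True) - positions)


def local_observation(positions):
    """Each agent observes the relative positions of its k nearest neighbours."""
    diffs = positions.unsqueeze(0) - positions.unsqueeze(1)       # (N, N, 2)
    dists = diffs.norm(dim=-1)
    dists.fill_diagonal_(float("inf"))
    idx = dists.topk(K_NEIGHBOURS, largest=False).indices         # (N, k)
    rel = torch.stack([diffs[i, idx[i]] for i in range(len(positions))])
    return rel.reshape(len(positions), -1)                        # (N, 2k)


# One shared controller used by every agent (supervised on expert labels).
controller = nn.Sequential(nn.Linear(2 * K_NEIGHBOURS, 64), nn.ReLU(),
                           nn.Linear(64, 2))
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

for _ in range(STEPS):
    positions = torch.rand(N_AGENTS, 2) * 10                      # random flock states
    target_actions = centralized_expert(positions)                # expert labels
    opt.zero_grad()
    loss = ((controller(local_observation(positions)) - target_actions) ** 2).mean()
    loss.backward()
    opt.step()
```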
The multiple traveling salesman problem (mTSP) is a well-known NP-hard problem with numerous real-world applications. In particular, this work addresses MinMax mTSP, where the objective is to minimize the max tour length (sum of Euclidean distances) among all agents. The mTSP is normally considered as a combinatorial optimization problem, but due to its computational complexity, search-based exact and heuristic algorithms become inefficient as the number of cities increases. Encouraged by the recent developments in deep reinforcement learning (dRL), this work considers the mTSP as a cooperative task and introduces a decentralized attention-based neural network method to solve the MinMax mTSP, named DAN. In DAN, agents learn fully decentralized policies to collaboratively construct a tour, by predicting the future decisions of other agents. Our model relies on the Transformer architecture, and is trained using multi-agent RL with parameter sharing, which provides natural scalability to the numbers of agents and cities. We experimentally demonstrate our model on small- to large-scale mTSP instances, which involve 50 to 1000 cities and 5 to 20 agents, and compare against state-of-the-art baselines. For small-scale problems (fewer than 100 cities), DAN is able to closely match the performance of the best solver available (OR Tools, a meta-heuristic solver) given the same computation time budget. In larger-scale instances, DAN outperforms both conventional and dRL-based solvers, while keeping computation times low, and exhibits enhanced collaboration among agents.
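The decentralized decision structure can be sketched as follows: each agent scores the remaining cities with a small attention module (query = its current city's embedding, keys = candidate city embeddings) and greedily extends its own tour, and the longest resulting tour gives the MinMax objective. This is only a structural illustration with untrained weights; DAN's full Transformer encoders, its prediction of other agents' future decisions, and the multi-agent RL training are not reproduced here.

```python
# Structural sketch of decentralized, attention-scored tour construction
# for MinMax mTSP (untrained, greedy, for illustration only).
import torch
import torch.nn as nn

N_CITIES, N_AGENTS, EMB = 50, 5, 32
torch.manual_seed(0)

cities = torch.rand(N_CITIES, 2)
embed = nn.Linear(2, EMB)
to_q, to_k = nn.Linear(EMB, EMB), nn.Linear(EMB, EMB)

city_emb = embed(cities)                              # shared city embeddings
visited = torch.zeros(N_CITIES, dtype=torch.bool)
current = torch.zeros(N_AGENTS, dtype=torch.long)     # all agents start at city 0
visited[0] = True
tour_len = torch.zeros(N_AGENTS)

with torch.no_grad():
    while not visited.all():
        for agent in range(N_AGENTS):                 # agents decide in turn
            if visited.all():
                break
            q = to_q(city_emb[current[agent]])        # agent's query
            k = to_k(city_emb)                        # keys for all cities
            scores = (k @ q) / EMB ** 0.5
            scores[visited] = float("-inf")           # mask already-visited cities
            nxt = int(scores.argmax())
            tour_len[agent] += (cities[nxt] - cities[current[agent]]).norm()
            current[agent] = nxt
            visited[nxt] = True
    # close each tour at the depot (city 0) and report the MinMax objective
    tour_len += (cities[0] - cities[current]).norm(dim=-1)

print("MinMax objective (longest tour):", float(tour_len.max()))
```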
