
Efficient Navigation of Colloidal Robots in an Unknown Environment via Deep Reinforcement Learning

Posted by Yuguang Yang
Publication date: 2019
Paper language: English





Equipping active colloidal robots with intelligence such that they can efficiently navigate in unknown complex environments could dramatically impact their use in emerging applications like precision surgery and targeted drug delivery. Here we develop a model-free deep reinforcement learning approach that can train colloidal robots to learn effective navigation strategies in unknown environments with random obstacles. We show that trained robot agents learn to make navigation decisions regarding both obstacle avoidance and travel time minimization, based solely on local sensory inputs without prior knowledge of the global environment. Such agents with biologically inspired mechanisms can acquire competitive navigation capabilities in large-scale, complex environments containing obstacles of diverse shapes, sizes, and configurations. This study illustrates the potential of artificial intelligence in engineering active colloidal systems for future applications and in constructing complex active systems with visual and learning capabilities.
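To make the setup concrete, here is a minimal sketch of the kind of model-free agent the abstract describes: a small Q-network maps a local occupancy patch sensed around the colloid to discrete heading choices, trained from a replay buffer. This is an illustrative sketch, not the authors' code; the patch size, action set, network widths, reward, and hyperparameters are all assumptions.

```python
# Minimal DQN-style agent for local-sensing navigation (illustrative only).
import random
from collections import deque

import torch
import torch.nn as nn

N_ACTIONS = 8   # assumed: 8 discrete swim directions
PATCH = 11      # assumed: 11x11 local occupancy grid sensed by the robot

class QNet(nn.Module):
    """Maps a local sensory patch to a Q-value per heading choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(PATCH * PATCH, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
# Buffer holds (state, action, reward, next_state, done) transitions,
# each element stored as a torch tensor so batches can be stacked.
buffer = deque(maxlen=50_000)

def act(patch, eps=0.1):
    """Epsilon-greedy action from local observations only."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q(patch.unsqueeze(0)).argmax(1).item()

def train_step(batch_size=64, gamma=0.99):
    if len(buffer) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(buffer, batch_size)))
    target = r + gamma * (1 - done) * q_target(s2).max(1).values
    pred = q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target.detach())
    opt.zero_grad(); loss.backward(); opt.step()
    # Periodically: q_target.load_state_dict(q.state_dict())
```

A plausible reward under these assumptions would combine a step penalty (to minimize travel time) with a large negative reward for obstacle collisions, which is consistent with the obstacle-avoidance and travel-time objectives the abstract names.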




Read also

Efficient navigation and precise localization of Brownian micro/nano self-propelled motor particles within complex landscapes could enable future high-tech applications involving for example drug delivery, precision surgery, oil recovery, and environmental remediation. Here we employ a model-free deep reinforcement learning algorithm based on bio-inspired neural networks to enable different types of micro/nano motors to be continuously controlled to carry out complex navigation and localization tasks. Micro/nano motors with tunable self-propelling speeds, orientations, or both are found to exhibit strikingly different dynamics. In particular, distinct control strategies are required to achieve effective navigation in free space and obstacle environments, as well as under time constraints. Our findings provide fundamental insights into active dynamics of Brownian particles controlled using artificial intelligence and could guide the design of motor and robot control systems with diverse application requirements.
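For readers unfamiliar with the particle models involved, here is a minimal sketch of the dynamics such a controller typically acts on: an overdamped active Brownian particle whose self-propulsion speed and turning rate serve as the control inputs. The parameter values and function name are illustrative assumptions, not taken from the paper.

```python
# One Euler-Maruyama step of an active Brownian particle (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def abp_step(x, y, theta, v, omega, dt=1e-3, D_t=0.2, D_r=1.0):
    """Advance the particle by one time step dt.

    v     -- commanded self-propulsion speed (control input)
    omega -- commanded turning rate (control input)
    D_t   -- translational diffusion coefficient
    D_r   -- rotational diffusion coefficient
    """
    x += v * np.cos(theta) * dt + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    y += v * np.sin(theta) * dt + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    theta += omega * dt + np.sqrt(2 * D_r * dt) * rng.standard_normal()
    return x, y, theta
```

A speed-only motor would hold `omega = 0` and vary `v`; an orientation-only motor would do the reverse, which matches the distinct control regimes the abstract contrasts.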
Designing intelligent microrobots that can autonomously navigate and perform instructed routines in blood vessels, a complex and crowded environment with obstacles including dense cells, different flow patterns and diverse vascular geometries, can offer enormous possibilities in biomedical applications. Here we report a hierarchical control scheme that enables a microrobot to efficiently navigate and execute customizable routines in blood vessels. The control scheme consists of two highly decoupled components: a high-level controller setting short-ranged dynamic targets to guide the microrobot to follow a preset path and a low-level deep reinforcement learning (DRL) controller responsible for maneuvering microrobots towards these dynamic guiding targets. The proposed DRL controller utilizes three-dimensional (3D) convolutional neural networks and is capable of learning control policy directly from a coarse raw 3D sensory input. In blood vessels with rich configurations of red blood cells and vessel geometry, the control scheme enables efficient navigation and faithful execution of instructed routines. The control scheme is also robust to adversarial perturbations including blood flows. This study provides a proof-of-principle for designing data-driven control systems for autonomous navigation in vascular networks; it illustrates the great potential of artificial intelligence for broad biomedical applications such as targeted drug delivery, blood clot clearing, precision surgery, disease diagnosis, and more.
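The following is a rough sketch of the two decoupled pieces this abstract describes: a high-level controller that emits short-ranged guiding targets along a preset path, and a low-level policy built on 3D convolutions that reads a coarse 3D sensory grid plus the current target. The grid size, layer widths, action count, and lookahead rule are invented for illustration.

```python
# Hierarchical control sketch: high-level target setter + 3D-CNN policy.
import torch
import torch.nn as nn

class LowLevelPolicy(nn.Module):
    """3D-CNN policy: coarse 3D occupancy grid + relative target -> action logits."""
    def __init__(self, n_actions=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16^3 -> 8^3
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 4^3
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 * 4 + 3, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, grid, target_rel):
        # grid: (B, 1, 16, 16, 16) occupancy; target_rel: (B, 3) vector
        # from the microrobot to the current guiding target.
        return self.head(torch.cat([self.conv(grid), target_rel], dim=1))

def next_guiding_target(path, position, lookahead=5.0):
    """High-level controller: farthest preset-path point within reach."""
    dists = torch.linalg.norm(path - position, dim=1)
    reachable = (dists <= lookahead).nonzero()
    idx = reachable[-1, 0] if len(reachable) else dists.argmin()
    return path[idx]
```

The decoupling means the DRL policy only ever has to solve the local problem of reaching a nearby target, while path-level decisions stay in the cheap high-level controller.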
Deep reinforcement learning is an emerging machine learning approach which can teach a computer to learn from its actions and rewards, similar to the way humans learn from experience. It offers many advantages in automating decision processes to navigate large parameter spaces. This paper proposes a novel approach to the efficient measurement of quantum devices based on deep reinforcement learning. We focus on double quantum dot devices, demonstrating the fully automatic identification of specific transport features called bias triangles. Measurements targeting these features are difficult to automate, since bias triangles are found in otherwise featureless regions of the parameter space. Our algorithm identifies bias triangles in a mean time of less than 30 minutes, and sometimes as little as 1 minute. This approach, based on dueling deep Q-networks, can be adapted to a broad range of devices and target transport features. This is a crucial demonstration of the utility of deep reinforcement learning for decision making in the measurement and operation of quantum devices.
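For reference, here is a minimal sketch of the dueling Q-network architecture named above (the value/advantage decomposition), not the authors' measurement code. The input size, convolution widths, and action count are placeholder assumptions for a small 2D stability-diagram patch and a handful of gate-voltage moves.

```python
# Dueling DQN: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * 6 * 6  # for an assumed 32x32 input patch
        self.value = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.adv = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, x):
        f = self.features(x)
        v, a = self.value(f), self.adv(f)
        # Subtracting the mean advantage makes the V/A split identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

Separating state value from per-action advantage helps exactly in the situation the abstract describes: large, mostly featureless regions of parameter space where most actions are equally (un)informative.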
Transfer Learning (TL) has shown great potential to accelerate Reinforcement Learning (RL) by leveraging prior knowledge from past learned policies of relevant tasks. Existing transfer approaches either explicitly compute the similarity between tasks or select appropriate source policies to provide guided exploration for the target task. However, a way to directly optimize the target policy by alternately utilizing knowledge from appropriate source policies, without explicitly measuring the similarity, is currently missing. In this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL by taking advantage of this idea. Our framework learns when and which source policy is best to reuse for the target policy, and when to terminate it, by modeling multi-policy transfer as an option learning problem. PTF can be easily combined with existing deep RL approaches. Experimental results show it significantly accelerates the learning process and surpasses state-of-the-art policy transfer methods in terms of learning efficiency and final performance in both discrete and continuous action spaces.
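Below is a schematic sketch of the option-style transfer idea described above: the agent keeps a library of source policies, scores which one to reuse via learned option values, and learns a termination probability that ends reuse when the active source policy stops helping. Everything here is an illustrative assumption about the mechanism, not the framework's actual API.

```python
# Option-style source-policy selection and termination (illustrative).
import torch
import torch.nn as nn

class OptionSelector(nn.Module):
    """Scores each source policy (option) for the current state and
    predicts a termination probability for the active option."""
    def __init__(self, state_dim, n_sources):
        super().__init__()
        self.option_values = nn.Linear(state_dim, n_sources)
        self.termination = nn.Sequential(
            nn.Linear(state_dim, n_sources), nn.Sigmoid())

    def pick(self, state, active=None):
        q = self.option_values(state)
        if active is not None:
            beta = self.termination(state)[active]
            if torch.rand(()) >= beta:   # keep reusing the active source policy
                return active
        return q.argmax().item()         # otherwise switch to the best option
```

The chosen source policy would then act (or regularize the target policy's updates) until the termination head fires, at which point selection repeats; both heads are trained alongside the target policy.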
This paper is an initial endeavor to bridge the gap between powerful Deep Reinforcement Learning methodologies and the problem of exploration/coverage of unknown terrains. Within this scope, MarsExplorer, an openai-gym compatible environment tailored to exploration/coverage of unknown areas, is presented. MarsExplorer translates the original robotics problem into a Reinforcement Learning setup that various off-the-shelf algorithms can tackle. Any learned policy can be straightforwardly applied to a robotic platform without an elaborate simulation model of the robot's dynamics or a separate learning/adaptation phase. One of its core features is the controllable multi-dimensional procedural generation of terrains, which is key to producing policies with strong generalization capabilities. Four different state-of-the-art RL algorithms (A3C, PPO, Rainbow, and SAC) are trained on the MarsExplorer environment, and their results are evaluated against average human-level performance. In the follow-up experimental analysis, the effect of the multi-dimensional difficulty setting on the learning capabilities of the best-performing algorithm (PPO) is analyzed. A milestone result is the generation of an exploration policy that follows the Hilbert curve without providing this information to the environment or directly or indirectly rewarding Hilbert-curve-like trajectories. The experimental analysis concludes by comparing the PPO-learned policy with frontier-based exploration for extended terrain sizes. The source code can be found at: https://github.com/dimikout3/GeneralExplorationPolicy.
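Since MarsExplorer is openai-gym compatible, interacting with it should follow the standard gym loop sketched below. The environment id string is a guess; consult the linked repository's README for the actual registration name, configuration options, and supported gym version (the sketch assumes the older 4-tuple `step` API).

```python
# Generic gym interaction loop (environment id is an assumption).
import gym

env = gym.make("MarsExplorer-v0")  # assumed id; see the repo's README
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # replace with a trained PPO policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```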

