
Web Standards as Standard Pieces in Robotics

Posted by: Sekou Remy
Publication date: 2013
Research field: Informatics Engineering
Paper language: English
Author: Sekou L. Remy





Modern robotics often involves the use of web technologies as a means to cope with the complexity of design and operation. Many of these technologies have been formalized into standards, which are often avoided by those in robotics and controls because of a sometimes warranted fear that the web is too slow or too uncertain for meaningful control applications. In this work we argue that while web technologies may not be applicable for all control, they should not be dismissed outright, because they can provide critical help with system integration. Web technologies have also advanced significantly over the past decade. We present the details of an application of a web server to perform open- and closed-loop control (between 3 Hz and 1 kHz) over a variety of different network topologies. In our study we also consider the impact of using a web browser to implement control of the plant. Our results confirm that meaningful control can be performed using web technologies, and they also highlight design choices that can limit their applicability.
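The abstract describes closing a control loop through a web server at rates between 3 Hz and 1 kHz. As a rough illustration only (the paper's actual server, endpoints, and message format are not given here), the sketch below shows a minimal Python HTTP endpoint that computes a PID update each time the plant, or a browser acting as the plant interface, posts a measurement; the gains, setpoint, port, and JSON fields are assumptions made for the example.

```python
# Illustrative sketch, not the paper's implementation: a minimal HTTP control
# endpoint. The plant (or a browser) POSTs its measured output and receives
# the next control input in the response.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

KP, KI, KD = 2.0, 0.5, 0.1   # example PID gains (assumed)
SETPOINT = 1.0               # example reference value (assumed)

class PIDState:
    integral = 0.0
    prev_error = 0.0

class ControlHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the plant's reported sample, e.g. {"y": 0.8, "dt": 0.01}.
        length = int(self.headers.get("Content-Length", 0))
        sample = json.loads(self.rfile.read(length))
        y, dt = sample["y"], sample.get("dt", 0.01)

        # Plain PID update; a real deployment would add anti-windup, etc.
        error = SETPOINT - y
        PIDState.integral += error * dt
        derivative = (error - PIDState.prev_error) / dt
        PIDState.prev_error = error
        u = KP * error + KI * PIDState.integral + KD * derivative

        body = json.dumps({"u": u}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each request/response round trip is one loop iteration, so the
    # achievable control rate is bounded by the network path between plant
    # and server, which is the design consideration the paper examines
    # across topologies.
    HTTPServer(("0.0.0.0", 8000), ControlHandler).serve_forever()
```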


Read also

Miniaturization and cost, two of the main attractive factors of swarm robotics, have motivated its use as a solution for object-collecting tasks, search & rescue missions, and other applications. However, in the current literature only a few papers consider energy allocation efficiency within a swarm. Generally, robots recharge to their maximum level every time, unconditionally, and do not incorporate estimates of the energy needed for their next task. In this paper we present an energy efficiency maximization method that minimizes the overall energy cost within a swarm while simultaneously maximizing swarm performance on an object-gathering task. The method utilizes dynamic thresholds for upper and lower battery limits. This method has also been shown to improve the efficiency of existing energy management methods.
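The abstract does not give the authors' threshold update rule, so the sketch below is only a plausible illustration of the idea of dynamic upper and lower battery limits driven by an estimate of the next task's energy need; the function names, reserve margin, and update formula are assumptions.

```python
# Illustrative sketch, not the authors' method: a recharge decision rule with
# dynamic upper/lower battery thresholds instead of always charging to 100%.
def update_thresholds(task_energy_estimate, reserve=0.1, capacity=1.0):
    """Lower threshold tracks the energy the next task is expected to need;
    the upper threshold caps charging below full capacity when possible."""
    lower = min(capacity, task_energy_estimate + reserve)
    upper = min(capacity, lower + 2 * task_energy_estimate)
    return lower, upper

def should_recharge(battery_level, lower):
    # Recharge only when the battery cannot cover the estimated next task.
    return battery_level < lower

def charge_target(upper):
    # Stop charging at the dynamic upper limit instead of full capacity.
    return upper

# Example: a robot expecting a cheap task keeps a low charge target,
# freeing the charger for robots with more demanding tasks.
lo, hi = update_thresholds(task_energy_estimate=0.15)
print(should_recharge(0.12, lo), charge_target(hi))
```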
Ensuring that autonomous space robot control software behaves as it should is crucial, particularly as software failure in space often equates to mission failure and could potentially endanger nearby astronauts and costly equipment. To minimise mission failure caused by software errors, we can utilise a variety of tools and techniques to verify that the software behaves as intended. In particular, distinct nodes in a robotic system often require different verification techniques to ensure that they behave as expected. This paper introduces a method for integrating the various verification techniques that are applied to robotic software, via a First-Order Logic (FOL) specification that captures each node's assumptions and guarantees. These FOL specifications are then used to guide the verification of the individual nodes, be it by testing or the use of a formal method. We also outline a way of measuring our confidence in the verification of the entire system in terms of the verification techniques used.
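As a loose illustration of the assume-guarantee idea, not the paper's FOL machinery, the sketch below records each node's assumptions and guarantees as predicate strings and checks whether an upstream node's guarantees cover a downstream node's assumptions; the node names and predicates are invented for the example.

```python
# Illustrative sketch: per-node assume/guarantee contracts and a simple
# compatibility check between connected nodes.
from dataclasses import dataclass, field

@dataclass
class NodeContract:
    name: str
    assumes: set = field(default_factory=set)     # predicates the node relies on
    guarantees: set = field(default_factory=set)  # predicates it promises

def undischarged_assumptions(upstream: NodeContract, downstream: NodeContract) -> set:
    """Return downstream assumptions not covered by upstream guarantees."""
    return downstream.assumes - upstream.guarantees

planner = NodeContract(
    "planner",
    assumes={"pose_estimate_fresh(t)"},
    guarantees={"waypoint_within_bounds(w)"},
)
controller = NodeContract(
    "controller",
    assumes={"waypoint_within_bounds(w)"},
    guarantees={"thrust_within_limits(u)"},
)

# Empty set: the planner's guarantees cover what the controller assumes,
# so each node can then be verified against its own contract separately.
print(undischarged_assumptions(planner, controller))
```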
Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions. Fortunately, computational cognitive science offers insight into human decision-making using tools that will be familiar to those with backgrounds in optimization and control (e.g., probability theory, statistical machine learning, and reinforcement learning). Here, we review some of this work, focusing on how cognitive science can provide forward models of human decision-making and inverse models of how humans think about others' decision-making. We highlight relevant recent developments, including approaches that synthesize black-box and theory-driven modeling, accounts that recast heuristics and biases as forms of bounded optimality, and models that characterize human theory of mind and communication in decision-theoretic terms. In doing so, we aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
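One common forward model in this literature is Boltzmann (softmax) noisy rationality, in which a person chooses higher-utility options more often but not deterministically. The sketch below is a generic illustration, not taken from the paper; the utilities and the rationality parameter beta are assumptions.

```python
# Illustrative sketch: Boltzmann-rational choice, P(a) proportional to
# exp(beta * U(a)); beta -> 0 gives uniform choice, beta -> inf gives argmax.
import numpy as np

def boltzmann_choice_probs(utilities, beta=2.0):
    z = beta * np.asarray(utilities, dtype=float)
    z -= z.max()                # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# A person choosing among three routes with (assumed) utilities 1.0, 0.8, 0.2:
print(boltzmann_choice_probs([1.0, 0.8, 0.2]))
```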
Many real-world control problems involve both discrete decision variables - such as the choice of control modes, gear switching or digital outputs - and continuous decision variables - such as velocity setpoints, control gains or analogue outputs. However, the corresponding optimal control or reinforcement learning problem is commonly approximated with fully continuous or fully discrete action spaces. These simplifications aim at tailoring the problem to a particular algorithm or solver which may only support one type of action space. Alternatively, expert heuristics are used to remove discrete actions from an otherwise continuous space. In contrast, we propose to treat hybrid problems in their native form by solving them with hybrid reinforcement learning, which optimizes for discrete and continuous actions simultaneously. In our experiments, we first demonstrate that the proposed approach efficiently solves such natively hybrid reinforcement learning problems. We then show, both in simulation and on robotic hardware, the benefits of removing possibly imperfect expert-designed heuristics. Lastly, hybrid reinforcement learning encourages us to rethink problem definitions. We propose reformulating control problems, e.g. by adding meta actions, to improve exploration or reduce mechanical wear and tear.
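As an illustration of what a native hybrid action looks like (not the paper's algorithm or network architecture), the sketch below samples a discrete mode and continuous setpoints jointly from simple linear-Gaussian heads; the dimensions, observation, and mode interpretation are assumptions.

```python
# Illustrative sketch: a policy with one categorical head for the discrete
# part of the action and one Gaussian head for the continuous part.
import numpy as np

class HybridPolicy:
    def __init__(self, n_modes=3, n_continuous=2, obs_dim=4, seed=0):
        self.rng = np.random.default_rng(seed)
        # Stand-ins for learned function approximators.
        self.w_discrete = self.rng.normal(size=(obs_dim, n_modes))
        self.w_mean = self.rng.normal(size=(obs_dim, n_continuous))
        self.log_std = np.zeros(n_continuous)

    def act(self, obs):
        obs = np.asarray(obs, dtype=float)
        # Discrete head: categorical over modes (e.g. gear selection).
        logits = obs @ self.w_discrete
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        mode = int(self.rng.choice(len(probs), p=probs))
        # Continuous head: Gaussian over setpoints (e.g. velocity command).
        mean = obs @ self.w_mean
        setpoints = self.rng.normal(mean, np.exp(self.log_std))
        return {"mode": mode, "setpoints": setpoints}

# Both action components are sampled jointly each step, rather than
# discretizing one or fixing the other with an expert heuristic.
print(HybridPolicy().act([0.1, -0.3, 0.5, 0.0]))
```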
Deep reinforcement learning has recently seen huge success across multiple areas of the robotics domain. Owing to the limitations of gathering real-world data, i.e., sample inefficiency and the cost of collecting it, simulation environments are utilized for training the different agents. This not only aids in providing a potentially infinite data source, but also alleviates safety concerns with real robots. Nonetheless, the gap between the simulated and real worlds degrades the performance of the policies once the models are transferred to real robots. Multiple research efforts are therefore now being directed towards closing this sim-to-real gap and accomplishing more efficient policy transfer. Recent years have seen the emergence of multiple methods applicable to different domains, but there is a lack, to the best of our knowledge, of a comprehensive review summarizing and putting into context the different methods. In this survey paper, we cover the fundamental background behind sim-to-real transfer in deep reinforcement learning and overview the main methods being utilized at the moment: domain randomization, domain adaptation, imitation learning, meta-learning and knowledge distillation. We categorize some of the most relevant recent works, and outline the main application scenarios. Finally, we discuss the main opportunities and challenges of the different approaches and point to the most promising directions.
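As a small illustration of one of the surveyed techniques, domain randomization, the sketch below resamples simulator parameters each episode so a policy never trains against a single fixed simulator; the parameter names, ranges, and the environment factory are assumptions, not from the survey.

```python
# Illustrative sketch: domain randomization by drawing physics/sensor
# parameters from broad ranges at the start of every training episode.
import random

def sample_sim_params():
    """Sample a new simulator configuration so the policy is trained on a
    distribution of simulators rather than one fixed instance."""
    return {
        "friction": random.uniform(0.4, 1.2),
        "link_mass_scale": random.uniform(0.8, 1.2),
        "sensor_noise_std": random.uniform(0.0, 0.02),
        "actuation_delay_s": random.uniform(0.0, 0.03),
    }

for episode in range(3):
    params = sample_sim_params()
    # env = make_simulated_env(**params)   # hypothetical environment factory
    print(f"episode {episode}: {params}")
```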