
Bringing A Robot Simulator to the SCAMP Vision System

Published by: Yanan Liu
Publication date: 2021
Research field: Informatics Engineering
Language: English

This work develops and demonstrates the integration of the SCAMP-5d vision system into the CoppeliaSim robot simulator, creating a semi-simulated environment. By configuring a camera in the simulator and setting up communication with the SCAMP python host through the remote API, sensor images from the simulator can be transferred to the SCAMP vision sensor, where on-sensor image processing such as CNN inference can be performed. The SCAMP output is then fed back into CoppeliaSim. The proposed platform integration enables rapid prototyping and validation of SCAMP algorithms for robotic systems. We demonstrate a car localisation and tracking task on this semi-simulated platform, with CNN inference performed on SCAMP to command the motion of a robot. We have made this platform available online.
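As a rough illustration of the loop the abstract describes, the sketch below polls a simulated camera through the CoppeliaSim legacy Python remote API and hands each frame to the SCAMP side. The object names ('Vision_sensor', 'left_motor') and the scamp_infer() call are hypothetical placeholders; the abstract does not detail the SCAMP python host interface.

import numpy as np
import sim  # CoppeliaSim legacy remote API bindings (sim.py / simConst.py)

def scamp_infer(frame):
    """Hypothetical placeholder: send the frame to the SCAMP-5d vision
    system for on-sensor CNN inference and return a motion command."""
    raise NotImplementedError

client = sim.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
assert client != -1, 'CoppeliaSim remote API server not reachable'

_, cam = sim.simxGetObjectHandle(client, 'Vision_sensor', sim.simx_opmode_blocking)
_, motor = sim.simxGetObjectHandle(client, 'left_motor', sim.simx_opmode_blocking)

# Prime the image stream once, then read from the buffer every cycle.
sim.simxGetVisionSensorImage(client, cam, 0, sim.simx_opmode_streaming)
for _ in range(1000):
    rc, res, img = sim.simxGetVisionSensorImage(client, cam, 0, sim.simx_opmode_buffer)
    if rc != sim.simx_return_ok:
        continue  # stream not ready yet
    # The API returns signed bytes; recast into a uint8 RGB frame.
    frame = np.array(img, dtype=np.int16).astype(np.uint8).reshape(res[1], res[0], 3)
    command = scamp_infer(frame)  # on-sensor processing happens off-simulator
    sim.simxSetJointTargetVelocity(client, motor, command, sim.simx_opmode_oneshot)

sim.simxFinish(client)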


Read also

Currently, mobile robots are developing rapidly and are finding numerous applications in industry. However, a number of problems remain in their practical use, such as the need for expensive hardware and high power consumption levels. In this study, we propose a navigation system that is operable on a low-end computer with an RGB-D camera, together with a mobile robot platform, for the operation of an integrated autonomous driving system. The proposed system requires neither LiDARs nor a GPU. Our raw-depth-image ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost commercial single-board computer with integrated SLAM, global path planning, and motion planning. Running sensor data processing and other autonomous driving functions simultaneously, our navigation method performs rapidly, at a refresh rate of 18 Hz for control commands, whereas other systems have slower refresh rates. Our method outperforms current state-of-the-art navigation approaches, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in a residential lobby.
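The abstract does not spell out the ground-segmentation step, so the following is only a generic sketch of how a traversability mask can be pulled from a raw depth image under a pinhole model with a level camera; the intrinsics (fx, fy, cx, cy), camera height, and tolerance are illustrative assumptions, not values from the paper.

import numpy as np

def ground_mask(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                cam_height=0.30, tol=0.05):
    """depth: HxW array in metres -> boolean mask of ground (traversable) pixels."""
    h, w = depth.shape
    v = np.arange(h, dtype=np.float64)[:, None]  # pixel row indices
    # Downward coordinate of each back-projected point in the camera frame.
    y = (v - cy) * depth / fy
    # Ground points lie roughly cam_height below a level, forward-facing camera.
    return np.abs(y - cam_height) < tol

A real system would additionally compensate for camera tilt and project the mask into a 2D grid for the planner.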
Nasopharyngeal (NP) swab sampling is an effective approach for the diagnosis of coronavirus disease 2019 (COVID-19). Medical staff carrying out the task of collecting NP specimens are in close contact with the suspected patient, thereby posing a high risk of cross-infection. We propose a low-cost miniature robot that can be easily assembled and remotely controlled. The system includes an active end-effector, a passive positioning arm, and a detachable swab gripper with integrated force-sensing capability. The cost of the materials for building this robot is 55 USD and the total weight of the functional part is 0.23 kg. The design of the force-sensing swab gripper was justified using finite element (FE) modelling, and the performance of the robot was validated with a simulation phantom and three pig noses. FE analysis indicated a displacement of 0.5 mm magnitude in the gripper's sensing beam, which meets the ideal detecting range of the optoelectronic sensor. Studies on both the phantom and the pig noses demonstrated the successful operation of the robot during the collection task. The average forces were found to be 0.35 N and 0.85 N, respectively. It is concluded that the proposed robot is promising and could be further developed for use in vivo.
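As a back-of-envelope cross-check of the reported beam displacement, one can use the standard Euler-Bernoulli tip-deflection formula for a cantilever, delta = F*L^3 / (3*E*I); the paper used FE modelling instead, and every dimension below is an illustrative assumption rather than a value from the paper.

# Cantilever tip deflection: delta = F * L^3 / (3 * E * I), with I = b*h^3/12.
E = 2.0e9              # Young's modulus of a printable plastic, Pa (assumed)
L = 0.020              # sensing-beam length, m (assumed)
b, h = 0.005, 0.0012   # rectangular cross-section width/thickness, m (assumed)
I = b * h**3 / 12      # second moment of area

for F in (0.35, 0.85):                  # average forces reported above, N
    delta = F * L**3 / (3 * E * I)
    print(f'F = {F:.2f} N -> tip deflection ~ {delta * 1e3:.2f} mm')

With these assumed dimensions the deflection lands in the same sub-millimetre-to-millimetre range as the FE result, which is the kind of sanity check the formula affords.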
PyRep is a toolkit for robot learning research, built on top of the virtual robotics experimentation platform (V-REP). Through a series of modifications and additions, we have created a tailored version of V-REP built with robot learning in mind. The new PyRep toolkit offers three improvements: (1) a simple and flexible API for robot control and scene manipulation, (2) a new rendering engine, and (3) speed boosts upwards of 10,000x in comparison to the previous Python Remote API. With these improvements, we believe PyRep is the ideal toolkit to facilitate rapid prototyping of learning algorithms in the areas of reinforcement learning, imitation learning, state estimation, mapping, and computer vision.
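A minimal PyRep session illustrating point (1), the control and scene-manipulation API, looks roughly like this; the scene file name is illustrative.

from pyrep import PyRep
from pyrep.robots.arms.panda import Panda

pr = PyRep()
pr.launch('scene_with_panda.ttt', headless=True)  # illustrative scene file
pr.start()                                        # begin the simulation

arm = Panda()                               # wraps the Panda model in the scene
arm.set_joint_target_velocities([0.1] * 7)  # simple joint-velocity command
for _ in range(100):
    pr.step()                               # advance physics by one timestep

pr.stop()
pr.shutdown()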
Humans, in contrast to robots, are excellent at performing fine manipulation tasks owing to their remarkable dexterity and sensorimotor organization. Enabling robots to acquire such capabilities necessitates a framework that not only replicates human behaviour but also integrates multi-sensory information for autonomous object interaction. To address such limitations, this research proposes to augment the previously developed kernelized synergies framework with visual perception so that it automatically adapts to unknown objects. The kernelized synergies, inspired by humans, retain the same reduced subspace for object grasping and manipulation. To detect objects in the scene, a simplified perception pipeline is used that leverages the RANSAC algorithm with Euclidean clustering and an SVM for object segmentation and recognition, respectively. Further, a comparative analysis of kernelized synergies with other state-of-the-art approaches is made to confirm their flexibility and effectiveness on robotic manipulation tasks. The experiments conducted on the robot hand confirm the robustness of the modified kernelized synergies framework against uncertainties related to the perception of the environment.
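A sketch of the described perception pipeline (RANSAC plane removal, clustering, SVM recognition) might look as follows, with Open3D's DBSCAN standing in for Euclidean clustering; extract_features and the pre-trained svm classifier are hypothetical placeholders.

import numpy as np
import open3d as o3d
from sklearn.svm import SVC

def recognise_objects(pcd, svm: SVC, extract_features):
    # RANSAC: remove the dominant supporting plane (table/floor).
    _, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)
    # Cluster the remaining points into object candidates (-1 marks noise).
    labels = np.asarray(objects.cluster_dbscan(eps=0.02, min_points=20))
    detections = []
    for k in range(labels.max() + 1):
        cluster = objects.select_by_index(np.flatnonzero(labels == k))
        detections.append(svm.predict([extract_features(cluster)])[0])
    return detections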
This paper proposes a method to navigate a mobile robot by estimating its state over a number of distributed sensor networks (DSNs) such that it can successively accomplish a sequence of tasks, i.e., its state enters each targeted set and stays inside for no less than the desired time, under a resource-aware, time-efficient, and computation- and communication-constrained setting. We propose a new robot state estimation and navigation architecture, which integrates an event-triggered task-switching feedback controller for the robot and a two-time-scale distributed state estimator for each sensor. The architecture has three major advantages over existing approaches. First, in each task only one DSN is active for sensing and estimating the robot state, and for different tasks the robot can switch the active DSN by taking resource saving and system performance into account. Second, the robot only needs to communicate with one active sensor at each time to obtain its state information from the active DSN. Third, no online optimization is required. With the controller, the robot is able to accomplish a task by following a reference trajectory and switch to the next task when an event-triggered condition is fulfilled. With the estimator, each active sensor is able to estimate the robot state. Under proper conditions, we prove that the state estimation error and the trajectory tracking deviation are upper bounded by two time-varying sequences, respectively, which play an essential role in the event-triggered condition. Furthermore, we find a sufficient condition for accomplishing a task and provide an upper bound on the running time for the task. Numerical simulations of indoor robot localization and navigation are provided to validate the proposed architecture.
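Stripped of the error-bound machinery, the task-switching logic the abstract describes reduces to a dwell-time rule: move to the next task once the estimated state has stayed inside the current target set long enough. The sketch below encodes only that rule; the paper's actual trigger also involves the two time-varying bound sequences, and estimate_state, track_reference, and target_set.contains are hypothetical placeholders.

def run_tasks(tasks, estimate_state, track_reference, dt=0.05):
    """tasks: list of (target_set, dwell_time) pairs, executed in order."""
    for target_set, dwell_time in tasks:
        time_inside = 0.0
        while time_inside < dwell_time:
            x_hat = estimate_state()            # from the currently active DSN
            track_reference(target_set, x_hat)  # feedback-controller step
            # Reset the clock whenever the estimate leaves the target set.
            time_inside = time_inside + dt if target_set.contains(x_hat) else 0.0
        # Event triggered: switch the active DSN and move to the next task.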