
Dynamical System Segmentation for Information Measures in Motion

Published by: Todd Murphey
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Motions carry information about the underlying task being executed. Previous work in human motion analysis suggests that complex motions may result from the composition of fundamental submovements called movemes. The existence of finite structure in motion motivates information-theoretic approaches to motion analysis and robotic assistance. We define task embodiment as the amount of task information encoded in an agent's motions. By decoding task-specific information embedded in motion, we can use task embodiment to create detailed performance assessments. We extract an alphabet of behaviors comprising a motion without a priori knowledge using a novel algorithm, which we call dynamical system segmentation. For a given task, we specify an optimal agent and compute an alphabet of behaviors representative of the task. We identify these behaviors in data from agent executions and compare their relative frequencies against those of the optimal agent using the Kullback-Leibler divergence. We validate this approach using a dataset of human subjects (n=53) performing a dynamic task, and under this measure find that individuals receiving assistance better embody the task. Moreover, we find that task embodiment is a better predictor of assistance than integrated mean squared error.
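The frequency-comparison step lends itself to a minimal sketch, assuming behaviors have already been identified and counted per execution; the function name, the smoothing constant, and the direction of the divergence below are illustrative choices, not details taken from the paper.

import numpy as np

def task_embodiment_divergence(agent_counts, optimal_counts, eps=1e-12):
    # Normalize occurrence counts of each behavior in the shared alphabet
    # into relative frequencies; eps guards against empty bins.
    p = np.asarray(agent_counts, dtype=float) + eps
    q = np.asarray(optimal_counts, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    # D_KL(agent || optimal): lower values mean the agent's mix of
    # behaviors is closer to the optimal agent's, i.e. better embodiment.
    return float(np.sum(p * np.log(p / q)))

# Example with a three-behavior alphabet.
print(task_embodiment_divergence([12, 30, 8], [15, 28, 7]))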




Read also

Xiao Gao, Miao Li, Xiaohui Xiao (2021)
Dynamical systems have been widely used for encoding trajectories from human demonstration; they are inherently adaptable to dynamically changing environments and robust to perturbations. In this paper we propose a framework to learn a dynamical system that couples position and orientation based on a diffeomorphism. Unlike other methods, it can realise synchronization between position and orientation over the whole trajectory. Online grasping experiments are carried out to prove its effectiveness and online adaptability.
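As a loose illustration only (not the paper's diffeomorphism-based method), the sketch below rolls out a first-order dynamical system for position and slerps the orientation using the position's own progress as a shared phase, so the two arrive at the goal together; all gains and quaternion values are made up.

import numpy as np

def slerp(q0, q1, s):
    # Spherical interpolation between unit quaternions q0 and q1.
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0.0:                       # take the shorter rotation path
        q1, dot = -q1, -dot
    theta = np.arccos(dot)
    if theta < 1e-6:
        return q1
    return (np.sin((1 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def rollout(x0, q0, x_goal, q_goal, k=4.0, dt=0.01, steps=500):
    # First-order DS x_dot = -k (x - x_goal); the orientation is slerped
    # with the position's own progress so both reach the goal together.
    x, q, d0 = x0.astype(float), q0, np.linalg.norm(x0 - x_goal)
    for _ in range(steps):
        x = x + dt * (-k * (x - x_goal))
        s = 1.0 - np.linalg.norm(x - x_goal) / d0   # shared phase in [0, 1]
        q = slerp(q0, q_goal, s)
    return x, q

x, q = rollout(np.array([0.3, 0.2, 0.5]), np.array([1.0, 0.0, 0.0, 0.0]),
               np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0]))
print(x, q)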
This paper introduces a novel motion planning algorithm, incrementally stochastic and accelerated gradient information mixed optimization (iSAGO), for robotic manipulators in a narrow workspace. Primarily, we propose the overall scheme of iSAGO, integrating accelerated and stochastic gradient information for efficient descent in the penalty method. In the stochastic part, we generate the adaptive stochastic moment via random selection of collision checkboxes, interval time series, and penalty factors based on Adam to solve the body-obstacle stuck case. Due to the slow convergence of STOMP, we integrate the accelerated gradient and stimulate the descent rate in a Lipschitz-constant reestimation framework. Moreover, we introduce the Bayesian tree inference (BTI) method, transforming whole-trajectory optimization (SAGO) into incremental sub-trajectory optimization (iSAGO) to improve computational efficiency and the success rate. Finally, we demonstrate the key coefficient tuning, benchmark iSAGO against other planners (CHOMP, GPMP2, TrajOpt, STOMP, and RRT-Connect), and implement iSAGO on an AUBO-i5 arm in a storage shelf. The results show that iSAGO attains the highest success rate with moderate solving efficiency.
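The Adam-based moment generation can be pictured with a generic Adam step over trajectory waypoints. The sketch below is textbook Adam, not the authors' implementation: the placeholder gradient stands in for the stochastic penalty gradient that iSAGO derives from randomly selected collision checkboxes.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update over a flattened trajectory theta; m and v are the
    # running first- and second-moment estimates, t is the 1-based step.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)             # bias-corrected moments
    v_hat = v / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: drive a 10-waypoint 1-D trajectory toward zero "penalty".
theta = np.random.randn(10)
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 201):
    grad = 2 * theta                    # placeholder gradient of sum(theta**2)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(np.abs(theta).max())              # much smaller than at the start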
This paper investigates the online motion coordination problem for a group of mobile robots moving in a shared workspace. Based on the realistic assumptions that each robot is subject to both velocity and input constraints and has only a local view and local information, a fully distributed multi-robot motion coordination strategy is proposed. Building on top of a cell decomposition, a conflict detection algorithm is presented first. Then, a rule is proposed to dynamically assign a planning order to each pair of neighboring robots; this rule is deadlock-free. Finally, a two-step motion planning process that combines fixed-path planning and trajectory planning is designed. The effectiveness of the resulting solution is verified by a simulation example.
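A conflict check of the kind built on a cell decomposition can be sketched as a set intersection over the grid cells each robot plans to traverse; the cell size and the purely spatial (timing-free) test below are simplifying assumptions, not the paper's algorithm.

def planned_cells(path, cell_size=1.0):
    # Map a sequence of (x, y) waypoints to the grid cells they visit.
    return {(int(x // cell_size), int(y // cell_size)) for x, y in path}

def in_conflict(path_a, path_b, cell_size=1.0):
    # Two neighboring robots conflict if their planned cells intersect;
    # the real strategy additionally reasons about timing and constraints.
    return bool(planned_cells(path_a, cell_size) & planned_cells(path_b, cell_size))

# Example: robot A crosses robot B's corridor, so a planning order between
# the pair must be assigned before trajectories are computed.
a = [(0.2, 0.5), (1.4, 0.5), (2.6, 0.5)]
b = [(1.5, -0.8), (1.5, 0.4), (1.5, 1.6)]
print(in_conflict(a, b))  # True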
Given two consecutive RGB-D images, we propose a model that estimates a dense 3D motion field, also known as scene flow. We take advantage of the fact that, in robot manipulation scenarios, scenes often consist of a set of rigidly moving objects. Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects, and (iii) the object scene flow. We employ an hourglass deep neural network architecture. In the encoding stage, the RGB and depth images undergo spatial compression and correlation. In the decoding stage, the model outputs three images containing a per-pixel estimate of the corresponding object center as well as object translation and rotation. This forms the basis for inferring the object segmentation and the final object scene flow. To evaluate our model, we generated a new and challenging large-scale synthetic dataset specifically targeted at robotic manipulation: it contains a large number of scenes with a very diverse set of simultaneously moving 3D objects and is recorded with a simulated, static RGB-D camera. In quantitative experiments, we show that we outperform state-of-the-art scene flow and motion-segmentation methods on this dataset. In qualitative experiments, we show how our learned model transfers to challenging real-world scenes, visually generating better results than existing methods.
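At toy scale, the hourglass idea can be sketched as a small encoder-decoder that takes two stacked RGB-D frames and emits three per-pixel maps; the layer sizes and channel counts below are arbitrary and this is not the paper's network.

import torch
import torch.nn as nn

class TinyHourglass(nn.Module):
    # The encoder compresses two stacked RGB-D frames (8 channels); the
    # decoder expands back to full resolution and emits three per-pixel
    # maps (object center, translation, rotation), 3 channels each.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 9, 4, stride=2, padding=1),
        )

    def forward(self, rgbd_pair):
        out = self.decoder(self.encoder(rgbd_pair))
        center, translation, rotation = out.split(3, dim=1)
        return center, translation, rotation

x = torch.randn(1, 8, 64, 64)             # two consecutive RGB-D frames
center, translation, rotation = TinyHourglass()(x)
print(center.shape)                       # torch.Size([1, 3, 64, 64])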
Many robotic tasks rely on the accurate localization of moving objects within a given workspace. This information about the objects' poses and velocities is used for control, motion planning, navigation, interaction with the environment, or verification. Often, motion capture systems are used to obtain such a state estimate. However, these systems are costly, limited in workspace size, and not suitable for outdoor use. Therefore, we propose a lightweight and easy-to-use visual-inertial Simultaneous Localization and Mapping approach that leverages cost-efficient, paper-printable artificial landmarks, so-called fiducials. Results show that by fusing visual and inertial data, the system provides accurate estimates and is robust against fast motions and changing lighting conditions. Tight integration of the estimation of sensor and fiducial poses as well as extrinsics ensures accuracy and map consistency and avoids the requirement for precalibration. By providing an open-source implementation and various datasets, partially with ground-truth information, we enable community members to run, test, modify, and extend the system, either using these datasets or directly running it on their own robotic setups.
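The fiducial front end of such a pipeline can be sketched with OpenCV's ArUco module as a stand-in; this shows only tag detection and single-marker pose estimation, not the paper's visual-inertial fusion, and the legacy pre-4.7 opencv-contrib API, tag id, marker size, and intrinsics are all assumptions.

import cv2
import numpy as np

# Legacy OpenCV ArUco API (opencv-contrib-python < 4.7); newer releases
# expose the same functionality through cv2.aruco.ArucoDetector.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)

# Synthesize a test frame: one tag pasted onto a white background.
tag = cv2.aruco.drawMarker(dictionary, 23, 200)
frame = np.full((400, 400), 255, dtype=np.uint8)
frame[100:300, 100:300] = tag

corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

# Assumed pinhole intrinsics; the paper's system instead estimates sensor,
# fiducial, and extrinsic parameters online to avoid precalibration.
K = np.array([[600.0, 0.0, 200.0], [0.0, 600.0, 200.0], [0.0, 0.0, 1.0]])
rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.16, K, np.zeros(5))
print(ids.ravel(), tvecs)  # tag id and its position in the camera frame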