
Quantifying Emergent Behavior of Autonomous Robots

Added by Georg Martius
Publication date: 2015
Language: English





Quantifying robot behaviors that are generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and of animal movements. The temporal sequence of such a behavior can be regarded as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states one has to deal with the fact that their values depend on the resolution with which the system's states are observed. For deterministic systems both measures diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts, and discuss how these parts depend on the dimensionality of the dynamics, on correlations, and on the noise level. For the practical estimation we propose estimates based on the correlation integral, rather than direct estimation of the mutual information with the nearest-neighbor algorithm of Kraskov et al. (2004), because the latter offers less control over the scale dependencies. Using our algorithm we show how autonomous learning generates behavior of increasing complexity as the learning duration increases.
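
To make the resolution dependence concrete, here is a minimal sketch (Python with NumPy, not the correlation-integral estimator proposed in the paper) that discretizes a scalar time series at a chosen resolution and computes a plug-in estimate of the one-step predictive information I(past; future). The logistic-map series, the bin counts, and the function name `predictive_information` are illustrative choices. For this deterministic system the estimate keeps growing as the binning gets finer, which is the divergence discussed above.

```python
import numpy as np

def predictive_information(x, n_bins):
    """Plug-in estimate of I(x_t; x_{t+1}) after binning x into n_bins cells."""
    # Discretize the continuous states at the chosen resolution.
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    s = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    past, future = s[:-1], s[1:]
    # Joint and marginal distributions of (past, future) symbol pairs.
    joint = np.zeros((n_bins, n_bins))
    np.add.at(joint, (past, future), 1.0)
    joint /= joint.sum()
    p_past = joint.sum(axis=1)
    p_future = joint.sum(axis=0)
    # Mutual information in bits, skipping empty cells.
    nz = joint > 0
    return np.sum(joint[nz] *
                  np.log2(joint[nz] / (p_past[:, None] * p_future[None, :])[nz]))

# Deterministic example series: the logistic map in its chaotic regime.
x = np.empty(100_000)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

for n_bins in (4, 16, 64, 256):
    print(n_bins, predictive_information(x, n_bins))
```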




Read More

Optimal and Learning Control for Autonomous Robots has been taught in the Robotics, Systems and Controls Masters program at ETH Zurich with the aim of teaching optimal control and reinforcement learning for closed-loop control problems from a unified point of view. The starting point is the formulation of an optimal control problem, from which the different types of solutions and algorithms are derived. These lecture notes aim to support this unified view with a unified notation wherever possible, and offer some translation help for comparing the terminology and notation of the different fields. The course assumes basic knowledge of Control Theory, Linear Algebra and Stochastic Calculus.
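
As an illustration of the kind of problem such a course starts from, here is a minimal sketch, not taken from the lecture notes themselves, of a discrete-time finite-horizon linear-quadratic optimal control problem solved with the backward Riccati recursion; the double-integrator dynamics, cost weights, and horizon are illustrative placeholders.

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Backward pass: cost-to-go matrices P_t and feedback gains K_t with u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered from t = 0 to t = horizon - 1

# Double-integrator example: state (position, velocity), scalar acceleration input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

gains = lqr_gains(A, B, Q, R, horizon=50)
x = np.array([1.0, 0.0])          # initial state
for K in gains:                   # forward rollout with the optimal feedback law
    x = A @ x + (B @ (-K @ x)).ravel()
print("final state:", x)
```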
This paper presents a holistic approach to saliency-guided visual attention modeling (SVAM) for use by autonomous underwater robots. Our proposed model, named SVAM-Net, integrates deep visual features at various scales and semantics for effective salient object detection (SOD) in natural underwater images. The SVAM-Net architecture is configured in a unique way to jointly accommodate bottom-up and top-down learning within two separate branches of the network while sharing the same encoding layers. We design dedicated spatial attention modules (SAMs) along these learning pathways to exploit coarse-level and fine-level semantic features for SOD at four stages of abstraction. The bottom-up branch performs a rough yet reasonably accurate saliency estimation at a fast rate, whereas the deeper top-down branch incorporates a residual refinement module (RRM) that provides fine-grained localization of the salient objects. Extensive performance evaluation of SVAM-Net on benchmark datasets clearly demonstrates its effectiveness for underwater SOD. We also validate its generalization performance on data from several ocean trials, which include test images of diverse underwater scenes and waterbodies as well as images with unseen natural objects. Moreover, we analyze its computational feasibility for robotic deployments and demonstrate its utility in several important use cases of visual attention modeling.
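
As a rough illustration of the spatial-attention idea mentioned above, here is a minimal, generic spatial attention module in PyTorch; it is not the SVAM-Net definition, and the pooling-plus-convolution design and kernel size are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Computes a 1-channel spatial gate from pooled channel statistics and
    multiplies it onto the incoming feature map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feats):
        avg_pool = feats.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_pool = feats.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return feats * gate                          # re-weighted features

feats = torch.randn(2, 64, 32, 32)      # a dummy encoder feature map
print(SpatialAttention()(feats).shape)  # torch.Size([2, 64, 32, 32])
```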
Camera anomalies like rain or dust can severely degrade image quality and related tasks such as localization and segmentation. In this work we address this important issue by implementing a pre-processing step that can effectively mitigate such artifacts in real time, thus supporting the deployment of autonomous systems with limited compute capabilities. We propose a shallow generator with aggregation, trained in an adversarial setting to solve the ill-posed problem of reconstructing the occluded regions. We add an enhancer to further preserve high-frequency details and image colorization. We also produce one of the largest publicly available datasets to train our architecture and use realistic synthetic raindrops to obtain an improved initialization of the model. We benchmark our framework on existing datasets and on our own images, obtaining state-of-the-art results while enabling real-time performance, with up to 40x faster inference time than existing approaches.
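
For orientation, the sketch below shows the general shape of such an adversarial reconstruction objective: a generator restores the occluded regions and a discriminator scores realism, with an L1 term to preserve detail. The `generator` and `discriminator` modules, the loss weighting, and the function name are placeholders, not the paper's implementation; optimizer steps are omitted.

```python
import torch
import torch.nn.functional as F

def training_losses(generator, discriminator, occluded, clean, lambda_l1=100.0):
    """One step of a generic adversarial reconstruction objective (illustrative only)."""
    fake = generator(occluded)

    # Discriminator: push real images towards label 1 and generated images towards 0.
    real_logits = discriminator(clean)
    fake_logits = discriminator(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) +
              F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    # Generator: fool the discriminator while staying close to the clean target.
    adv_logits = discriminator(fake)
    g_loss = (F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits)) +
              lambda_l1 * F.l1_loss(fake, clean))
    return d_loss, g_loss
```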
Among the fundamental problems in the context of distributed computing by autonomous mobile entities, one of the most representative and well studied is Point Convergence: given an arbitrary initial configuration of identical entities, disposed in the Euclidean plane, move in such a way that, for all $\epsilon > 0$, a configuration in which the separation between all entities is at most $\epsilon$ is eventually reached and maintained. The problem has been previously studied in a variety of settings, including full visibility, exact measurements (of distances and angles, for example), and synchronous activation of entities. Our study concerns the minimal assumptions under which entities, moving asynchronously with limited and unknown visibility range and subject to limited imprecision in measurements, can be guaranteed to converge in this way. We present an algorithm that solves Point Convergence, for entities in the plane, in such a setting, provided the degree of asynchrony is bounded: while any one entity is active, any other entity can be activated at most $k$ times, for some arbitrarily large but fixed $k$. This provides a strong positive answer to a decade-old open question posed by Katreniak. We also prove that in a comparable setting that permits unbounded asynchrony, Point Convergence in the plane is impossible, contingent on the natural assumption that algorithms maintain the (visible) connectivity among entities present in the initial configuration. This variant, which we call Cohesive Convergence, serves to distinguish the power of bounded and unbounded asynchrony in the control of autonomous mobile entities, settling at the same time the long-standing question of whether, in the Euclidean plane, synchronously scheduled entities are more powerful than asynchronously scheduled ones.
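
As a toy illustration of the Point Convergence objective only, under far stronger assumptions than the paper's (fully synchronous rounds, exact measurements, and a naive "move to the centroid of visible neighbours" rule that does not guarantee connectivity is preserved), one could simulate:

```python
import numpy as np

def synchronous_round(points, visibility):
    """Each entity moves to the centroid of all entities within its visibility range."""
    new_points = points.copy()
    for i, p in enumerate(points):
        visible = points[np.linalg.norm(points - p, axis=1) <= visibility]
        new_points[i] = visible.mean(axis=0)
    return new_points

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(20, 2))   # 20 entities in the unit square
for step in range(200):
    pts = synchronous_round(pts, visibility=0.4)
print("max pairwise separation:",
      np.max(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)))
```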
We present a new and complex traffic dataset, METEOR, which captures traffic patterns in unstructured scenarios in India. METEOR consists of more than 1000 one-minute video clips, over 2 million annotated frames with ego-vehicle trajectories, and more than 13 million bounding boxes for surrounding vehicles or traffic agents. METEOR is a unique dataset in terms of capturing the heterogeneity of microscopic and macroscopic traffic characteristics. Furthermore, we provide annotations for rare and interesting driving behaviors such as cut-ins, yielding, overtaking, overspeeding, zigzagging, sudden lane changing, running traffic signals, driving in the wrong lanes, taking wrong turns, lack of right-of-way rules at intersections, etc. We also present diverse traffic scenarios corresponding to rainy weather, nighttime driving, driving in rural areas with unmarked roads, and high-density traffic scenarios. We use our novel dataset to evaluate the performance of object detection and behavior prediction algorithms. We show that state-of-the-art object detectors fail in these challenging conditions and also propose a new benchmark test: action-behavior prediction with a baseline mAP score of 70.74.
