
Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors

Published by: Michelle A. Lee
Publication date: 2020
Research field: Informatics Engineering
Language: English

Using sensor data from multiple modalities presents an opportunity to encode redundant and complementary features that can be useful when one modality is corrupted or noisy. Humans do this every day, relying on touch and proprioceptive feedback in visually challenging environments. However, robots might not always know when their sensors are corrupted, as even broken sensors can return valid values. In this work, we introduce the Crossmodal Compensation Model (CCM), which can detect corrupted sensor modalities and compensate for them. CCM is a representation model learned with self-supervision that leverages unimodal reconstruction loss for corruption detection. CCM then discards the corrupted modality and compensates for it with information from the remaining sensors. We show that CCM learns rich state representations that can be used for contact-rich manipulation policies, even when input modalities are corrupted in ways not seen during training.
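
For illustration, a minimal sketch of a detect-reject-compensate loop of this kind is shown below. It is an assumption-laden simplification, not the paper's architecture: each modality gets its own small autoencoder, a modality is flagged as corrupted when its reconstruction error exceeds a threshold calibrated on clean data, and the state representation is fused from the embeddings of the surviving modalities. The module sizes, the zeroing-out of rejected embeddings, and the threshold values are all placeholders.

# Illustrative sketch of a detect-reject-compensate loop (not the paper's architecture).
# Assumptions: per-modality autoencoders, a corruption test based on reconstruction error,
# and a fusion MLP over the surviving modality embeddings.
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class CrossmodalCompensator(nn.Module):
    """Fuses the embeddings of the modalities that pass the corruption test."""
    def __init__(self, modality_dims, latent_dim=32, state_dim=64):
        super().__init__()
        self.autoencoders = nn.ModuleDict(
            {name: ModalityAutoencoder(d, latent_dim) for name, d in modality_dims.items()})
        self.fusion = nn.Sequential(
            nn.Linear(latent_dim * len(modality_dims), 128), nn.ReLU(), nn.Linear(128, state_dim))

    def forward(self, inputs, thresholds):
        parts = []
        for name, x in inputs.items():
            z, recon = self.autoencoders[name](x)
            error = torch.mean((recon - x) ** 2, dim=-1)        # per-sample reconstruction error
            corrupted = error > thresholds[name]                 # detect: high error => reject modality
            z = torch.where(corrupted.unsqueeze(-1), torch.zeros_like(z), z)  # discard corrupted embedding
            parts.append(z)
        return self.fusion(torch.cat(parts, dim=-1))             # compensate with the remaining modalities

# Usage; thresholds would be calibrated on clean validation data (e.g. a high percentile of the errors).
model = CrossmodalCompensator({"vision": 128, "haptics": 16, "proprio": 8})
batch = {"vision": torch.randn(4, 128), "haptics": torch.randn(4, 16), "proprio": torch.randn(4, 8)}
state = model(batch, thresholds={"vision": 0.5, "haptics": 0.5, "proprio": 0.5})
print(state.shape)  # torch.Size([4, 64])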


Read also

In essence, a successful grasp boils down to correct responses to multiple contact events between the fingertips and the object. In most scenarios, tactile sensing is adequate to distinguish contact events. Due to the high dimensionality of tactile information, classifying spatiotemporal tactile signals with conventional model-based methods is difficult. In this work, we propose to predict and classify tactile signals using deep learning methods, seeking to enhance the adaptability of the robotic grasp system to external event changes that may lead to grasping failure. We develop a deep learning framework, collect 6650 tactile image sequences with a vision-based tactile sensor, and integrate the neural network into a contact-event-based robotic grasping system. In grasping experiments, we achieved a 52% increase in object-lifting success rate with contact detection and significantly higher robustness under unexpected loads with slip prediction, compared with open-loop grasps, demonstrating that integrating the proposed framework into a robotic grasping system substantially improves the picking success rate and the capability to withstand external disturbances.
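
As a rough illustration of the kind of spatiotemporal classifier such a framework needs, a per-frame convolutional encoder followed by an LSTM is one common choice for tactile image sequences. The paper's exact network, input resolution, and contact-event classes are not given here, so the shapes and labels below are assumptions.

# Illustrative sketch only: a per-frame CNN whose features are pooled over time by an LSTM.
import torch
import torch.nn as nn

class TactileEventClassifier(nn.Module):
    def __init__(self, num_classes=3):               # e.g. no-contact / stable-contact / slip (assumed labels)
        super().__init__()
        self.frame_encoder = nn.Sequential(           # encodes one tactile image into a feature vector
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.temporal = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, seq):                           # seq: (batch, time, 1, H, W)
        b, t = seq.shape[:2]
        feats = self.frame_encoder(seq.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.temporal(feats)
        return self.head(h[-1])                       # class logits from the final hidden state

clf = TactileEventClassifier()
logits = clf(torch.randn(2, 8, 1, 64, 64))            # two sequences of eight tactile frames
print(logits.shape)                                    # torch.Size([2, 3])
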
The lack of extensive research into the application of inexpensive wireless sensor nodes for the early detection of wildfires motivated us to investigate the cost of such a network. As a first step, in this paper we present several results that relate the time to detection and the burned area to the number of sensor nodes in the protected region. We prove that the probability distribution of the burned area at the moment of detection is approximately exponential, provided that two hypotheses hold: the positions of the sensor nodes are independent, uniformly distributed random variables, and the number of sensor nodes is large. This conclusion depends neither on the number of ignition points nor on the propagation model of the fire.
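
The exponential limit has a short intuition: with n sensors placed uniformly at random in a region of unit area, the fire remains undetected only while no sensor lies inside the burned region, so P(area at detection > a) = (1 - a)^n ≈ exp(-n a) for large n, regardless of the burned region's shape. The Monte Carlo sketch below illustrates this under simplified assumptions that are not the paper's (a disc-shaped fire around a single ignition point in a unit square, boundary effects ignored).

# Monte Carlo illustration (not from the paper) of the approximately exponential burned area.
import math
import random

def burned_area_at_detection(n_sensors):
    ignition = (random.random(), random.random())
    sensors = [(random.random(), random.random()) for _ in range(n_sensors)]
    r = min(math.hypot(x - ignition[0], y - ignition[1]) for x, y in sensors)
    return math.pi * r * r                    # disc area when the nearest sensor is first reached

n, trials = 200, 20000
samples = [burned_area_at_detection(n) for _ in range(trials)]
mean_area = sum(samples) / trials
print(f"empirical mean burned area: {mean_area:.4f}  (exponential prediction ~ 1/n = {1 / n:.4f})")
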
This paper presents an active stabilization method for a fully actuated lower-limb exoskeleton. The method was tested on the exoskeleton ATALANTE, which was designed and built by the French start-up company Wandercraft. The main objective of this paper is to present a practical method for realizing more robust walking on hardware through active ankle compensation. The nominal gait was generated through the hybrid zero dynamics framework. The ankles are individually controlled to satisfy three main directives: (1) keeping the non-stance foot parallel to the ground, (2) maintaining rigid contact between the stance foot and the ground, and (3) closing the loop on pelvis orientation to achieve better tracking. Each individual component of this method was demonstrated separately to show its contribution to stability. The results showed that the ankle controller was able to experimentally maintain static balance in the sagittal plane while the exoskeleton was balanced on one leg, even when disturbed. The entire ankle controller was then also demonstrated on crutch-less dynamic walking. During testing, an anatomically correct manikin was placed in the exoskeleton in lieu of a paraplegic patient. The pitch of the pelvis of the exoskeleton-manikin system was shown to track the gait trajectory better when ankle compensation was used. Overall, active ankle compensation was demonstrated experimentally to improve balance in the sagittal plane of the exoskeleton-manikin system and points to an improved practical approach for stable walking.
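
A heavily simplified sketch of the three directives is given below, written as additive corrections to nominal ankle pitch commands. The gains, signs, saturation limits, and state names are all illustrative assumptions and not the controller deployed on ATALANTE.

# Illustrative sketch only: simplified ankle compensation in the sagittal plane.
def ankle_compensation(nominal, state, kp_pelvis=0.8, kd_pelvis=0.05):
    """nominal: nominal ankle pitch commands (rad); state: estimated quantities (rad, rad/s)."""
    cmd = dict(nominal)

    # (1) Swing foot parallel to ground: cancel the estimated swing-foot pitch so the sole stays level.
    cmd["swing_ankle_pitch"] = nominal["swing_ankle_pitch"] - state["swing_foot_pitch"]

    # (3) Close the loop on pelvis orientation through the stance ankle (PD on pelvis pitch error).
    pelvis_err = state["pelvis_pitch_des"] - state["pelvis_pitch"]
    correction = kp_pelvis * pelvis_err - kd_pelvis * state["pelvis_pitch_rate"]

    # (2) Rigid stance-foot contact: saturate the correction (assumed limit) so the resulting ankle
    # torque keeps the center of pressure inside the stance foot.
    correction = max(-0.1, min(0.1, correction))
    cmd["stance_ankle_pitch"] = nominal["stance_ankle_pitch"] + correction
    return cmd

# Usage with made-up values:
cmd = ankle_compensation(
    {"stance_ankle_pitch": 0.02, "swing_ankle_pitch": -0.05},
    {"swing_foot_pitch": 0.03, "pelvis_pitch": 0.01, "pelvis_pitch_des": 0.0, "pelvis_pitch_rate": 0.1})
print(cmd)
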
Human infants are able to acquire natural language seemingly easily at an early age. Their language learning seems to occur simultaneously with learning other cognitive functions as well as with playful interactions with the environment and caregivers. From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration. However, characterising the underlying mechanisms in the brain is difficult, and explaining the grounding of language in crossmodal perception and action remains challenging. In this paper, we present a neurocognitive model for language grounding which reflects bio-inspired mechanisms such as an implicit adaptation of timescales as well as end-to-end multimodal abstraction. It addresses developmental robotic interaction and extends its learning capabilities using larger-scale knowledge-based data. In our scenario, we utilise the humanoid robot NICO in obtaining the EMIL data collection, in which the cognitive robot interacts with objects in a children's playground environment while receiving linguistic labels from a caregiver. The model analysis shows that crossmodally integrated representations are sufficient for acquiring language merely from sensory input through interaction with objects in an environment. The representations self-organise hierarchically and embed temporal and spatial information through composition and decomposition. This model can also provide the basis for further crossmodal integration of perceptually grounded cognitive representations.
Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then randomly choose a friend of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method works better than expected from network structure alone, because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored.
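
The selection rule itself can be stated in a few lines. The sketch below is an illustrative rendering of it on a toy adjacency list (the paper applies the same idea to the Twitter follower graph); sampling a friend of each randomly chosen node biases the monitoring group toward high-degree, more central users, which is the friendship-paradox effect the method exploits.

# Minimal sketch (not the paper's code) of the friend-group selection rule described above.
import random

def friend_monitoring_group(adjacency, fraction=0.05, seed=0):
    """adjacency: dict mapping each node to a list of its neighbours."""
    rng = random.Random(seed)
    nodes = [n for n, friends in adjacency.items() if friends]
    sample = rng.sample(nodes, max(1, int(fraction * len(nodes))))
    return {rng.choice(adjacency[n]) for n in sample}   # deduplicated monitoring group

# Toy usage on a small graph.
graph = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d", "e"],
    "d": ["b", "c"], "e": ["c"],
}
print(friend_monitoring_group(graph, fraction=0.4))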
