
Immersive Operation of a Semi-Autonomous Aerial Platform for Detecting and Mapping Radiation

Publication date: 2021
Language: English





Recent advancements in radiation detection and computer vision have enabled small unmanned aerial systems (sUASs) to produce 3D nuclear radiation maps in real time. Currently, these state-of-the-art systems still require two operators: one to pilot the sUAS and another to monitor the detected radiation. In this work, we present a system that integrates real-time 3D radiation visualization with semi-autonomous sUAS control. Our Virtual Reality interface enables a single operator to define trajectories using waypoints, abstracting away complex flight control and exploiting the semi-autonomous maneuvering capabilities of the sUAS. The interface also displays a fused radiation visualization and environment map, thereby enabling simultaneous remote operation and radiation monitoring by a single operator. This serves as the basis for the development of a single system that deploys and autonomously controls fleets of sUASs.
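To make the waypoint abstraction concrete, here is a minimal Python sketch of how a VR interface might hand a waypoint list to an autopilot. The `Waypoint` and `FlightController` names are illustrative placeholders under assumed semantics, not the authors' actual interface.

```python
# Minimal sketch of waypoint-based semi-autonomous control; the flight
# API below is a hypothetical stand-in for the real autopilot interface.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float  # metres, local frame
    y: float
    z: float  # altitude above the takeoff point

class FlightController:
    """Placeholder for the sUAS's onboard position controller."""
    def goto(self, wp: Waypoint) -> None:
        print(f"flying to ({wp.x:.1f}, {wp.y:.1f}, {wp.z:.1f})")

def fly_trajectory(fc: FlightController, waypoints: list[Waypoint]) -> None:
    # The VR interface only supplies waypoints; low-level stabilization
    # and path following are delegated to the autopilot.
    for wp in waypoints:
        fc.goto(wp)

if __name__ == "__main__":
    survey = [Waypoint(0, 0, 5), Waypoint(10, 0, 5), Waypoint(10, 10, 5)]
    fly_trajectory(FlightController(), survey)
```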



Related research

We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle-avoidance tasks in three different environments, along with Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses policy performance under various quality-of-flight (QoF) metrics, such as energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight-time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes these differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: http://bit.ly/2JNAVb6.
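As a rough illustration of the latency-injection idea described above, the following Python sketch samples a measured latency and delays each action before stepping the environment. The environment class and latency values are stand-ins, not Air Learning's actual API.

```python
import random
import time

class DummyEnv:
    """Stand-in environment so the sketch runs; Air Learning uses a gym env."""
    def step(self, action):
        return None, 0.0, False, {}  # observation, reward, done, info

# Latency distribution measured by running the policy on the target
# platform (e.g. a Raspberry Pi); these values are made-up examples.
measured_latencies_s = [0.08, 0.11, 0.09, 0.15, 0.12]

def step_with_artificial_delay(env, action):
    # Hold the action for a sampled on-board inference latency so training
    # sees embedded-platform timing instead of desktop timing.
    time.sleep(random.choice(measured_latencies_s))
    return env.step(action)

obs, reward, done, info = step_with_artificial_delay(DummyEnv(), action=0)
```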
We present PufferBot, an aerial robot with an expandable structure that can expand to protect the drone's propellers when the robot is close to obstacles or collocated humans. PufferBot is made of a custom 3D-printed expandable scissor structure, driven by a one-degree-of-freedom actuator with a rack-and-pinion mechanism. We propose four designs for the expandable structure, each with unique characteristics for different situations. Finally, we present three motivating scenarios in which PufferBot may extend the utility of existing static propeller-guard structures. The supplementary video can be found at: https://youtu.be/XtPepCxWcBg
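A toy controller in the spirit of PufferBot's proximity-triggered expansion might look like the sketch below; the class name, distance threshold, and actuation behavior are assumptions for illustration only.

```python
# Illustrative proximity-triggered guard controller; not PufferBot's code.
class ExpandableGuard:
    def __init__(self, expand_threshold_m: float = 1.0):
        self.expand_threshold_m = expand_threshold_m  # assumed safety radius
        self.expanded = False

    def update(self, nearest_obstacle_m: float) -> None:
        # Drive the single-DOF rack-and-pinion actuator open when an
        # obstacle or person comes within the safety threshold.
        should_expand = nearest_obstacle_m < self.expand_threshold_m
        if should_expand != self.expanded:
            self.expanded = should_expand
            print("expanding guard" if should_expand else "retracting guard")

guard = ExpandableGuard()
for dist in [3.0, 2.0, 0.8, 0.5, 2.5]:  # metres from a range sensor
    guard.update(dist)
```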
Yuhui Su, Zhe Liu, Chunyang Chen (2021)
Graphical user interfaces (GUIs) provide visual bridges between software apps and end users. However, due to software or hardware compatibility issues, UI display problems such as text overlap, blurred screens, and missing images often occur when a GUI is rendered on different devices. Because these UI display issues can be spotted directly by the human eye, in this paper we implement an online UI display issue detection tool, OwlEyes-Online, which provides a simple and easy-to-use platform for the automatic detection and localization of UI display issues. OwlEyes-Online can automatically run an app, capture its screenshots and XML files, and then detect issues by analyzing the screenshots. In addition, OwlEyes-Online can locate the specific area of an issue within a given screenshot to further guide developers. Finally, OwlEyes-Online automatically generates test reports listing the UI display issues detected in app screenshots and sends them to users. OwlEyes-Online was evaluated and shown to accurately detect UI display issues. Tool: http://www.owleyes.online:7476 GitHub: https://github.com/franklinbill/owleyes Demo video: https://youtu.be/002nHZBxtCY
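The following Python sketch outlines the kind of screenshot-in, report-out pipeline the abstract describes; the function names and the hard-coded detection result are placeholders, not OwlEyes-Online's real implementation.

```python
# High-level sketch of a screenshot-based UI-issue pipeline; the detector
# here is a stub, whereas the real tool analyzes screenshots with a model.
from dataclasses import dataclass

@dataclass
class IssueRegion:
    kind: str    # e.g. "text overlap", "missing image"
    bbox: tuple  # (x, y, w, h) in screenshot pixels

def detect_issues(screenshot_path: str) -> list[IssueRegion]:
    # Placeholder: return a hard-coded example so the pipeline runs.
    return [IssueRegion("text overlap", (120, 300, 200, 40))]

def make_report(screenshot_path: str) -> str:
    issues = detect_issues(screenshot_path)
    lines = [f"{i.kind} at {i.bbox}" for i in issues]
    return f"{screenshot_path}: " + ("; ".join(lines) or "no issues")

print(make_report("home_screen.png"))
```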
This letter presents a fully autonomous robot system that possesses both terrestrial and aerial mobility. We first develop a lightweight terrestrial-aerial quadrotor that carries sufficient sensing and computing resources, combining the high mobility of unmanned aerial vehicles with the long endurance of unmanned ground vehicles. We then propose an adaptive navigation framework that endows the platform with complete autonomy. Within this framework, a hierarchical motion planner generates safe and low-power terrestrial-aerial trajectories in unknown environments, and a unified motion controller dynamically adjusts energy consumption during terrestrial locomotion. Extensive real-world experiments and benchmark comparisons validate the robustness and outstanding performance of the proposed system. During the tests, it safely traverses complex environments with integrated terrestrial-aerial mobility and achieves a 7-fold energy saving in terrestrial locomotion. Finally, we will release our code and hardware configuration as an open-source package.
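To illustrate the kind of energy trade-off such a planner weighs, here is a toy mode-selection model; the power figures, speeds, and function signature are invented for the example and are not taken from the paper.

```python
# Toy cost model for terrestrial-vs-aerial mode selection; all constants
# are illustrative assumptions, not measured values from the letter.
GROUND_POWER_W = 40.0   # assumed rolling power draw
AIR_POWER_W = 280.0     # assumed flight power draw (roughly 7x ground)

def choose_mode(segment_length_m: float, ground_speed: float,
                air_speed: float, ground_passable: bool) -> str:
    if not ground_passable:
        return "aerial"  # obstacles force flight regardless of cost
    # Energy = power * time = power * distance / speed, per mode.
    ground_energy = GROUND_POWER_W * segment_length_m / ground_speed
    air_energy = AIR_POWER_W * segment_length_m / air_speed
    return "terrestrial" if ground_energy <= air_energy else "aerial"

print(choose_mode(20.0, ground_speed=1.0, air_speed=3.0, ground_passable=True))
```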
Autonomous mobile cyber-physical machines are part of our future. In particular, unmanned aerial vehicles have seen a resurgence of activity with use cases such as package delivery. These systems face many challenges, including low endurance caused by limited onboard energy, so improving mission time and energy consumption is important. Such improvements are traditionally delivered through better algorithms, but our premise is that more powerful and efficient onboard compute should also address the problem. This paper investigates how the compute subsystem in a cyber-physical mobile machine, such as a micro aerial vehicle, impacts mission time and energy. Specifically, we pose the question: what is the role of computing for cyber-physical mobile robots? We show that compute and motion are tightly intertwined, hence a close examination of cyber and physical processes and their impact on one another is necessary. We identify the different paths through which compute affects mission metrics and examine them using analytical models, simulation, and end-to-end benchmarking. To enable similar studies, we open-sourced MAVBench, our toolset consisting of a closed-loop simulator and a benchmark suite. Our investigations show that cyber-physical co-design, a methodology in which a robot's cyber and physical processes and quantities are developed in consideration of one another, similar to hardware-software co-design, is necessary for optimal robot design.
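One concrete way compute couples to mission metrics is through the maximum safe velocity: a robot must be able to react and brake within its sensing range, so longer decision latency forces slower flight and longer missions. The sketch below works through that bound; the constants and closed-form solution are illustrative assumptions, not MAVBench's exact model.

```python
# Back-of-the-envelope bound on safe velocity given decision latency.
# Constraint: sensing_range >= v * latency + v^2 / (2 * decel).
def max_safe_velocity(sensing_range_m: float, decision_latency_s: float,
                      max_decel_mps2: float) -> float:
    # Solve the quadratic a*v^2 + b*v + c = 0 for the positive root.
    a = 1.0 / (2.0 * max_decel_mps2)
    b = decision_latency_s
    c = -sensing_range_m
    return (-b + (b * b - 4.0 * a * c) ** 0.5) / (2.0 * a)

for latency in (0.05, 0.2, 0.5):  # faster compute -> higher safe speed
    v = max_safe_velocity(sensing_range_m=10.0, decision_latency_s=latency,
                          max_decel_mps2=5.0)
    print(f"latency {latency:.2f}s -> v_max {v:.2f} m/s")
```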