
Singularity-free Aerial Deformation by Two-dimensional Multilinked Aerial Robot with 1-DoF Vectorable Propeller

Added by Moju Zhao
Publication date: 2021
Language: English





Two-dimensional multilinked structures can benefit aerial robots in both maneuvering and manipulation because of their deformation ability. However, certain types of singular forms must be avoided during deformation. Hence, an additional 1 Degree-of-Freedom (DoF) vectorable propeller is employed in this work to overcome singular forms by properly changing the thrust direction. In this paper, we first extend modeling and control methods from our previous works to an under-actuated model whose thrust forces are not unidirectional. We then propose a planning method for the vectoring angles that resolves the singularity by maximizing the controllability under arbitrary robot forms. Finally, we demonstrate the feasibility of the proposed methods in experiments where a quad-type model performs trajectory tracking under challenging forms, such as a line-shape form, as well as deformation that passes through these forms.
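As a rough illustration of the planning idea (not the paper's implementation), the sketch below picks 1-DoF vectoring angles that maximize a simple controllability proxy, the smallest singular value of the thrust-allocation matrix, for a line-shape quad-type form. The rotor positions, link axes, thrust model, and the coarse random-search optimizer are all illustrative assumptions.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: rotation about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def allocation_matrix(rotor_positions, link_axes, phis):
    """6 x n matrix stacking [thrust direction; torque arm] for each rotor."""
    cols = []
    for p, a, phi in zip(rotor_positions, link_axes, phis):
        u = rotation_about_axis(a, phi) @ np.array([0.0, 0.0, 1.0])  # tilted thrust
        cols.append(np.hstack([u, np.cross(p, u)]))
    return np.array(cols).T

def controllability_margin(phis, rotor_positions, link_axes):
    """Smallest singular value of the allocation matrix (zero at a singular form)."""
    Q = allocation_matrix(rotor_positions, link_axes, phis)
    return np.linalg.svd(Q, compute_uv=False).min()

# Line-shape (singular) form of a quad-type model: all rotors on one axis.
rotor_positions = [np.array([x, 0.0, 0.0]) for x in (-1.5, -0.5, 0.5, 1.5)]
link_axes = [np.array([1.0, 0.0, 0.0])] * 4

# Coarse random search over the vectoring angles (illustrative only).
rng = np.random.default_rng(0)
best_phi = np.zeros(4)
best_margin = controllability_margin(best_phi, rotor_positions, link_axes)
for _ in range(2000):
    phi = rng.uniform(-np.pi / 2, np.pi / 2, size=4)
    margin = controllability_margin(phi, rotor_positions, link_axes)
    if margin > best_margin:
        best_phi, best_margin = phi, margin

print("vectoring angles [rad]:", np.round(best_phi, 2), " margin:", round(best_margin, 3))
```

With all vectoring angles at zero the allocation matrix of this collinear form is rank-deficient (margin zero), while tilting the thrusts recovers a non-zero margin, which is the effect the planning method exploits.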

Related research

The multilinked aerial robot is a state-of-the-art platform in aerial robotics whose deformability benefits both maneuvering and manipulation. However, its performance in the outdoor physical world has not yet been evaluated because of weak controllability and the lack of state estimation for autonomous flight. Thus, we adopt tilting propellers to enhance controllability. The related design, modeling, and control methods are developed in this work to enable stable hovering and deformation. Furthermore, state estimation that involves time synchronization between sensors and the multilinked kinematics is also presented to enable fully autonomous flight in outdoor environments. Various autonomous outdoor experiments, including fast maneuvering for target interception, object grasping for delivery, and blanket manipulation for firefighting, are performed to evaluate the feasibility and versatility of the proposed robot platform. To the best of our knowledge, this is the first study in which a multilinked aerial robot achieves fully autonomous flight and manipulation tasks in an outdoor environment. We also applied our platform to all challenges of the 2020 Mohammed Bin Zayed International Robotics Competition, ranking third in Challenge 1 and sixth in Challenge 3 internationally, demonstrating reliable flight performance in the field.
This paper describes the process and challenges behind the design and development of a micro-gravity-enabling aerial robot. The vehicle, intended to provide at least 4 seconds of micro-gravity at an accuracy of 0.001 g, is designed with suggestions and constraints from academia and industry as well as a regulatory agency. The feasibility of the flight mission is validated in a simulation environment, where models obtained from system identification of existing hardware are implemented to increase the fidelity of the simulation. The current development of a physical test bed is described. The vehicle employs both control and autonomy logic, which is developed in the Simulink environment and executed on a Pixhawk flight control board.
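For a sense of scale, a back-of-envelope ballistic calculation (not taken from the paper, which relies on system identification and simulation instead) shows what a 4-second micro-gravity window implies kinematically for the vehicle:

```python
# Back-of-envelope kinematics of a ballistic parabola giving a 4 s micro-gravity
# window (illustrative only; drag cancellation and recovery margins are ignored).
G = 9.81   # gravitational acceleration, m/s^2
T = 4.0    # required micro-gravity duration, s

v_entry = G * T / 2.0                    # upward speed entering the parabola, ~19.6 m/s
apex_height = G * (T / 2.0) ** 2 / 2.0   # climb above the entry point at apex, ~19.6 m
delta_v = G * T                          # velocity swing the vehicle must absorb, ~39.2 m/s

print(f"entry speed: {v_entry:.1f} m/s, apex height: {apex_height:.1f} m, "
      f"velocity swing: {delta_v:.1f} m/s")
```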
Aerial filming is constantly gaining importance due to recent advances in drone technology. It invites many intriguing, unsolved problems at the intersection of aesthetic and scientific challenges. In this work, we propose a deep reinforcement learning agent that supervises motion planning of a filming drone by making desirable shot-mode selections based on the aesthetic value of video shots. Unlike most current state-of-the-art approaches, which require explicit guidance by a human expert, our drone learns how to make favorable viewpoint selections by experience. We propose a learning scheme that exploits aesthetic features of retrospective shots in order to extract a desirable policy for better prospective shots. We train our agent in realistic AirSim simulations using both a hand-crafted reward function and reward from direct human input. We then deploy the same agent on a real DJI M210 drone in order to test the generalization capability of our approach to real-world conditions. Finally, to evaluate the success of our approach, we conduct a comprehensive user study in which participants rate the shot quality of our methods. Videos of the system in action can be seen at https://youtu.be/qmVw6mfyEmw.
Aerial cinematography is significantly expanding the capabilities of film-makers. Recent progress in autonomous unmanned aerial vehicles (UAVs) has further increased the potential impact of aerial cameras, with systems that can safely track actors in unstructured cluttered environments. Professional productions, however, require the use of multiple cameras simultaneously to record different viewpoints of the same scene, which are edited into the final footage either in real time or in post-production. Such extreme motion coordination is particularly hard for unscripted action scenes, which are a common use case of aerial cameras. In this work we develop a real-time multi-UAV coordination system that is capable of recording dynamic targets while maximizing shot diversity and avoiding collisions and mutual visibility between cameras. We validate our approach in multiple cluttered environments of a photo-realistic simulator, and deploy the system using two UAVs in real-world experiments. We show that our coordination scheme has low computational cost and takes only 1.17 ms on average to plan for a team of 3 UAVs over a 10 s time horizon. Supplementary video: https://youtu.be/m2R3anv2ADE
Today, physical Human-Robot Interaction (pHRI) is a very popular topic in the field of ground manipulation. At the same time, Aerial Physical Interaction (APhI) is also developing rapidly. Nevertheless, pHRI with aerial vehicles has not been addressed so far. In this work, we present the study of one of the first systems in which a human is physically connected to an aerial vehicle by a cable. We want the robot to be able to pull the human toward a desired position (or along a path) using only forces as an indirect communication channel. We propose an admittance-based approach that makes pHRI safe. A controller, inspired by the literature on flexible manipulators, computes the desired interaction forces that properly guide the human. The stability of the system is formally proved with a Lyapunov-based argument. The system is also shown to be passive, and thus robust to non-idealities such as additional human forces, time-varying inputs, and other external disturbances. We also design a maneuver-regulation policy to simplify the path-following problem. The overall method has been experimentally validated on a group of four subjects, showing reliable and safe pHRI.
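A minimal, generic admittance-filter sketch of the kind described (illustrative only; the paper's controller, gain values, passivity analysis, and maneuver-regulation layer are more involved): the measured cable force deflects a virtual mass-spring-damper whose state is used as the position reference sent to the flight controller.

```python
import numpy as np

M, D, K = 2.0, 8.0, 5.0   # virtual inertia, damping, stiffness (assumed values)
dt = 0.01                  # control period, s

x_ref = np.zeros(2)        # planar reference-position offset, m
v_ref = np.zeros(2)        # reference velocity, m/s
f_ext = np.array([3.0, 0.0])   # measured cable force from the human, N

# Semi-implicit Euler integration of the admittance dynamics M*a + D*v + K*x = f_ext.
for _ in range(1000):      # 10 s of simulated constant pulling
    a = (f_ext - D * v_ref - K * x_ref) / M
    v_ref += a * dt
    x_ref += v_ref * dt

print(x_ref)   # settles near f_ext / K = [0.6, 0.0] m
```

The stiffness and damping terms bound how far and how fast a given human force can displace the reference, which is what makes an admittance interface a natural choice for safe force-guided interaction.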