
Reinforced Edge Selection using Deep Learning for Robust Surveillance in Unmanned Aerial Vehicles

Posted by Soohyun Park
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In this paper, we propose a novel deep Q-network (DQN)-based edge selection algorithm designed specifically for real-time surveillance in unmanned aerial vehicle (UAV) networks. The algorithm jointly optimizes for delay, energy consumption, and buffer overflow, ensuring real-time operation while balancing other environment-related parameters. The merit of the proposed algorithm is verified via simulation-based performance evaluation.
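The abstract suggests a value-based agent whose reward penalizes delay, energy, and overflow. Below is a minimal PyTorch sketch of that idea; the state features, reward weights, and network sizes are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of DQN-based edge selection (illustrative only).
import random
import torch
import torch.nn as nn

NUM_EDGES = 4              # candidate edge servers (assumed)
STATE_DIM = 3 * NUM_EDGES  # per-edge queue length, link delay, energy cost

class QNetwork(nn.Module):
    """Maps the UAV's observed state to a Q-value per candidate edge."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def reward(delay: float, energy: float, overflow: bool,
           w_d: float = 1.0, w_e: float = 0.5, w_o: float = 10.0) -> float:
    """Penalize delay and energy; heavily penalize queue overflow."""
    return -(w_d * delay + w_e * energy + w_o * float(overflow))

def select_edge(q_net: QNetwork, state: torch.Tensor,
                epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection over candidate edges."""
    if random.random() < epsilon:
        return random.randrange(NUM_EDGES)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

q_net = QNetwork(STATE_DIM, NUM_EDGES)
state = torch.rand(STATE_DIM)   # stand-in for real measurements
print("chosen edge:", select_edge(q_net, state))
print("sample reward:", reward(delay=0.12, energy=0.4, overflow=False))
```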




Read also

Personal monitoring devices such as cyclist helmet cameras to record accidents or dash cams to catch collisions have proliferated, with more companies producing smaller and more compact recording gadgets. As these devices become part of citizens' everyday arsenal, concerns over residents' privacy are growing. Therefore, this paper presents SASSL, a secure aerial surveillance drone that uses split learning to classify whether a fire is present on the streets. This split learning method transfers CCTV footage captured by a drone to a nearby server, which runs a deep neural network to detect a fire's presence in real time without exposing the original data. We devise a scenario in which surveillance UAVs roam around the suburbs, recording any unnatural behavior. A UAV can process the recordings through its on-device deep neural network or transfer the information to a server. Due to the resource limitations of mobile UAVs, a UAV does not have the capacity to run an entire deep neural network on its own. This is where split learning comes in handy: the UAV runs the deep neural network only up to the first hidden layer and sends only the resulting feature map to the cloud server, where the rest of the network is processed. By dividing the learning process between the UAV and the server, the privacy of the raw data is preserved while the UAV does not overextend its limited resources.
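As a concrete illustration of the split described above (the UAV computes up to the first hidden layer, the server computes the rest), here is a minimal PyTorch sketch; the layer shapes and the two-class fire/no-fire head are assumptions for demonstration, not the paper's actual architecture.

```python
# Minimal sketch of split inference: the UAV evaluates only the first
# layers and ships the feature map; the raw frame never leaves the UAV.
import torch
import torch.nn as nn

# UAV-side: a single conv block (up to the "first hidden layer").
uav_head = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

# Server-side: the remainder of the classifier (fire / no fire).
server_tail = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),
)

frame = torch.rand(1, 3, 224, 224)   # captured video frame
feature_map = uav_head(frame)        # computed on the UAV
# ... feature_map is transmitted to the server (transport not shown) ...
logits = server_tail(feature_map)    # completed server-side
print("fire probability:", torch.softmax(logits, dim=1)[0, 1].item())
```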
Pervasive applications are revolutionizing the way users perceive the environment. Indeed, pervasive applications perform resource-intensive computations over large amounts of streaming sensor data collected from multiple sources. This allows applications to provide richer and deeper insights into the natural characteristics that govern everything around us. A key limitation of these applications is their high energy footprint, which in turn hampers users' quality of experience. While cloud and edge computing solutions can be applied to alleviate the problem, these solutions are hard to adopt in existing architectures and far from becoming ubiquitous. Fortunately, cloudlets are becoming portable enough that they can be transported and integrated into any environment easily and dynamically. In this article, we investigate how cloudlets can be transported by unmanned autonomous vehicles (UAVs) to provide computation support at the edge. Based on our study, we develop GEESE, a novel UAV-based system that enables the dynamic deployment of an edge computing infrastructure through the cooperation of multiple UAVs carrying cloudlets. Using GEESE, we conduct rigorous experiments to analyze the effort required to deliver cloudlets using aerial, ground, and underwater UAVs. Our results indicate that UAVs can work cooperatively to enable edge computing in the wild.
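To make the cooperative-deployment idea tangible, here is a toy dispatch sketch that assigns cloudlet-carrying UAVs to service locations by distance; the greedy nearest-UAV rule and the coordinates are purely illustrative assumptions, not GEESE's actual scheduler.

```python
# Toy sketch of dispatching cloudlet-carrying UAVs to service points.
import math

uavs = {"aerial-1": (0.0, 0.0), "ground-1": (5.0, 1.0),
        "underwater-1": (2.0, 8.0)}
requests = [(1.0, 1.0), (4.5, 2.0), (2.5, 7.0)]

def dispatch(uavs: dict, requests: list) -> dict:
    """Greedily match each request to the closest still-free UAV."""
    free = dict(uavs)
    plan = {}
    for rx, ry in requests:
        best = min(free, key=lambda u: math.dist(free[u], (rx, ry)))
        plan[best] = (rx, ry)
        del free[best]   # one cloudlet per UAV in this toy model
    return plan

print(dispatch(uavs, requests))
```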
Astronomical adaptive optics (AO) systems are used to increase effective telescope resolution. However, they cannot be used to observe the whole sky, since one or more natural guide stars of sufficient brightness must be found within the telescope's field of view for the AO system to work. Even when laser guide stars are used, natural guide stars are still required to provide a constant position reference. Here, we introduce a technique to overcome this problem by using rotary unmanned aerial vehicles (UAVs) as a platform from which to produce artificial guide stars. We describe the concept, which relies on the UAV being able to measure its precise relative position. We investigate the achievable adaptive optics performance improvements, which in the cases presented here can improve the Strehl ratio by a factor of at least 2 for an 8 m class telescope. We also discuss improvements to this technique, which is relevant to both astronomical and solar adaptive optics systems.
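For context, one standard way to relate a Strehl-ratio gain to residual wavefront quality is the extended Maréchal approximation; the relation below is textbook optics, not a result taken from the article itself.

```latex
% Extended Marechal approximation: Strehl ratio S vs. residual
% RMS wavefront error \sigma (in radians of phase).
S \approx e^{-\sigma^{2}}
% Doubling the Strehl ratio therefore corresponds to reducing the
% residual phase variance by \ln 2:
\frac{S_{2}}{S_{1}} = e^{\sigma_{1}^{2} - \sigma_{2}^{2}} = 2
\;\Longrightarrow\;
\sigma_{2}^{2} = \sigma_{1}^{2} - \ln 2 .
```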
Advancements in artificial intelligence (AI) provide a great opportunity to develop autonomous devices. The contribution of this work is an improved convolutional neural network (CNN) model and its implementation for the detection of road cracks, potholes, and yellow lanes. The purpose of yellow lane detection and tracking is to realize autonomous navigation of an unmanned aerial vehicle (UAV) by following the yellow lane while detecting and reporting road cracks and potholes to a server over a Wi-Fi or 5G link. Fabricating one's own data set is a laborious and time-consuming task. The data set is created, labeled, and used to train both a default and an improved model. The performance of the two models is benchmarked with respect to accuracy, mean average precision (mAP), and detection time. In the testing phase, the improved model performed better in terms of accuracy and mAP. The improved model is implemented on a UAV using the Robot Operating System for the autonomous detection of potholes and cracks in roads via the UAV's front camera in real time.
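Since the comparison above hinges on mAP, here is a minimal sketch of how per-class average precision (the quantity averaged to obtain mAP) can be computed from ranked detections; the toy scores and hit labels are illustrative assumptions, not the paper's evaluation data.

```python
# Minimal sketch of per-class average precision (AP) for detection.
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Area under the precision-recall curve for one class."""
    order = np.argsort(scores)[::-1]   # rank detections by confidence
    hits = np.asarray(is_true_positive)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(~hits)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # Step-wise integration of precision over recall.
    return float(recall[0] * precision[0]
                 + np.sum((recall[1:] - recall[:-1]) * precision[1:]))

# Toy detections for the "pothole" class: 3 ground-truth objects.
scores = [0.9, 0.8, 0.6, 0.3]
hits = [True, False, True, True]
print("AP:", average_precision(scores, hits, num_gt=3))
```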
The capabilities of autonomous flight with unmanned aerial vehicles (UAVs) have increased significantly in recent times. However, basic problems such as fast and robust geo-localization in GPS-denied environments remain unsolved. Existing research has primarily concentrated on improving the accuracy of localization at the cost of long and varying computation times, which often necessitates the use of powerful ground-station machines. To make image-based geo-localization online and practical for lightweight embedded systems on UAVs, we propose a framework that is reliable under changing scenes, flexible about computing resource allocation, and adaptable to common camera placements. The framework comprises two stages: offline database preparation and online inference. In the first stage, color images and depth maps are rendered as seen from potential vehicle poses quantized over the satellite and topography maps of the anticipated flying areas. A database is then populated with the global and local descriptors of the rendered images. In the second stage, for each captured real-world query image, the top global matches are retrieved from the database and the vehicle pose is further refined via local descriptor matching. We present field experiments of image-based localization on two different UAV platforms to validate our results.
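The two-stage pipeline above (global retrieval, then local refinement) can be sketched as follows; the random descriptors, database size, and cosine-similarity scoring are stand-in assumptions, since the actual descriptors and matching are learned and feature-based.

```python
# Minimal sketch of two-stage lookup: coarse retrieval with global
# descriptors, then refinement among the top candidates.
import numpy as np

rng = np.random.default_rng(0)
DB_SIZE, DIM, TOP_K = 1000, 128, 5

db_descriptors = rng.normal(size=(DB_SIZE, DIM))  # offline-rendered views
db_poses = rng.uniform(size=(DB_SIZE, 6))         # x, y, z, roll, pitch, yaw

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def localize(query_descriptor):
    # Stage 1: global retrieval by cosine similarity.
    sims = normalize(db_descriptors) @ normalize(query_descriptor)
    candidates = np.argsort(sims)[-TOP_K:][::-1]
    # Stage 2 (stubbed): re-rank candidates by local descriptor
    # matching; here we simply keep the best global match.
    best = candidates[0]
    return db_poses[best], candidates

query = rng.normal(size=DIM)   # stand-in for a query image descriptor
pose, shortlist = localize(query)
print("estimated pose:", pose)
print("shortlist:", shortlist)
```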
