
Recovery of Meteorites Using an Autonomous Drone and Machine Learning

Posted by Robert Citron
Publication date: 2021
Research language: English





The recovery of freshly fallen meteorites from tracked and triangulated meteors is critical to determining their source asteroid families. However, locating meteorite fragments in strewn fields remains a challenge, and very few meteorites have been recovered from the meteors triangulated by past and ongoing meteor camera networks. We examined whether locating meteorites can be automated using machine learning and an autonomous drone. Drones can be programmed to fly a grid search pattern and take systematic pictures of the ground over a large survey area. Those images can then be analyzed with a machine learning classifier to distinguish meteorites from the many other features in the field. Here, we describe a proof-of-concept meteorite classifier that deploys, offline, a combination of different convolutional neural networks to recognize meteorites in images taken by drones in the field. The system was implemented in a conceptual drone setup and tested in the suspected strewn field of a recent meteorite fall near Walker Lake, Nevada.
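
Below is a minimal sketch of how such an offline, tiled classification pass might look, assuming a PyTorch ensemble of two small backbones whose scores are averaged; the backbone choices, tile size, and decision threshold are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: ensemble CNN scoring of drone survey photos.
# resnet18/mobilenet_v2, the 224 px tile, and the 0.5 threshold are
# illustrative assumptions; trained weights would be loaded from
# checkpoints produced during classifier training.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

net_a = models.resnet18(num_classes=2)      # assumed backbone A
net_b = models.mobilenet_v2(num_classes=2)  # assumed backbone B
net_a.eval()
net_b.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def score_tile(tile: Image.Image) -> float:
    """Average the meteorite-class probability across both networks."""
    x = preprocess(tile).unsqueeze(0)
    with torch.no_grad():
        p_a = torch.softmax(net_a(x), dim=1)[0, 1]
        p_b = torch.softmax(net_b(x), dim=1)[0, 1]
    return float((p_a + p_b) / 2)

def scan_photo(path: str, tile: int = 224, threshold: float = 0.5):
    """Slide a non-overlapping grid over one survey photo and return
    the pixel offsets of tiles flagged as meteorite candidates."""
    img = Image.open(path).convert("RGB")
    hits = []
    for y in range(0, img.height - tile + 1, tile):
        for x in range(0, img.width - tile + 1, tile):
            if score_tile(img.crop((x, y, x + tile, y + tile))) > threshold:
                hits.append((x, y))
    return hits
```

In practice, each flagged tile offset would be mapped back to a ground footprint using the drone's logged pose before a search team investigates the candidate.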




Read also

We present a novel methodology for recovering meteorite falls observed and constrained by fireball networks, using drones and machine learning algorithms. This approach uses images of the local terrain for a given fall site to train an artificial neural network designed to detect meteorite candidates. We have field-tested our methodology, demonstrating a meteorite detection rate of 75-97% while also providing an efficient mechanism to eliminate false positives. Our tests at a number of locations within Western Australia also showcase the ability of this training scheme to produce models that learn localized terrain features. Our model-training approach was also able to correctly identify three meteorites at their original fall sites that had been found using traditional searching techniques. Our methodology will be used to recover meteorite falls across a wide range of locations within globe-spanning fireball networks.
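
One plausible way to realize the site-specific training described above is to composite meteorite cutouts onto terrain tiles photographed at the fall site; the sketch below assumes RGBA cutout images with transparent backgrounds and terrain photos larger than the tile size, and is an illustrative guess at a data pipeline, not the authors' exact scheme.

```python
# Hedged sketch: build a site-specific training set by pasting meteorite
# cutouts (RGBA PNGs) onto random crops of local-terrain photos (JPEGs
# assumed larger than the tile size). Filenames encode the label.
import random
from pathlib import Path
from PIL import Image

def make_training_tiles(terrain_dir, cutout_dir, out_dir,
                        n=1000, tile=256):
    terrain = list(Path(terrain_dir).glob("*.jpg"))
    cutouts = list(Path(cutout_dir).glob("*.png"))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(n):
        bg = Image.open(random.choice(terrain)).convert("RGB")
        # Random terrain crop serves as the (negative) background.
        x = random.randint(0, bg.width - tile)
        y = random.randint(0, bg.height - tile)
        patch = bg.crop((x, y, x + tile, y + tile))
        label = 0
        if random.random() < 0.5:  # half the tiles receive a meteorite
            rock = Image.open(random.choice(cutouts)).convert("RGBA")
            scale = random.uniform(0.05, 0.15) * tile / max(rock.size)
            rock = rock.resize((max(1, int(rock.width * scale)),
                                max(1, int(rock.height * scale))))
            px = random.randint(0, tile - rock.width)
            py = random.randint(0, tile - rock.height)
            patch.paste(rock, (px, py), rock)  # alpha-composite onto terrain
            label = 1
        patch.save(out / f"{label}_{i:05d}.jpg")
```
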
On the 27th of November 2015, at 10:43:45.526 UTC, a fireball lasting 6.1 s was observed across South Australia by ten Desert Fireball Network observatories. A $\sim 37$ kg meteoroid entered the atmosphere with a speed of $13.68\pm0.09\,\mbox{km s}^{-1}$ and was observed ablating from a height of 85 km down to 18 km, having slowed to $3.28\pm0.21\,\mbox{km s}^{-1}$. Despite the relatively steep $68.5^\circ$ trajectory, strong atmospheric winds significantly influenced the darkflight phase and the predicted fall line, but the analysis put the fall site in the centre of Kati Thanda - Lake Eyre South. Kati Thanda has metres-deep mud under its salt-encrusted surface. Reconnaissance of the area where the meteorite landed, from a low-flying aircraft, revealed a 60 cm circular feature in the muddy lake, less than 50 m from the predicted fall line. After a short search, which again employed light aircraft, the meteorite was recovered on the 31st of December 2015 from a depth of 42 cm. Murrili is the first observed fall recovered by the digital Desert Fireball Network (DFN). In addition to its scientific value, connecting composition to solar system context via orbital data, the recovery demonstrates and validates the capabilities of the DFN, with its next-generation remote observatories and automated data reduction pipeline.
We introduce a new machine learning based technique to detect exoplanets using the transit method. Machine learning and deep learning techniques have proven to be broadly applicable in various scientific research areas. We aim to exploit some of these methods to improve the conventional algorithm-based approaches presently used in astrophysics to detect exoplanets. Using the time-series analysis library TSFresh to analyse light curves, we extracted 789 features from each curve, which capture the information about the characteristics of a light curve. We then used these features to train a gradient boosting classifier using the machine learning tool lightgbm. This approach was tested on simulated data, which showed that it is more effective than the conventional box least squares (BLS) fitting method. We further found that our method produced comparable results to existing state-of-the-art deep learning models, while being much more computationally efficient and without needing folded and secondary views of the light curves. For Kepler data, the method is able to predict a planet with an AUC of 0.948, meaning that 94.8 per cent of the true planet signals are ranked higher than non-planet signals. The resulting recall is 0.96, so 96 per cent of real planets are classified as planets. For Transiting Exoplanet Survey Satellite (TESS) data, we found our method can classify light curves with an accuracy of 0.98 and can identify planets with a recall of 0.82 at a precision of 0.63.
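
The described pipeline maps directly onto the tsfresh and lightgbm Python libraries; a minimal sketch follows, assuming light curves stored in a long-format DataFrame with columns (id, time, flux) and illustrative hyperparameters.

```python
# Hedged sketch: TSFresh feature extraction followed by a LightGBM
# gradient boosting classifier, as in the described approach; the
# column names and hyperparameters here are assumptions.
import pandas as pd
import lightgbm as lgb
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute

def train_transit_classifier(curves: pd.DataFrame, labels: pd.Series):
    """curves: long-format light curves; labels: 1 = planet, indexed by id."""
    # Per-curve summary features (the paper extracted 789 of them).
    X = extract_features(curves, column_id="id", column_sort="time",
                         column_value="flux")
    impute(X)  # replace NaN/inf values produced during extraction
    model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
    model.fit(X.loc[labels.index], labels)
    return model

def rank_candidates(model, X: pd.DataFrame):
    """Higher score = more planet-like; ranking quality is what the
    reported AUC of 0.948 on Kepler data measures."""
    return model.predict_proba(X)[:, 1]
```
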
Decentralized drone swarms deployed today either rely on sharing positions among agents or on detecting swarm members with the help of visual markers. This work proposes an entirely visual approach to coordinating markerless drone swarms based on imitation learning. Each agent is controlled by a small and efficient convolutional neural network that takes raw omnidirectional images as inputs and predicts 3D velocity commands that match those computed by a flocking algorithm. We start training in simulation and propose a simple yet effective unsupervised domain adaptation approach to transfer the learned controller to the real world. We further train the controller with data collected in our motion capture hall. We show that the convolutional neural network trained on the visual inputs of the drone can learn not only robust inter-agent collision avoidance but also cohesion of the swarm, in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. We remove the dependence on sharing positions among swarm members by taking only local visual information into account for control. Our work can therefore be seen as a first step towards a fully decentralized, vision-based swarm without the need for communication or visual markers.
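
A minimal sketch of such an imitation-learning setup: a compact CNN regressed onto the velocity commands of a flocking algorithm. The input resolution and layer sizes below are assumptions, not the authors' architecture.

```python
# Hedged sketch: CNN mapping a raw (grayscale, unwrapped) omnidirectional
# frame to a 3D velocity command, trained by regression against a
# flocking algorithm's output. Shapes and sizes are illustrative.
import torch
import torch.nn as nn

class VisualFlockNet(nn.Module):
    """Small CNN: omnidirectional image in, (vx, vy, vz) command out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(img).flatten(1))

def imitation_step(net, optimizer, images, expert_velocities):
    """One supervised step: regress network output onto the expert
    (flocking-algorithm) velocity commands."""
    loss = nn.functional.mse_loss(net(images), expert_velocities)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
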
We present fully autonomous source seeking onboard a highly constrained nano quadcopter, contributing application-specific system and observation feature design to enable inference of a deep-RL policy onboard a nano quadcopter. Our deep-RL algorithm finds a high-performance solution to a challenging problem, even in the presence of high noise levels, and generalizes across real and simulated environments with different obstacle configurations. We verify our approach with simulation and in-field testing on a Bitcraze CrazyFlie, using only the cheap and ubiquitous Cortex-M4 microcontroller unit. The results show that, through end-to-end application-specific system design, our contribution consumes almost three times less additional power than a competing learning-based navigation approach onboard a nano quadcopter. Thanks to our observation space, which we carefully design within the resource constraints, our solution achieves a 94% success rate in cluttered and randomized test environments, compared to the previously achieved 80%. We also compare our strategy to a simple finite state machine (FSM) geared towards efficient exploration, and demonstrate that our policy is more robust and resilient at obstacle avoidance, as well as up to 70% more efficient in source seeking. To this end, we contribute a cheap and lightweight end-to-end tiny robot learning (tinyRL) solution, running onboard a nano quadcopter, that proves to be robust and efficient in a challenging task using limited sensory input.
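
As a rough illustration of what fits in such a power and memory budget, here is a sketch of a policy network sized for a Cortex-M4 class MCU; the observation vector and discrete action set are assumptions, since the exact observation features and action space are part of the paper's design.

```python
# Hedged sketch: a few-kilobyte MLP policy small enough, once quantized
# and exported, to run on a Cortex-M4. OBS_DIM and N_ACTIONS are
# assumed placeholders for the paper's observation and action design.
import torch
import torch.nn as nn

OBS_DIM = 6    # assumed: e.g. 4 range readings + source intensity + heading
N_ACTIONS = 4  # assumed: e.g. forward, rotate left, rotate right, hover

class TinyPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, N_ACTIONS),
        )

    def act(self, obs: torch.Tensor) -> int:
        # Greedy action from the learned action values.
        with torch.no_grad():
            return int(self.net(obs).argmax())
```
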

