
Probabilistic Localization of Insect-Scale Drones on Floating-Gate Inverter Arrays

Added by Priyesh Shukla
Publication date: 2021
Language: English





We propose a novel compute-in-memory (CIM)-based ultra-low-power framework for probabilistic localization of insect-scale drones. Conventional probabilistic localization approaches rely on a three-dimensional (3D) Gaussian Mixture Model (GMM)-based representation of a 3D map. A GMM with hundreds of mixture functions is typically needed to adequately learn and represent the intricacies of the map, and localization using such complex GMM map models is computationally intensive. Since insect-scale drones operate under an extremely limited area/power budget, continuous localization using GMM models entails a much higher operating energy, thereby limiting the flying duration and/or increasing the size of the drone due to a larger battery. Addressing the computational challenges of localization in an insect-scale drone with a CIM approach, we propose a novel framework of 3D map representation using a harmonic mean of Gaussian-like mixture (HMGM) model. The likelihood function used for drone localization can be efficiently implemented by connecting many multi-input inverters in parallel, each programmed with the parameters of the 3D map model represented as an HMGM. When the depth measurements are projected to the inputs of the implementation, the summed current of the inverters emulates the likelihood of the measurement. We have characterized our approach on an RGB-D indoor localization dataset. The average localization error of our approach is $\sim$0.1125 m, only slightly degraded compared to a software-based evaluation ($\sim$0.08 m). Meanwhile, our localization framework is ultra-low-power, consuming as little as $\sim$17 $\mu$W while processing a depth frame in 1.33 ms over a hundred pose hypotheses in the particle-filtering (PF) algorithm used to localize the drone.
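
As a rough illustration of the computation that the inverter array replaces, below is a minimal software sketch of the particle-filter measurement update: each pose hypothesis projects the body-frame depth points into the map and is re-weighted by a harmonic-mean-of-Gaussians likelihood. The abstract does not spell out the HMGM formula or the pose parametrization, so the map parameters, likelihood form, and 4-DoF pose used here are illustrative assumptions rather than the paper's model.

```python
import numpy as np

# Hypothetical HMGM map: K components with 3D means and a shared isotropic variance.
# The abstract does not give the exact HMGM formula; this sketch assumes the
# per-point likelihood is the harmonic mean of K Gaussian-like component values.
rng = np.random.default_rng(0)
K = 8
means = rng.uniform(0.0, 5.0, size=(K, 3))   # component centers in the 3D map
var = 0.05                                   # shared isotropic variance

def hmgm_likelihood(points):
    """Harmonic-mean-of-Gaussians likelihood for an (N, 3) array of map points."""
    d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    g = np.exp(-0.5 * d2 / var) + 1e-12                            # Gaussian-like terms
    return K / (1.0 / g).sum(axis=1)                               # harmonic mean over K

def pf_update(particles, weights, depth_points_body):
    """One particle-filter measurement update over all pose hypotheses."""
    logw = np.log(weights)
    for i, (x, y, z, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts_map = depth_points_body @ R.T + np.array([x, y, z])    # project depth points
        logw[i] += np.log(hmgm_likelihood(pts_map)).sum()
    logw -= logw.max()                                             # normalize in log space
    weights = np.exp(logw)
    return weights / weights.sum()

# ~100 pose hypotheses, matching the abstract's particle-filtering setup.
particles = rng.uniform([0, 0, 0, -np.pi], [5, 5, 3, np.pi], size=(100, 4))
weights = np.full(100, 1.0 / 100)
depth_points_body = rng.normal(0.0, 1.0, size=(64, 3))             # toy depth frame
weights = pf_update(particles, weights, depth_points_body)
print("most likely pose hypothesis:", particles[np.argmax(weights)])
```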



Related research

SLAM-based techniques are often adopted for solving the navigation problem for drones in GPS-denied environments. Despite the widespread success of these approaches, they have not yet been fully exploited for automation in warehouse systems due to expensive sensors and setup requirements. This paper focuses on the use of low-cost, monocular-camera-equipped drones for performing warehouse management tasks such as inventory scanning and position updates. The methods introduced work with the existing state of warehouse environments today, i.e., the grid network already laid out for ground vehicles, hence eliminating any additional infrastructure requirement for drone deployment. Since we lack scale information, which in itself precludes the use of 3D techniques, we focus on optimizing standard image-processing algorithms such as thick-line detection, and on developing them into a fast and robust grid-localization framework. In this paper, we present different line-detection algorithms, their significance in grid localization, and their limitations. We further extend our proposed implementation towards a real-time navigation stack for an actual warehouse inspection scenario. Our line-detection method using skeletonization and a centroid strategy performs well even with varying lighting conditions, line thicknesses, colors, orientations, and partial occlusions. A simple yet effective Kalman filter smooths the $\rho$ and $\theta$ outputs of the two line-detection methods for better drone control while following the grid (a minimal sketch of this smoothing step is given below). A generic strategy that handles the navigation of the drone on a grid for completion of the allotted task is also developed. Based on simulation and real-life experiments, the final developments in drone localization and navigation in a structured environment are discussed.
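
The $(\rho, \theta)$ smoothing mentioned above can be pictured as a small constant-state Kalman filter over the detected line's parameters; the class, noise values, and identity motion model below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

class RhoThetaKF:
    """Minimal constant-state Kalman filter smoothing the (rho, theta) parameters
    of a detected line; the noise values here are illustrative only."""
    def __init__(self, q=1e-3, r=5e-2):
        self.x = None                     # state estimate [rho, theta]
        self.P = np.eye(2)                # state covariance
        self.Q = q * np.eye(2)            # process noise
        self.R = r * np.eye(2)            # measurement noise

    def update(self, z):
        z = np.asarray(z, dtype=float)
        if self.x is None:                # initialize on the first detection
            self.x = z.copy()
            return self.x
        # Predict with an identity motion model (the tracked line changes slowly).
        self.P = self.P + self.Q
        # Correct with the new (rho, theta) measurement.
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

kf = RhoThetaKF()
for z in [(102.0, 0.52), (98.5, 0.55), (110.0, 0.49), (101.0, 0.53)]:
    print("smoothed rho/theta:", kf.update(z))
```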
J. B. Kim, E. Won (2017)
Pipelined algorithms implemented in field-programmable gate arrays are extensively used for hardware triggers in modern experimental high-energy physics, and the complexity of such algorithms is increasing rapidly. For the development of such hardware triggers, algorithms are developed in $\texttt{C++}$, ported to a hardware description language for synthesizing firmware, and then ported back to $\texttt{C++}$ for simulating the firmware response down to the single-bit level. We present a $\texttt{C++}$ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
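
To make the idea of bit-level pipeline simulation concrete, here is a toy cycle-accurate model of a two-stage pipelined multiply-add with a fixed bit width. It only illustrates the kind of firmware behavior such a framework simulates; the widths, staging, and function names are invented for the example (and it is written in Python rather than the framework's $\texttt{C++}$).

```python
WIDTH = 16
MASK = (1 << WIDTH) - 1

def pipelined_mac(a_stream, b_stream, c_stream):
    """Yield (a * b + c) & MASK with a two-cycle latency, mimicking the pipeline
    registers a firmware implementation would insert between multiply and add."""
    stage1 = None            # register holding (a*b, c) between the two stages
    stage2 = None            # output register
    for a, b, c in zip(a_stream, b_stream, c_stream):
        out = stage2                                               # value clocked out this cycle
        stage2 = None if stage1 is None else (stage1[0] + stage1[1]) & MASK
        stage1 = ((a * b) & MASK, c)
        if out is not None:
            yield out
    for _ in range(2):                                             # flush the pipeline
        out = stage2
        stage2 = None if stage1 is None else (stage1[0] + stage1[1]) & MASK
        stage1 = None
        if out is not None:
            yield out

print(list(pipelined_mac([3, 5, 7, 9], [2, 4, 6, 8], [1, 1, 1, 1])))  # [7, 21, 43, 73]
```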
Quantum annealing machines based on superconducting qubits, which have the potential to solve optimization problems faster than digital computers, are of great interest not only to researchers but also to the general public. Here, we propose a quantum annealing machine based on a semiconductor floating-gate (FG) array. We use the same device structure as that of commercial FG NAND flash memory, except for small differences such as a thinner tunneling barrier. We theoretically derive an Ising Hamiltonian from the FG system in its single-electron region. Recent high-density NAND flash memories are subject to several problems that originate from their small FG cells. To store information reliably, the number of electrons in each FG cell should be sufficiently large; however, the number of electrons stored in each FG cell is becoming smaller and can be countable. We therefore utilize this countable-electron regime to exploit single-electron effects in the FG cells. Second, in conventional NAND flash memory, the high density of FG cells induces the problem of cell-to-cell interference through mutual capacitive couplings; this interference problem is usually mitigated in software by error-correcting codes. We instead derive the Ising interaction from this natural capacitive coupling. Considering the cell size of 10 nm, the operation temperature is expected to be approximately that of liquid nitrogen. If a commercial 64 Gbit NAND flash memory is used, we expect that it would ideally be possible to construct 2 megabytes (MB) of entangled qubits using the conventional fabrication processes in the same factory used to manufacture NAND flash memory. A qubit system of the highest density would thus be obtained as a natural extension of the miniaturization of commonly used memories.
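
For reference, the target form is the standard Ising Hamiltonian,
$$H = -\sum_{\langle i,j\rangle} J_{ij}\,\sigma_i\sigma_j - \sum_i h_i\,\sigma_i, \qquad \sigma_i \in \{-1,+1\},$$
where, per the abstract, the couplings $J_{ij}$ come from the natural capacitive cell-to-cell coupling; reading the local fields $h_i$ as per-cell programming is an assumption made here, and the exact mapping is derived in the paper rather than stated in the abstract.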
This paper presents a vision-based modularized drone racing navigation system that uses a customized convolutional neural network (CNN) for the perception module to produce high-level navigation commands, and then leverages a state-of-the-art planner and controller to generate low-level control commands, thus exploiting the advantages of both data-based and model-based approaches. Unlike the state-of-the-art method, which takes only the current camera image as the CNN input, we additionally feed the latest three drone states as inputs. Our method outperforms the state-of-the-art method in various track layouts and offers two switchable navigation behaviors with a single trained network. The CNN-based perception module is trained to imitate an expert policy that automatically generates ground-truth navigation commands based on pre-computed global trajectories. Owing to extensive randomization and our modified dataset aggregation (DAgger) policy during data collection, our navigation system, which is trained purely in simulation with synthetic textures, successfully operates in environments with randomly chosen photorealistic textures without further fine-tuning.
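
A minimal sketch of what feeding "the current image plus the latest three drone states" to the perception network can look like is below; the layer sizes, state dimension, and command dimension are placeholders and not the paper's architecture.

```python
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Sketch of a perception module that fuses the camera image with the latest
    three drone states (all sizes are illustrative, not the paper's design)."""
    def __init__(self, state_dim=9, cmd_dim=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 3 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, cmd_dim),           # high-level navigation command
        )

    def forward(self, image, last_states):
        feat = self.cnn(image)                             # (B, 32) image features
        x = torch.cat([feat, last_states.flatten(1)], 1)   # append the 3 stacked states
        return self.head(x)

net = PerceptionNet()
cmd = net(torch.randn(1, 3, 120, 160), torch.randn(1, 3, 9))
print(cmd.shape)   # torch.Size([1, 4])
```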
This work focuses on formation reshaping in an optimized manner in an autonomous swarm of drones. The two main problems are: 1) how to break and reshape the initial formation in an optimal manner, and 2) how to perform such a reformation while minimizing the overall deviation of the drones and the overall time, i.e., without slowing down. To address the first problem, we introduce a set of routines for the drones/agents to follow while reshaping to a secondary formation shape. The second problem is resolved by utilizing the temperature-function reduction technique originally used in the point-set registration process. The goal is to dynamically reform the shape of a multi-agent swarm in a near-optimal manner while going through narrow openings between, for instance, obstacles, and then bring the agents back to their original shape after passing through the narrow passage using the point-set registration technique.
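
As a toy picture of the temperature-reduction idea, the sketch below lets drones drift toward softly-assigned formation slots while the assignment sharpens as a temperature parameter decays; the schedule, step size, and cost are illustrative, and no one-to-one matching is enforced.

```python
import numpy as np

def reshape_formation(current, target, t0=4.0, t_min=0.05, rate=0.8, step=0.3):
    """Toy temperature-annealed reshaping in the spirit of the point-set-registration
    trick mentioned above: drones drift toward softly-assigned formation slots and
    the assignment sharpens as the temperature drops."""
    pos = current.copy()
    t = t0
    while t > t_min:
        d2 = ((pos[:, None, :] - target[None, :, :]) ** 2).sum(-1)   # drone-to-slot costs
        m = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / t)        # soft assignment
        m /= m.sum(axis=1, keepdims=True)
        pos += step * (m @ target - pos)                             # move toward weighted slot
        t *= rate                                                    # reduce the temperature
    return m.argmax(axis=1), pos

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 10.0, size=(6, 2))   # current drone positions (toy 2D)
slots = rng.uniform(0.0, 10.0, size=(6, 2))     # desired formation slots
assignment, final_pos = reshape_formation(current, slots)
print("drone -> slot:", assignment)
```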