SLAM-based techniques are often adopted to solve the navigation problem for drones in GPS-denied environments. Despite the widespread success of these approaches, they have not yet been fully exploited for warehouse automation due to their expensive sensors and setup requirements. This paper focuses on the use of drones equipped with low-cost monocular cameras for warehouse management tasks such as inventory scanning and position updates. The methods introduced rely only on infrastructure already present in today's warehouses, namely the grid network laid out for ground vehicles, hence eliminating any additional infrastructure requirements for drone deployment. Since a monocular camera provides no scale information, which in itself rules out 3D techniques, we focus on optimizing standard image-processing algorithms such as thick-line detection, developing them into a fast and robust grid-localization framework. In this paper, we present different line detection algorithms, their significance for grid localization, and their limitations. We further extend our proposed implementation into a real-time navigation stack for an actual warehouse inspection scenario. Our line detection method, based on skeletonization and a centroid strategy, performs reliably even under varying lighting conditions, line thicknesses, colors, orientations, and partial occlusions. A simple yet effective Kalman filter smooths the $\rho$ and $\theta$ outputs of the two line detection methods for better drone control during grid following. A generic strategy that handles the navigation of the drone on the grid to complete the allotted task is also developed. Finally, based on simulation and real-world experiments, we discuss the final developments in drone localization and navigation in this structured environment.
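As a concrete illustration of the smoothing step above, the following is a minimal sketch of a constant-state Kalman filter over the detected line parameters $\rho$ and $\theta$. The noise covariances and the identity motion model are illustrative assumptions, not the paper's tuned values, and angle wrap-around is ignored for brevity.

```python
import numpy as np

class LineKalmanFilter:
    """Smooths (rho, theta) line measurements with a constant-state model."""

    def __init__(self, q=1e-3, r=1e-1):
        self.x = None              # state estimate: [rho, theta]
        self.P = np.eye(2)         # state covariance
        self.Q = q * np.eye(2)     # process noise (assumed, untuned)
        self.R = r * np.eye(2)     # measurement noise (assumed, untuned)

    def update(self, rho, theta):
        z = np.array([rho, theta])
        if self.x is None:         # initialize on the first detection
            self.x = z
            return self.x
        # Predict: identity motion model, so only the covariance grows.
        self.P = self.P + self.Q
        # Update: standard Kalman gain with measurement matrix H = I.
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

# Usage: feed each raw Hough-style (rho, theta) detection, once per frame.
kf = LineKalmanFilter()
smoothed = kf.update(120.0, 1.57)
```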
We propose a novel compute-in-memory (CIM)-based ultra-low-power framework for probabilistic localization of insect-scale drones. Conventional probabilistic localization approaches rely on a three-dimensional (3D) Gaussian Mixture Model (GMM)-based representation of a 3D map. A GMM with hundreds of mixture functions is typically needed to adequately learn and represent the intricacies of the map, and localization using such complex GMM map models is computationally intensive. Since insect-scale drones operate under an extremely limited area/power budget, continuous localization using GMM models entails a much higher operating energy, thereby limiting flying duration and/or increasing the size of the drone due to a larger battery. Addressing the computational challenges of localization in an insect-scale drone with a CIM approach, we propose a novel framework for 3D map representation using a harmonic mean of Gaussian-like mixture (HMGM) model. The likelihood function used for drone localization can be efficiently implemented by connecting many multi-input inverters in parallel, each programmed with the parameters of the 3D map model represented as an HMGM. When the depth measurements are projected to the input of the implementation, the summed current of the inverters emulates the likelihood of the measurement. We have characterized our approach on an RGB-D indoor localization dataset. The average localization error of our approach is $\sim$0.1125 m, only slightly worse than the software-based evaluation ($\sim$0.08 m). Meanwhile, our localization framework is ultra-low-power, consuming as little as $\sim$17 $\mu$W while processing a depth frame in 1.33 ms over a hundred pose hypotheses in the particle-filtering (PF) algorithm used to localize the drone.
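The abstract does not spell out the HMGM likelihood in closed form, so the sketch below is only one plausible software reference for it: each depth point is scored against K Gaussian-like kernels, which are combined with a harmonic mean rather than the usual weighted sum; this is the quantity the summed inverter currents would emulate. All shapes and parameter values are illustrative assumptions.

```python
import numpy as np

def hmgm_log_likelihood(points, means, inv_covs, eps=1e-12):
    """points: (N, 3) depth points; means: (K, 3) component centers;
       inv_covs: (K, 3, 3) inverse covariances. Illustrative only."""
    N, K = points.shape[0], means.shape[0]
    g = np.empty((N, K))
    for k in range(K):
        d = points - means[k]                          # (N, 3) residuals
        m = np.einsum('ni,ij,nj->n', d, inv_covs[k], d)  # Mahalanobis terms
        g[:, k] = np.exp(-0.5 * m)                     # Gaussian-like kernel
    # Harmonic mean across the K components, per point.
    hm = K / np.sum(1.0 / (g + eps), axis=1)
    return np.sum(np.log(hm + eps))                    # per-frame score

# In a PF loop, each pose hypothesis would transform the depth points into
# the map frame and be weighted by exp(hmgm_log_likelihood(...)).
```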
The core problem of visual multi-robot simultaneous localization and mapping (MR-SLAM) is how to efficiently and accurately perform multi-robot global localization (MR-GL). The difficulties are twofold. The first is global localization under significant viewpoint differences: appearance-based localization methods tend to fail under large viewpoint changes. Recently, semantic graphs have been utilized to overcome the viewpoint variation problem, but these methods are highly time-consuming, especially in large-scale environments. This leads to the second difficulty: how to perform global localization in real time. In this paper, we propose a semantic histogram-based graph matching method that is robust to viewpoint variation and achieves real-time global localization. Based on it, we develop a system that can accurately and efficiently perform MR-GL for both homogeneous and heterogeneous robots. The experimental results show that our approach is about 30 times faster than Random Walk-based semantic descriptors, and it achieves a global localization accuracy of 95%, compared to 85% for the state-of-the-art method.
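A hedged sketch of the general idea, not the paper's exact descriptor: each graph vertex is described by a normalized histogram over the semantic labels in its neighborhood, and vertices from two robots' graphs are matched by histogram similarity. The descriptor construction and the greedy matching below are illustrative simplifications.

```python
import numpy as np

def semantic_histograms(adjacency, labels, num_classes):
    """adjacency: dict vertex -> iterable of neighbor vertices;
       labels: dict vertex -> semantic class id."""
    hists = {}
    for v, nbrs in adjacency.items():
        h = np.zeros(num_classes)
        h[labels[v]] += 1.0               # include the vertex itself
        for n in nbrs:
            h[labels[n]] += 1.0           # accumulate neighbor labels
        hists[v] = h / h.sum()            # normalize across neighborhood sizes
    return hists

def match(hists_a, hists_b):
    """Greedy best match by histogram-intersection similarity."""
    pairs = []
    for va, ha in hists_a.items():
        vb = max(hists_b, key=lambda v: np.minimum(ha, hists_b[v]).sum())
        pairs.append((va, vb))
    return pairs
```

Because the histograms depend only on semantic labels and graph topology, not on appearance, the matching is largely insensitive to viewpoint changes.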
The search for new materials, based on computational screening, relies on methods that accurately predict, in an automatic manner, the total energy, atomic-scale geometry, and other fundamental characteristics of materials. Many technologically important material properties stem directly from the electronic structure of a material, but the usual workhorse for total energies, namely density-functional theory, is plagued by fundamental shortcomings and by errors from approximate exchange-correlation functionals in its prediction of the electronic structure. In contrast, the $GW$ method is currently the state-of-the-art {\em ab initio} approach for accurate electronic structure. It is mostly used to perturbatively correct density-functional theory results, but it is computationally demanding and requires expert knowledge to give accurate results. Accordingly, it is not presently used in high-throughput screening: fully automated algorithms for setting up the calculations and determining convergence are lacking. In this work we develop such a method and, as a first application, use it to validate the accuracy of $G_0W_0$ with the PBE starting point and the Godby-Needs plasmon-pole model ($G_0W_0^{\textrm{GN}}$@PBE) on a set of about 80 solids. The results of the automatic convergence study provide valuable insights: we find correlations between computational parameters that can be used to further improve the automation of $GW$ calculations. Moreover, we find that the correlation between the PBE and $G_0W_0^{\textrm{GN}}$@PBE gaps is much stronger than that between the $GW$ and experimental gaps. Nevertheless, the $G_0W_0^{\textrm{GN}}$@PBE gaps still describe the experimental gaps more accurately than a linear model based on the PBE gaps.
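To make the final comparison concrete, here is a toy sketch of fitting a linear model from PBE gaps to experimental gaps and comparing its error against the $G_0W_0^{\textrm{GN}}$@PBE gaps; the arrays are placeholders standing in for the roughly 80-solid dataset, not the paper's data.

```python
import numpy as np

def compare_gap_models(pbe_gaps, gw_gaps, exp_gaps):
    """All inputs are 1D arrays of band gaps (eV) over the same solids."""
    pbe = np.asarray(pbe_gaps)
    # Least-squares linear model: exp_gap ~ slope * pbe_gap + intercept.
    slope, intercept = np.polyfit(pbe, exp_gaps, deg=1)
    mae_linear = np.mean(np.abs(slope * pbe + intercept - exp_gaps))
    mae_gw = np.mean(np.abs(np.asarray(gw_gaps) - exp_gaps))
    # The paper's finding corresponds to mae_gw < mae_linear.
    return mae_linear, mae_gw
```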
The design and development of swarms of micro-aerial vehicles (MAVs) has recently gained significant traction. Collaborative aerial swarms have potential applications in areas as diverse as surveillance and monitoring, inventory management, search and rescue, and the entertainment industry. Swarm intelligence is, by definition, distributed in nature. Yet performing experiments on truly distributed systems is not always possible, as much of the underlying ecosystem requires some sort of central control. Indeed, in experimental proofs of concept, most research relies on more traditional connectivity solutions and centralized approaches: external localization solutions such as motion capture (MOCAP) systems, visual markers, or ultra-wideband (UWB) anchors are often used, while intra-swarm solutions are often limited in terms of, e.g., range or field of view. Research and development have been supported by platforms such as the e-puck, the Kilobot, or the Crazyflie quadrotors. We believe there is a need for inexpensive platforms, such as the Crazyflie, with more advanced onboard processing capabilities and sensors, while offering scalability and robust communication and localization solutions. In the following, we present a platform, currently under development, for building towards large-scale swarms of autonomous MAVs, which leverages Wi-Fi mesh connectivity and the distributed ROS2 middleware together with UWB ranging and communication for situated communication. The platform is based on the Ryze Tello drone, a Raspberry Pi Zero W as a companion computer together with a camera module, and a Decawave DWM1001 UWB module for ranging and basic communication.
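As a minimal sketch of how a companion-computer node in this architecture might look, the following rclpy node publishes UWB ranges as ROS2 sensor_msgs/Range messages over the mesh; read_uwb_range() is a hypothetical stand-in for the DWM1001 serial readout, and the topic name and rate are assumptions.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range

def read_uwb_range():
    """Hypothetical placeholder for the DWM1001 serial interface."""
    return 1.0

class UwbRangeNode(Node):
    def __init__(self):
        super().__init__('uwb_range_node')
        self.pub = self.create_publisher(Range, 'uwb/range', 10)
        self.timer = self.create_timer(0.1, self.tick)   # 10 Hz (assumed)

    def tick(self):
        msg = Range()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.range = float(read_uwb_range())              # meters
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(UwbRangeNode())

if __name__ == '__main__':
    main()
```

Running one such node per drone lets peers consume each other's ranges through standard ROS2 discovery, with no central broker in the loop.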
In this paper, we propose an operation procedure for our previously developed in-pipe robotic system, which is used for water quality monitoring in water distribution systems (WDS). The proposed operation procedure synchronizes a wireless communication system, designed for the harsh environments of soil, water, and rock, with a multi-phase control algorithm. The new wireless control algorithm facilitates smart navigation and near real-time wireless data transmission for our in-pipe robot during operation in WDS. Smart navigation enables the robot to pass through different pipeline configurations, allowing long-distance inspection on the battery mounted on the robot. To this end, we divide the operation procedure into five steps, each of which assigns a specific motion-control phase and wireless-communication task to the robot; we describe each step and its associated algorithm in this paper. The robotic system identifies the configuration type of each pipeline segment using a pre-programmed pipeline map, provided to the robot before the operation, together with the wireless communication system. The wireless communication system includes relay nodes that perform bi-directional communication during the operation procedure. The developed wireless robotic system, together with the operation procedure, facilitates localization and navigation for the robot toward long-distance inspection in WDS.
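Purely illustrative, since the abstract does not name the five steps: a minimal skeleton pairing each step with a motion-control phase and a wireless-communication task. All interfaces here (robot, relay_network, pipeline_map) are hypothetical placeholders, not the paper's API.

```python
from enum import Enum, auto

class Step(Enum):
    """Placeholder names; the paper defines five specific steps."""
    STEP_1 = auto()
    STEP_2 = auto()
    STEP_3 = auto()
    STEP_4 = auto()
    STEP_5 = auto()

def run_operation(robot, relay_network, pipeline_map):
    """robot, relay_network, and pipeline_map are hypothetical interfaces."""
    for step in Step:
        # Look up the pipeline configuration from the pre-programmed map.
        config = pipeline_map.configuration_at(robot.position)
        # Each step binds one motion-control phase to the robot...
        robot.set_motion_phase(step, config)
        # ...and one wireless-communication task via the relay nodes.
        relay_network.transmit(robot.status())
```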