
Enhancing Inertial Navigation Performance via Fusion of Classical and Quantum Accelerometers

Published by: Simon Haine
Publication date: 2021
Research field: Physics
Paper language: English





While quantum accelerometers sense with extremely low drift and low bias, their practical sensing capabilities face two limitations compared with classical accelerometers: a lower sample rate due to cold atom interrogation time, and a reduced dynamic range due to signal phase wrapping. In this paper, we propose a maximum likelihood probabilistic data fusion method, under which the actual phase of the quantum accelerometer can be unwrapped by fusing it with the output of a classical accelerometer on the platform. Consequently, the proposed method enables quantum accelerometers to be applied in practical inertial navigation scenarios with enhanced performance. The recovered measurement from the quantum accelerometer is also used to re-calibrate the classical accelerometer. We demonstrate the enhanced error performance achieved by the proposed fusion method using a simulated 1D inertial navigation scenario. We conclude with a discussion on fusion error and potential solutions.
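As an illustration of the fusion idea, the sketch below implements maximum-likelihood phase unwrapping for a simulated quantum accelerometer: the wrapped interferometer phase admits a ladder of acceleration candidates spaced one fringe apart, and the classical accelerometer reading selects the most likely branch. This is a minimal sketch under assumed parameters (effective wavevector, interrogation time, noise levels), not the paper's implementation.

```python
import numpy as np

# Minimal sketch of maximum-likelihood fusion; k_eff, T, and the noise
# levels are assumed illustrative values, not the paper's parameters.
K_EFF = 1.6e7          # effective wavevector [rad/m] (assumed)
T_INT = 1e-3           # interrogation time [s] (assumed)
S = K_EFF * T_INT**2   # scale factor: phase [rad] per unit acceleration [m/s^2]

def unwrap_with_classical(phi_wrapped, a_classical, sigma_c, n_max=50):
    """Choose the 2*pi branch whose implied acceleration is most likely
    under a Gaussian model of the classical accelerometer's noise."""
    n = np.arange(-n_max, n_max + 1)
    candidates = (phi_wrapped + 2 * np.pi * n) / S   # fringe-spaced ladder
    log_lik = -0.5 * ((candidates - a_classical) / sigma_c) ** 2
    return candidates[np.argmax(log_lik)]

rng = np.random.default_rng(0)
a_true = 2.5                                  # m/s^2, far beyond +/- pi/S
phi = np.angle(np.exp(1j * S * a_true))       # wrapped quantum phase
a_cls = a_true + 0.03 + rng.normal(0, 0.02)   # classical reading with bias
a_q = unwrap_with_classical(phi, a_cls, sigma_c=0.05)
print(a_q)          # recovers a_true
print(a_cls - a_q)  # bias estimate, usable to recalibrate the classical unit
```

Note that the branch choice is only reliable while the classical error stays below half a fringe spacing (pi/S), which bounds when fusion of this kind can succeed.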




Read also

Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length, and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches, and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies.
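For a sense of how such problems reach an annealer, the toy sketch below poses a maximum-clique instance (a standard reduction used for molecular-similarity / maximum-common-subgraph tasks) as a QUBO. The graph and penalty weight are made up for illustration, and a brute-force loop stands in for the quantum hardware; on a real device each logical variable would additionally be minor-embedded into a ferromagnetically coupled chain of qubits, whose coupling strength is the quantity the abstract proposes to set theoretically rather than empirically.

```python
import itertools
import numpy as np

# Toy QUBO for maximum clique: reward selected vertices, penalize
# selecting any non-adjacent pair. Graph and penalty are illustrative.
n = 6
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3)}
P = 2.0  # penalty weight; must exceed the per-vertex reward (1.0)

Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = -1.0                      # reward for including a vertex
for i, j in itertools.combinations(range(n), 2):
    if (i, j) not in edges and (j, i) not in edges:
        Q[i, j] = P                     # clique constraint on non-edges

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("largest-clique indicator:", best)
```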
Quadrature squeezing of light is investigated in a hybrid atom-optomechanical system comprising a cloud of two-level atoms and a movable mirror mediated by a single-mode cavity field. When the system is at high temperatures with quadrature fluctuations of light much above the standard quantum limit (SQL), excitation counting on the collective atomic state can effectively reduce the light noise close to the SQL. When the system is at low temperatures, considerable squeezing of light below the SQL is found at steady state. The squeezing is enhanced by simply increasing the atom-light coupling strength with the laser power optimized close to the unstable regime, and further noise reduction is achieved by decreasing various losses in the system. The presence of atoms and excitation counting on the atoms lessen the limitation of thermal noise, and the squeezing can be achieved at environment temperatures of the order of a kelvin. The nonclassicality of the light, embodied by the negative distributions of the Wigner function, is also studied by making non-Gaussian measurements on the atoms. It is shown that with feasible parameters, excitation counting on the atoms is effective in inducing strong optical nonclassicality.
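For reference, the standard quadrature conventions behind the abstract's SQL (textbook definitions, not specific to this paper):

```latex
\[
  \hat{X} = \frac{\hat{a} + \hat{a}^\dagger}{\sqrt{2}}, \qquad
  \hat{Y} = \frac{\hat{a} - \hat{a}^\dagger}{i\sqrt{2}}, \qquad
  [\hat{X}, \hat{Y}] = i,
\]
\[
  \langle \Delta\hat{X}^2 \rangle \, \langle \Delta\hat{Y}^2 \rangle
  \;\ge\; \tfrac{1}{4},
  \qquad
  \text{SQL (vacuum): } \langle \Delta\hat{X}^2 \rangle = \tfrac{1}{2},
\]
```

so "squeezing below the SQL" means driving one quadrature variance below 1/2 at the expense of its conjugate.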
An application of quantum cloning to optimally interface a quantum system with a classical observer is presented; in particular, we describe a procedure to perform a minimal-disturbance measurement on a single qubit by adopting a 1->2 cloning machine followed by a generalized measurement on a single clone and the anti-clone, or on the two clones. Such a scheme has been applied to enhance the transmission fidelity over a lossy quantum channel.
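As background (a standard result, not derived in this abstract), the optimal universal 1->2 qubit cloner copies an unknown input state with per-clone fidelity

```latex
\[
  F_{\text{clone}} = \frac{5}{6},
\]
```

which sets the scale for how gently a single qubit can be probed through its clones in schemes of this kind.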
Photonic quantum computers provide several benefits over the discrete qubit-based paradigm of quantum computing. By using the power of continuous-variable computing we build an anomaly detection model to use on searches for New Physics. Our model uses Gaussian Boson Sampling, a $\#$P-hard problem and thus not efficiently accessible to classical devices. This is used to create feature vectors from graph data, a natural format for representing data of high-energy collision events. A simple K-means clustering algorithm is used to provide a baseline method of classification. We then present a novel method of anomaly detection, combining the use of Gaussian Boson Sampling and a quantum extension to K-means known as Q-means. This is found to give equivalent results compared to the classical clustering version while also reducing the complexity, with respect to the sample's feature-vector length, from $\mathcal{O}(N)$ to $\mathcal{O}(\log N)$. Due to the speed of the sampling algorithm and the feasibility of near-term photonic quantum devices, anomaly detection at the trigger level can become practical in future LHC runs.
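The classical baseline in the pipeline above is easy to sketch: cluster event feature vectors with K-means and score each event by its distance to the nearest centroid. In the sketch below the Gaussian Boson Sampling feature extraction is replaced by random placeholder vectors (purely an assumption for illustration); the clustering itself uses the standard scikit-learn API.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder features stand in for GBS-derived feature vectors; real
# features would come from sampling the event graphs on a GBS device.
rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(500, 8))   # "Standard Model" events
signal = rng.normal(4.0, 1.0, size=(20, 8))        # injected anomalies
X = np.vstack([background, signal])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(background)
score = km.transform(X).min(axis=1)          # distance to nearest centroid
threshold = np.quantile(score[:500], 0.99)   # cut set on background only
print("flagged anomalies:", int((score[500:] > threshold).sum()), "of 20")
```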
We describe a general methodology for enhancing the efficiency of adiabatic quantum computation (AQC). It consists of homotopically deforming the original Hamiltonian surface in a way that the redistribution of the Gaussian curvature weakens the effect of the anti-crossing, thus yielding the desired improvement. Our approach is not perturbative but instead is built on our previous global description of AQC in the language of Morse theory. Through the homotopy deformation we witness the birth and death of critical points whilst, in parallel, the Gauss-Bonnet theorem reshuffles the curvature around the changing set of critical points. Therefore, by creating enough critical points around the anti-crossing, the total curvature, which was initially centered at the original anti-crossing, gets redistributed around the new neighbouring critical points, which weakens its severity and so improves the speedup of the AQC. We illustrate this on two examples taken from the literature.
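For orientation, the standard AQC setting the abstract builds on (textbook material, not the paper's contribution) is an interpolating Hamiltonian whose minimum spectral gap at the anti-crossing controls the required runtime:

```latex
\[
  H(s) = (1 - s)\, H_0 + s\, H_1, \qquad s = t/T \in [0, 1],
\]
\[
  T \;\gtrsim\; \max_{s} \frac{\lVert \partial_s H(s) \rVert}{\Delta(s)^2},
  \qquad \Delta(s) = E_1(s) - E_0(s),
\]
```

so widening the minimum gap at the anti-crossing, here via homotopy deformation of the Hamiltonian surface, directly reduces the runtime demanded by the adiabatic condition.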
