
Inertial based Integration with Transformed INS Mechanization in Earth Frame

Published by: Lubin Chang
Publication date: 2021
Research field: Information Engineering
Paper language: English





This paper proposes to use a newly derived transformed inertial navigation system (INS) mechanization to fuse INS with other complementary navigation systems. By formulating the attitude, velocity and position as a single group state on the group of double direct spatial isometries SE2(3), the transformed INS mechanization is proven to be group affine, which means that the corresponding vector error state model is trajectory-independent. To make use of the transformed INS mechanization in inertial based integration, both the right and left vector error state models are derived. INS/GPS and INS/Odometer integration are investigated as two representative cases of inertial based integration. Several application aspects of the derived error state models in these two applications are presented, including how to select the error state model, how to initialize the SE2(3) based error state covariance, and how to apply feedback correction consistent with the error state definitions. Extensive Monte Carlo simulations and land vehicle experiments are conducted to evaluate the performance of the derived error state models. The most striking advantage of the derived error state models is their ability to handle large initial attitude misalignments, which is a direct consequence of their log-linearity property. Therefore, the derived error state models can be used for the so-called attitude alignment in the two applications. Moreover, the derived right error state-space model is also well suited to long-endurance INS/Odometer integration, since its weaker dependence on the global state estimate improves filtering consistency.
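As a worked illustration of the SE2(3) formulation sketched above (the notation below is assumed for illustration, not taken verbatim from the paper), the attitude $R$, velocity $v$ and position $p$ can be embedded in a single group element, and the right and left error states differ only in the side on which the error multiplies the true state:

$\chi = \begin{bmatrix} R & v & p \\ 0_{1\times 3} & 1 & 0 \\ 0_{1\times 3} & 0 & 1 \end{bmatrix} \in SE_2(3), \qquad \eta^{r} = \hat{\chi}\,\chi^{-1}, \qquad \eta^{l} = \chi^{-1}\hat{\chi}$

The group-affine property implies that the logarithm of $\eta^{r}$ (or $\eta^{l}$) propagates linearly regardless of the trajectory, which is the log-linearity behind the tolerance to large initial attitude misalignments.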




Read also

Micro-aerial vehicles (MAVs) are becoming ubiquitous across multiple industries and application domains. Lightweight MAVs with only an onboard flight controller and a minimal sensor suite (e.g., IMU, vision, and vertical ranging sensors) have potential as mobile and easily deployable sensing platforms. When deployed from a ground robot, a key requirement is the relative localization between the ground robot and the MAV. This paper proposes a novel method for tracking MAVs in lidar point clouds. In lidar point clouds, we consider the speed and distance of the MAV to actively adapt the lidar's frame integration time and, in essence, the density and size of the point cloud to be processed. We show that this method enables more persistent and robust tracking when the speed of the MAV or its distance to the tracking sensor changes. In addition, we propose a multi-modal tracking method that relies on high-frequency scans for accurate state estimation, lower-frequency scans for robust and persistent tracking, and sub-Hz processing for trajectory and object identification. These three integration and processing modalities allow for overall accurate and robust MAV tracking while ensuring the object being tracked meets shape and size constraints.
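A minimal sketch of the speed- and distance-adaptive integration idea described above; the function name, scaling constant and bounds are assumptions for illustration, not the authors' implementation:

def frame_integration_time(distance_m, speed_mps, t_min=0.05, t_max=0.5):
    """Return a lidar frame integration time in seconds: longer for far or slow
    targets (denser point cloud), shorter for near or fast targets (less motion
    smearing). All constants are illustrative assumptions."""
    # Grow integration time with distance, shrink it with speed.
    t = t_min + 0.01 * distance_m / max(speed_mps, 0.1)
    return min(max(t, t_min), t_max)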
Jinxu Liu, Wei Gao, Zhanyi Hu (2020)
Unlike loose coupling approaches and the EKF-based approaches in the literature, we propose an optimization-based visual-inertial SLAM tightly coupled with raw Global Navigation Satellite System (GNSS) measurements, a first attempt of this kind in the literature to our knowledge. More specifically, reprojection error, IMU pre-integration error and raw GNSS measurement error are jointly minimized within a sliding window, in which the asynchronism between images and raw GNSS measurements is accounted for. In addition, issues such as marginalization, noisy measurement removal, as well as tackling vulnerable situations are also addressed. Experimental results on a public dataset in complex urban scenes show that our proposed approach outperforms state-of-the-art visual-inertial SLAM, GNSS single point positioning, as well as a loose coupling approach, including scenes mainly containing low-rise buildings and those containing urban canyons.
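Schematically, the tightly coupled objective described above jointly minimizes all residuals over the sliding-window states $\mathcal{X}$; the residual names and covariances below are illustrative, not the authors' exact formulation:

$\min_{\mathcal{X}} \; \|r_{\mathrm{marg}}(\mathcal{X})\|^{2} + \sum \|r_{\mathrm{proj}}(\mathcal{X})\|^{2}_{\Sigma_{c}} + \sum \|r_{\mathrm{IMU}}(\mathcal{X})\|^{2}_{\Sigma_{b}} + \sum \|r_{\mathrm{GNSS}}(\mathcal{X})\|^{2}_{\Sigma_{g}}$

where $r_{\mathrm{proj}}$ are the reprojection residuals, $r_{\mathrm{IMU}}$ the IMU pre-integration residuals, $r_{\mathrm{GNSS}}$ the raw GNSS measurement residuals, and $r_{\mathrm{marg}}$ the marginalization prior.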
In this work, we describe the process of teleportation between Alice in an inertial frame, and Rob who is in uniform acceleration with respect to Alice. The fidelity of the teleportation is reduced due to Davies-Unruh radiation in Rob's frame. Insofar as teleportation is a measure of entanglement, our results suggest that quantum entanglement is degraded in non-inertial frames. We discuss this reduction in fidelity for both bosonic and fermionic resources.
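For context, a standard result not spelled out in the abstract: the Davies-Unruh effect assigns a uniformly accelerated observer such as Rob a thermal bath at temperature

$T_{U} = \dfrac{\hbar a}{2\pi c k_{B}}$

so a larger proper acceleration $a$ corresponds to a hotter bath, stronger degradation of the shared entangled resource, and hence lower teleportation fidelity.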
In the 2-spinor formalism, gravity can be dealt with via curvature spinors with four spinor indices. Here we show a new effective method to express the components of curvature spinors in the rank-2 $4 \times 4$ tensor representation for gravity in a locally inertial frame. In the process we have developed a few manipulation techniques, through which the roles of each component of the Riemann curvature tensor are revealed. We define a new algebra, `sedon', whose structure is almost the same as the sedenions except for the basis multiplication rule. Finally, we also show that curvature spinors can be represented in the sedon form and observe the chiral structure in curvature spinors. A few applications of the sedon representation, which include the quaternion form of the differential Bianchi identity, are also presented.
Many robotic tasks rely on the accurate localization of moving objects within a given workspace. This information about the objects' poses and velocities is used for control, motion planning, navigation, interaction with the environment or verification. Often motion capture systems are used to obtain such a state estimate. However, these systems are costly, limited in workspace size and not suitable for outdoor usage. Therefore, we propose a lightweight and easy-to-use visual-inertial Simultaneous Localization and Mapping approach that leverages cost-efficient, paper-printable artificial landmarks, so-called fiducials. Results show that by fusing visual and inertial data, the system provides accurate estimates and is robust against fast motions and changing lighting conditions. Tight integration of the estimation of sensor and fiducial poses as well as extrinsics ensures accuracy and map consistency and avoids the requirement for pre-calibration. By providing an open source implementation and various datasets, partially with ground truth information, we enable community members to run, test, modify and extend the system either using these datasets or directly running the system on their own robotic setups.
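A minimal sketch of the fiducial-detection front end such a system builds on, assuming the classic cv2.aruco interface from older opencv-contrib-python releases and illustrative camera intrinsics (all constants below are assumptions):

import cv2
import numpy as np

camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])   # assumed pinhole intrinsics
dist_coeffs = np.zeros(5)                      # assumed zero lens distortion
marker_length = 0.16                           # assumed tag side length in metres
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def marker_poses(gray_image):
    """Detect printed fiducials and return {id: (rvec, tvec)} in the camera frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray_image, dictionary)
    if ids is None:
        return {}
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    return {int(i): (r.ravel(), t.ravel())
            for i, r, t in zip(ids.ravel(), rvecs, tvecs)}

These per-frame tag poses would then be fused with the inertial data in the estimator; the fusion itself is beyond this sketch.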