Goal: This paper presents an algorithm for accurately estimating pelvis, thigh, and shank kinematics during walking using only three wearable inertial sensors. Methods: The algorithm makes novel use of a constrained Kalman filter (CKF). The algorithm iterates through the prediction update (kinematic equation), the measurement update (pelvis position pseudo-measurements, zero-velocity update, flat-floor assumption, and covariance limiter), and the constraint update (formulation of hinged knee joints and ball-and-socket hip joints). Results: The algorithm was evaluated, using an optical motion capture-based sensor-to-segment calibration, on nine participants ($7$ men and $2$ women, weight $63.0 \pm 6.8$ kg, height $1.70 \pm 0.06$ m, age $24.6 \pm 3.9$ years) with no known gait or lower-body biomechanical abnormalities who walked within a $4 \times 4$ m$^2$ capture area. The algorithm tracked motion relative to the mid-pelvis origin with mean position and orientation (no bias) root-mean-square errors (RMSEs) of $5.21 \pm 1.3$ cm and $16.1 \pm 3.2^\circ$, respectively. The sagittal knee and hip joint angle RMSEs (no bias) were $10.0 \pm 2.9^\circ$ and $9.9 \pm 3.2^\circ$, respectively, while the corresponding correlation coefficient (CC) values were $0.87 \pm 0.08$ and $0.74 \pm 0.12$. Conclusion: The CKF-based algorithm was able to track the 3D pose of the pelvis, thighs, and shanks using only three inertial sensors worn on the pelvis and shanks. Significance: Due to the Kalman-filter-based algorithm's low computational cost and the relative convenience of using only three wearable sensors, gait parameters can be computed in real time and remotely for long-term gait monitoring. Furthermore, the system can be used to inform real-time gait assistive devices.
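For readers unfamiliar with the constraint update, the following Python sketch illustrates one generic constrained-Kalman-filter cycle (predict, measure, then project the estimate onto a linear equality constraint). It is a minimal illustration only: the matrices, the toy constraint, and the projection form are placeholder assumptions, not the paper's biomechanical model.

```python
# Minimal sketch of one constrained Kalman filter cycle: predict, measure,
# then project the estimate onto a linear equality constraint D x = d.
# All matrices below are illustrative placeholders, not the paper's model.
import numpy as np

def ckf_step(x, P, F, Q, z, H, R, D, d):
    # Prediction update (the kinematic equation would define F in practice).
    x = F @ x
    P = F @ P @ F.T + Q

    # Measurement update (e.g., pseudo-measurements or a zero-velocity update).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P

    # Constraint update: minimum-variance projection onto {x : D x = d},
    # standing in here for linearised joint constraints.
    Kc = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    x = x - Kc @ (D @ x - d)
    return x, P

# Toy usage: a 2-state random walk with the constraint x0 = x1.
x, P = np.zeros(2), np.eye(2)
F, Q, H, R = np.eye(2), 0.01 * np.eye(2), np.eye(2), 0.1 * np.eye(2)
D, d = np.array([[1.0, -1.0]]), np.array([0.0])
x, P = ckf_step(x, P, F, Q, np.array([1.0, 0.9]), H, R, D, d)
print(x)  # both components pulled to a common value by the constraint
```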
This paper presents an algorithm that makes novel use of a Lie group representation of position and orientation alongside a constrained extended Kalman filter (CEKF) to accurately estimate pelvis, thigh, and shank kinematics during walking using only three wearable inertial sensors. The algorithm iterates through the prediction update (kinematic equation), measurement update (pelvis height, zero-velocity update, flat-floor assumption, and covariance limiter), and constraint update (formulation of hinged knee joints and ball-and-socket hip joints). The paper also describes a novel Lie group formulation of the assumptions implemented in these measurement and constraint updates. Evaluation of the algorithm on nine healthy subjects who walked freely within a $4 \times 4$ m$^2$ room shows that the knee and hip joint angle root-mean-square errors (RMSEs) in the sagittal plane for free walking were $10.5 \pm 2.8^\circ$ and $9.7 \pm 3.3^\circ$, respectively, while the correlation coefficients (CCs) were $0.89 \pm 0.06$ and $0.78 \pm 0.09$, respectively. The evaluation demonstrates a promising application of the Lie group representation to inertial motion capture under a reduced-sensor-count configuration, improving the estimates (i.e., joint angle RMSEs and CCs) for dynamic motion and enabling better convergence for our non-linear biomechanical constraints. To further improve performance, additional information relating the pelvis and ankle kinematics is needed.
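The core Lie group idea can be illustrated independently of the full estimator: orientation states live on SO(3) and are propagated through the exponential map, while the uncertainty lives in the tangent space. The sketch below is an assumed, generic example of such a prediction step (gyroscope integration with a body-frame error definition); it is not taken from the paper.

```python
# Minimal sketch (assumed example, not the paper's estimator) of the Lie group
# idea: the orientation state lives on SO(3) and is propagated with the
# exponential map, while the uncertainty lives in the tangent space R^3.
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map R^3 -> SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def predict_orientation(R, gyro, dt, P, Q):
    """One prediction step: right-multiply by the exponential of the
    body-frame angular rate; propagate covariance in the tangent space."""
    R_next = R @ so3_exp(gyro * dt)
    # For a body-frame error R = R_hat * Exp(eta), the error propagates with
    # F = Exp(-gyro * dt) (the adjoint of the inverse increment).
    F = so3_exp(-gyro * dt)
    P_next = F @ P @ F.T + Q * dt
    return R_next, P_next

# Toy usage: integrate a constant 90 deg/s yaw rate for one second.
R, P = np.eye(3), 0.01 * np.eye(3)
gyro, dt, Q = np.array([0.0, 0.0, np.pi / 2]), 0.01, 1e-4 * np.eye(3)
for _ in range(100):
    R, P = predict_orientation(R, gyro, dt, P, Q)
print(np.round(R, 3))   # approximately a 90 degree rotation about z
```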
This paper presents an algorithm that makes novel use of distance measurements alongside a constrained Kalman filter to accurately estimate pelvis, thigh, and shank kinematics for both legs during walking and other body movements using only three wearable inertial measurement units (IMUs). The distance measurement formulation also assumes hinged knee joints and constant body segment lengths, helping produce estimates that are near or in the constraint space for better estimator stability. Simulated experiments show that inter-IMU distance measurement is indeed a promising new source of information for improving the pose estimation of inertial motion capture systems under a reduced-sensor-count configuration. Furthermore, the experiments show that performance improved dramatically for dynamic movements even at high noise levels (e.g., $\sigma_{dist} = 0.2$ m), and that acceptable performance for normal walking was achieved at $\sigma_{dist} = 0.1$ m. Nevertheless, further validation is recommended using actual distance measurement sensors.
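To make the distance measurement model concrete, the sketch below shows one assumed way an inter-IMU distance reading could enter an extended Kalman filter update: the predicted measurement is the Euclidean distance between two estimated sensor positions, and its Jacobian is the unit vector between them. The state layout, indices, and noise values are hypothetical, not the paper's implementation.

```python
# Minimal sketch of an EKF update driven by a noisy inter-IMU distance.
import numpy as np

def distance_measurement(p_a, p_b):
    """Predicted distance between two points and its Jacobian rows."""
    diff = p_a - p_b
    dist = np.linalg.norm(diff)
    u = diff / dist                  # assumes the two sensors are not co-located
    return dist, u, -u               # d(dist)/d(p_a) = u, d(dist)/d(p_b) = -u

def ekf_distance_update(x, P, z, idx_a, idx_b, sigma_dist=0.1):
    """Scalar EKF update with a noisy distance z between the 3D positions
    stored at x[idx_a:idx_a+3] and x[idx_b:idx_b+3]."""
    p_a, p_b = x[idx_a:idx_a + 3], x[idx_b:idx_b + 3]
    d_hat, Ja, Jb = distance_measurement(p_a, p_b)
    H = np.zeros((1, len(x)))
    H[0, idx_a:idx_a + 3], H[0, idx_b:idx_b + 3] = Ja, Jb
    S = (H @ P @ H.T)[0, 0] + sigma_dist ** 2
    K = (P @ H.T) / S                # Kalman gain, shape (n, 1)
    x = x + (K * (z - d_hat)).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: pelvis and ankle positions stacked in one state vector.
x = np.array([0.0, 0.0, 1.0, 0.1, 0.0, 0.1])   # [p_pelvis, p_ankle]
P = 0.05 * np.eye(6)
x, P = ekf_distance_update(x, P, z=0.95, idx_a=0, idx_b=3, sigma_dist=0.1)
print(np.round(x, 3))   # both positions nudged to better match the distance
```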
Goal: This paper presents an algorithm for estimating pelvis, thigh, shank, and foot kinematics during walking using only two or three wearable inertial sensors. Methods: The algorithm makes novel use of a Lie-group-based extended Kalman filter. The algorithm iterates through the prediction update (kinematic equation), measurement update (pelvis position pseudo-measurements, zero-velocity update, and flat-floor assumption), and constraint update (hinged knee and ankle joints, constant leg lengths). Results: The inertial motion capture algorithm was extensively evaluated on two datasets showing its performance against two standard benchmark approaches in optical motion capture (i.e., plug-in gait, commonly used in gait analysis, and a kinematic fit, commonly used in animation, robotics, and musculoskeletal simulation), giving insight into the similarities and differences between these approaches used in different application areas. The overall mean body segment position (relative to the mid-pelvis origin) and orientation error magnitudes of our algorithm ($n=14$ participants) for free walking were $5.93 \pm 1.33$ cm and $13.43 \pm 1.89^\circ$ when using three IMUs placed on the feet and pelvis, and $6.35 \pm 1.20$ cm and $12.71 \pm 1.60^\circ$ when using only two IMUs placed on the feet. Conclusion: The algorithm was able to track the joint angles in the sagittal plane for straight walking well, but requires improvement for unscripted movements (e.g., turning around, side steps), especially for dynamic movements or when considering clinical applications. Significance: This work has brought us closer to comprehensive remote gait monitoring using IMUs on the shoes. The low computational cost also suggests that it can be used in real time with gait assistive devices.
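The zero-velocity and flat-floor assumptions can be written as ordinary pseudo-measurements. The sketch below illustrates this for a single foot state containing position and velocity; the stance detector, thresholds, and noise values are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of zero-velocity and flat-floor pseudo-measurements applied
# to one foot state x = [p (3), v (3)] during detected stance.
import numpy as np

def stance_detected(gyro, accel, g=9.81, gyro_thresh=0.5, acc_thresh=0.8):
    """Crude stance detector: low angular rate and near-gravity acceleration."""
    return (np.linalg.norm(gyro) < gyro_thresh and
            abs(np.linalg.norm(accel) - g) < acc_thresh)

def zupt_flat_floor_update(x, P, floor_z=0.0, sigma_zupt=0.01, sigma_floor=0.01):
    """Kalman update with pseudo-measurements v = 0 and p_z = floor_z."""
    H = np.zeros((4, 6))
    H[0:3, 3:6] = np.eye(3)          # velocity is (pseudo-)measured as zero
    H[3, 2] = 1.0                    # vertical position is measured as floor_z
    z = np.array([0.0, 0.0, 0.0, floor_z])
    R = np.diag([sigma_zupt ** 2] * 3 + [sigma_floor ** 2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Toy usage: a foot state drifting slightly above the floor during stance.
x = np.array([1.0, 0.5, 0.03, 0.05, -0.02, 0.01])   # [position, velocity]
P = 0.02 * np.eye(6)
if stance_detected(gyro=np.array([0.1, 0.0, 0.05]),
                   accel=np.array([0.1, 0.2, 9.7])):
    x, P = zupt_flat_floor_update(x, P)
print(np.round(x, 3))   # velocity and foot height pulled toward zero
```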
This paper proposes a method to navigate a mobile robot by estimating its state over a number of distributed sensor networks (DSNs) such that it can successively accomplish a sequence of tasks, i.e., its state enters each targeted set and stays inside for no less than the desired time, under a resource-aware, time-efficient, and computation- and communication-constrained setting. We propose a new robot state estimation and navigation architecture, which integrates an event-triggered task-switching feedback controller for the robot and a two-time-scale distributed state estimator for each sensor. The architecture has three major advantages over existing approaches: first, in each task only one DSN is active for sensing and estimating the robot state, and for different tasks the robot can switch the active DSN by taking resource saving and system performance into account; second, the robot only needs to communicate with one active sensor at any given time to obtain its state information from the active DSN; third, no online optimization is required. With the controller, the robot is able to accomplish a task by following a reference trajectory and switch to the next task when an event-triggered condition is fulfilled. With the estimator, each active sensor is able to estimate the robot state. Under proper conditions, we prove that the state estimation error and the trajectory tracking deviation are upper bounded by two time-varying sequences, respectively, which play an essential role in the event-triggered condition. Furthermore, we find a sufficient condition for accomplishing a task and provide an upper bound on the running time of the task. Numerical simulations of an indoor robot's localization and navigation are provided to validate the proposed architecture.
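The following sketch gives one illustrative reading of the event-triggered switching rule: the robot switches to the next task once its state estimate has stayed inside the target set, shrunk by the current estimation-error and tracking-deviation bounds, for the required dwell time. The set shape, bound sequences, and thresholds are hypothetical and do not reproduce the paper's exact conditions.

```python
# Minimal sketch of an event-triggered task switch using shrinking target sets.
import numpy as np

def inside_target(x_hat, center, radius, est_bound, track_bound):
    """True if the estimate is in the target ball shrunk by both bounds."""
    margin = radius - est_bound - track_bound
    return margin > 0 and np.linalg.norm(x_hat - center) <= margin

def event_triggered_switch(x_hat_traj, bounds_traj, center, radius, dwell_steps):
    """Return the first step at which the switch is triggered, else None."""
    run = 0
    for k, (x_hat, (est_b, trk_b)) in enumerate(zip(x_hat_traj, bounds_traj)):
        run = run + 1 if inside_target(x_hat, center, radius, est_b, trk_b) else 0
        if run >= dwell_steps:
            return k
    return None

# Toy usage: a 2D estimate converging to the target centre while the
# time-varying error bounds decay.
steps = 50
x_hat_traj = [np.array([2.0, 2.0]) * np.exp(-0.1 * k) for k in range(steps)]
bounds_traj = [(0.3 * np.exp(-0.05 * k), 0.2 * np.exp(-0.05 * k))
               for k in range(steps)]
k_switch = event_triggered_switch(x_hat_traj, bounds_traj,
                                  center=np.zeros(2), radius=0.5,
                                  dwell_steps=10)
print(k_switch)   # step index at which the next task would be triggered
```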
Over the past several years, the electrocardiogram (ECG) has been investigated for its uniqueness and potential to discriminate between individuals. This paper discusses how this discriminatory information can help in continuous user authentication by a wearable chest strap which uses dry electrodes to obtain a single-lead ECG signal. To the best of the authors' knowledge, this is the first such work which deals with continuous authentication using a genuine wearable device, as most prior works have either used medical equipment employing gel electrodes to obtain an ECG signal or have obtained an ECG signal through electrode positions that would not be feasible using a wearable device. Prior works have also mainly dealt with using the ECG signal for identification rather than verification, or have used it for discrete rather than continuous authentication. This paper presents a novel algorithm which uses QRS detection, weighted averaging, the Discrete Cosine Transform (DCT), and a Support Vector Machine (SVM) classifier to determine whether the wearer of the device should be positively verified. No intrusion attempts were successful when the system was tested on a database consisting of 33 subjects.
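The verification pipeline can be sketched end to end on synthetic data: QRS detection, beat averaging, DCT feature extraction, and an SVM decision. The sketch below is illustrative only; the synthetic ECG generator, thresholds, feature length, and SVM settings are assumptions, not the paper's tuned system.

```python
# Minimal sketch (toy pipeline on synthetic data, not the paper's tuned
# system): detect R peaks (QRS), average the surrounding beats, keep the
# leading DCT coefficients as features, and verify the wearer with an SVM.
import numpy as np
from scipy.signal import find_peaks
from scipy.fft import dct
from sklearn.svm import SVC

FS = 250  # assumed sampling rate (Hz)

def synthetic_ecg(subject, seed, n_beats=40):
    """Crude synthetic ECG whose beat morphology depends on the subject."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_beats * FS) / FS
    r_width = 0.010 + 0.002 * subject
    t_amp = 0.2 + 0.1 * subject
    ecg = np.zeros_like(t)
    for b in range(n_beats):
        ecg += np.exp(-((t - (b + 0.5)) ** 2) / (2 * r_width ** 2))        # R wave
        ecg += t_amp * np.exp(-((t - (b + 0.62)) ** 2) / (2 * 0.03 ** 2))  # T wave
    return ecg + 0.05 * rng.standard_normal(len(t))

def beat_features(ecg, window=0.3, n_dct=30):
    """QRS detection, beat averaging, and DCT feature extraction."""
    peaks, _ = find_peaks(ecg, height=0.6, distance=int(0.4 * FS))
    half = int(window * FS / 2)
    beats = [ecg[p - half:p + half] for p in peaks
             if p - half >= 0 and p + half <= len(ecg)]
    template = np.mean(beats, axis=0)
    return dct(template, norm='ortho')[:n_dct]

# Enrol subject 0 against three impostor subjects, then verify a new recording.
X = [beat_features(synthetic_ecg(0, seed)) for seed in range(3)] + \
    [beat_features(synthetic_ecg(s, seed=10 + s)) for s in (1, 2, 3)]
y = [1, 1, 1, 0, 0, 0]                      # 1 = genuine wearer, 0 = impostor
clf = SVC(kernel='linear').fit(X, y)
print(clf.predict([beat_features(synthetic_ecg(0, seed=99))]))  # expect [1]
```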