Managing uncertainty is a fundamental and critical issue in spacecraft entry guidance. This paper presents a novel approach for uncertainty propagation during entry, descent, and landing that relies on a new sum-of-squares robust verification technique. Unlike risk-based and probabilistic approaches, our technique does not rely on any probabilistic assumptions. It uses a set-based description to bound uncertainties and disturbances such as vehicle and atmospheric parameters and winds. The approach leverages a recently developed sampling-based version of sum-of-squares programming to compute regions of finite-time invariance, commonly referred to as invariant funnels. We apply this approach to a three-degree-of-freedom entry vehicle model and test it using a Mars Science Laboratory reference trajectory. We compute tight approximations of robust invariant funnels that are guaranteed to reach a goal region with increased landing accuracy while respecting realistic thermal constraints.
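To make the set-based uncertainty model concrete, the sketch below propagates the corner points of an interval uncertainty set (drag coefficient and atmospheric density multiplier) through standard 3-DOF longitudinal entry dynamics. The dynamics and the MSL-like constants are textbook approximations assumed for illustration, not the paper's exact model, and sampling corners only hints at what the funnel computation actually bounds for every point of the set.

```python
import numpy as np
from itertools import product

# Illustrative 3-DOF longitudinal entry dynamics (standard textbook form,
# not the paper's exact model). State: altitude h [m], speed v [m/s],
# flight-path angle gamma [rad]. Bank angle held fixed for simplicity.
MU, R_M = 4.2828e13, 3.3962e6           # Mars gravitational parameter, radius
RHO0, H_SCALE = 0.020, 11100.0          # exponential atmosphere (assumed)
S_REF, MASS, LD = 15.9, 3300.0, 0.24    # MSL-like reference values (assumed)

def dynamics(x, cd, rho_scale, bank=np.deg2rad(30)):
    h, v, gamma = x
    r = R_M + h
    g = MU / r**2
    rho = rho_scale * RHO0 * np.exp(-h / H_SCALE)
    D = 0.5 * rho * v**2 * S_REF * cd / MASS       # drag acceleration
    L = LD * D                                      # lift via fixed L/D
    return np.array([
        v * np.sin(gamma),
        -D - g * np.sin(gamma),
        (L * np.cos(bank)) / v + (v / r - g / v) * np.cos(gamma),
    ])

def rk4(x, dt, *p):
    k1 = dynamics(x, *p); k2 = dynamics(x + 0.5*dt*k1, *p)
    k3 = dynamics(x + 0.5*dt*k2, *p); k4 = dynamics(x + dt*k3, *p)
    return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Set-based propagation sketch: push the corners of the interval
# uncertainty set through the dynamics. The funnel computation in the
# paper bounds ALL points of the set, not just sampled corners.
x0 = np.array([125e3, 5500.0, np.deg2rad(-15.5)])
for cd, rho_scale in product([1.35, 1.55], [0.85, 1.15]):
    x = x0.copy()
    for _ in range(600):                 # 60 s at dt = 0.1 s
        x = rk4(x, 0.1, cd, rho_scale)
    print(f"cd={cd:.2f} rho*={rho_scale:.2f} -> h={x[0]/1e3:7.1f} km, "
          f"v={x[1]:7.1f} m/s")
```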
Object tracking has been broadly applied in unmanned aerial vehicle (UAV) tasks in recent years. However, existing algorithms still struggle with partial occlusion, cluttered backgrounds, and other challenging visual factors. Inspired by cutting-edge attention mechanisms, a novel object tracking framework is proposed that leverages multi-level visual attention. Three primary attention mechanisms, i.e., contextual attention, dimensional attention, and spatiotemporal attention, are integrated into the training and detection stages of a correlation filter-based tracking pipeline. As a result, the proposed tracker is equipped with robust discriminative power against challenging factors while maintaining high operational efficiency in UAV scenarios. Quantitative and qualitative experiments on two well-known benchmarks with 173 challenging UAV video sequences demonstrate the effectiveness of the proposed framework. The proposed tracking algorithm favorably outperforms 12 state-of-the-art methods, yielding a 4.8% relative gain on UAVDT and an 8.2% relative gain on UAV123@10fps over the baseline tracker while operating at a speed of $\sim$28 frames per second.
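For context, a minimal single-channel MOSSE-style correlation filter, the backbone that trackers of this kind build on, is sketched below in NumPy. The contextual, dimensional, and spatiotemporal attention modules are the paper's contribution and are not reproduced here; the patch size and the regularizer lambda are illustrative.

```python
import numpy as np

# Minimal MOSSE-style correlation filter: train in the Fourier domain so
# that correlating the filter with the patch yields a Gaussian response
# peaked at the target centre. A sketch of the generic CF backbone only.
def gaussian_label(shape, sigma=2.0):
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma**2))
    return np.fft.fft2(np.fft.ifftshift(g))           # desired response G

def train_filter(patch, G, lam=1e-2):
    F = np.fft.fft2(patch)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)  # H = G F*/(F F* + lam)

def detect(H, patch):
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    response = np.fft.fftshift(response)               # put peak at centre
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))
G = gaussian_label(patch.shape)
H = train_filter(patch, G)
print("peak at", detect(H, patch))   # -> (32, 32), the patch centre
```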
Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image-guided robot-assisted surgery is one of the promising solutions, offering significant enhancement of treatment outcomes and reducing the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of a spotlight in single microscope images. The spotlight projection mechanism is first analyzed and modeled for projection onto both a plane and a spherical surface. To test the feasibility of the proposed method, a light fiber is integrated into an instrument driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can achieve 0.5 mm accuracy in tip-to-surface distance estimation, which is within the clinically acceptable range for intraocular visual guidance.
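A minimal sketch of the plane-projection case: assuming an ideal light cone of half-angle theta leaving the fiber tip and striking a plane orthogonal to the beam axis, the spot radius grows as r = d * tan(theta), so the tip-to-surface distance follows directly from the segmented spot size. The numbers below are made up for illustration; the paper's full model also handles oblique planes and the spherical retina.

```python
import numpy as np

def tip_to_surface_distance(spot_radius_mm, half_angle_deg):
    """Distance from fiber tip to a plane orthogonal to the beam axis.

    Assumes an ideal light cone: the projected spot radius grows as
    r = d * tan(theta), so d = r / tan(theta). Oblique planes and the
    spherical retina require the paper's fuller projection model.
    """
    return spot_radius_mm / np.tan(np.deg2rad(half_angle_deg))

# Example: a 0.9 mm spot radius with a 25-degree cone half-angle
# (both values are hypothetical, chosen only for illustration).
print(f"{tip_to_surface_distance(0.9, 25.0):.2f} mm")
```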
This paper presents numerical methods for computing regions of finite-time invariance (funnels) around solutions of polynomial differential equations. First, we present a method which exactly certifies sufficient conditions for invariance despite relying on approximate trajectories from numerical integration. Our second method relaxes the constraints of the first by sampling in time. In applications, this can recover almost identical funnels but is much faster to compute. In both cases, funnels are verified using Sum-of-Squares programming to search over a family of time-varying polynomial Lyapunov functions. Initial candidate Lyapunov functions are constructed using the linearization about the trajectory, and associated time-varying Lyapunov and Riccati differential equations. The methods are compared on stabilized trajectories of a six-state model of a satellite.
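The flavor of the sampled relaxation can be shown on a toy, time-invariant problem: take a quadratic Lyapunov function from the linearization and check, at sampled points of a level set, that it decreases along the full nonlinear dynamics. The damped pendulum and all constants below are illustrative; the paper instead uses time-varying Lyapunov functions from Lyapunov/Riccati differential equations and replaces state sampling with sum-of-squares certificates.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy, time-invariant version of the verification idea: certify that
# V(x) = x' P x decreases along the nonlinear dynamics on sampled points
# of a level set {V = rho}.
def f(x):                      # damped pendulum about the stable equilibrium
    q, qd = x
    return np.array([qd, -np.sin(q) - 0.5 * qd])

A = np.array([[0.0, 1.0], [-1.0, -0.5]])    # linearization at the origin
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A'P + PA = -I

rho = 1.0
rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(2000):
    d = rng.standard_normal(2)
    x = d / np.sqrt(d @ P @ d) * np.sqrt(rho)    # point with V(x) = rho
    vdot = 2 * x @ P @ f(x)                      # dV/dt along the dynamics
    worst = max(worst, vdot)
print(f"max sampled Vdot on boundary: {worst:.4f} (negative => invariant)")
```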
This paper presents a sampling-based planning algorithm for in-hand manipulation of a grasped object using a series of external pushes. A high-level sampling-based planning framework, in tandem with a low-level inverse contact dynamics solver, effectively explores the space of continuous pushes with discrete pusher contact switch-overs. We model the frictional interaction between the gripper, the grasped object, and the pusher by discretizing complex surface/line contacts into arrays of hard frictional point contacts. The inverse dynamics problem of finding an instantaneous pusher motion that yields a desired instantaneous object motion takes the form of a mixed nonlinear complementarity problem. Building upon this dynamics solver, our planner generates a sequence of pushes that steers the object to a goal grasp. We evaluate the performance of the planner for the case of a parallel-jaw gripper manipulating different objects, both in simulation and in real experiments. Through these examples, we highlight the important properties of the planner: it respects and exploits the hybrid dynamics of contact sticking/sliding/rolling, and it economizes on discrete contact switch-overs.
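As a much simpler illustration of the complementarity structure underlying frictional contact (the paper's inverse-dynamics step solves a harder mixed *nonlinear* complementarity problem), here is a standard projected Gauss-Seidel solver for a linear complementarity problem, with a made-up two-contact example:

```python
import numpy as np

def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and w . z = 0. A standard workhorse for frictional
    contact, shown only to illustrate the complementarity structure."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding z_i
            z[i] = max(0.0, -r / M[i, i])          # project onto z_i >= 0
    return z, M @ z + q

# Two-contact toy example with a (made-up) positive-definite contact matrix.
M = np.array([[2.0, 0.5], [0.5, 1.5]])
q = np.array([-1.0, 0.3])
z, w = pgs_lcp(M, q)
print("z =", z, " w =", w, " complementarity =", z @ w)
```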
We introduce a robust, real-time, high-resolution human video matting method that achieves new state-of-the-art performance. Our method is much lighter than previous approaches and can process 4K video at 76 FPS and HD video at 104 FPS on an Nvidia GTX 1080Ti GPU. Unlike most existing methods that perform video matting frame by frame as independent images, our method uses a recurrent architecture to exploit temporal information in videos and achieves significant improvements in temporal coherence and matting quality. Furthermore, we propose a novel training strategy that trains our network jointly on both matting and segmentation objectives. This significantly improves our model's robustness. Our method does not require any auxiliary inputs such as a trimap or a pre-captured background image, so it can be widely applied to existing human matting applications.
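As an illustrative sketch of the recurrent idea (not the paper's exact architecture), a convolutional GRU cell in PyTorch shows how a hidden state can carry temporal information across frames:

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU, the kind of recurrent block that lets a
    video-matting network propagate information between frames. All sizes
    below are arbitrary and chosen only for illustration."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=pad)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)                        # update / reset gates
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde                 # new hidden state

# Run over a short clip: the state persists between frames.
cell = ConvGRUCell(16)
h = torch.zeros(1, 16, 64, 64)
for frame_feat in torch.randn(8, 1, 16, 64, 64):         # 8 frames of features
    h = cell(frame_feat, h)
print(h.shape)  # torch.Size([1, 16, 64, 64])
```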