
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

Posted by Helge Rhodin
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less, egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose-estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, and also in crowded scenes.
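
To make the hybrid idea concrete, below is a minimal Python sketch of a detection-weighted reprojection energy, assuming hypothetical `skeleton_fk` (pose parameters to 3D joints) and `project_fisheye` (3D point to fisheye pixel) callables; it illustrates the generative-plus-detector combination in spirit, not the authors' actual formulation.

```python
import numpy as np

def hybrid_pose_energy(theta, detections, project_fisheye, skeleton_fk,
                       w_det=1.0, w_prior=0.1):
    """Toy energy combining ConvNet part detections with a generative
    reprojection term (a sketch of the paper's hybrid idea, not its
    actual objective).

    theta           -- skeleton pose parameters, shape (D,)
    detections      -- per camera: dict joint name -> (2D point, confidence)
    project_fisheye -- hypothetical: (cam index, 3D point) -> 2D pixel
    skeleton_fk     -- hypothetical: theta -> dict joint name -> 3D point
    """
    theta = np.asarray(theta, dtype=float)
    joints3d = skeleton_fk(theta)                  # pose -> 3D joint centers
    e = 0.0
    for cam in (0, 1):                             # the stereo fisheye pair
        for name, p3 in joints3d.items():
            if name not in detections[cam]:
                continue                           # joint not detected here
            p2, conf = detections[cam][name]
            r = project_fisheye(cam, p3) - np.asarray(p2, dtype=float)
            e += w_det * conf * float(r @ r)       # detection-weighted term
    e += w_prior * float(theta @ theta)            # simple pose regularizer
    return e
```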




Read also

Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion. This makes capture independent of a confined volume, but requires substantial, often constraining, and hard-to-set-up body instrumentation. We therefore propose a new method for real-time, marker-less, egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose-estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, and also in crowded scenes with many people in close vicinity. The captured user can move around freely, which enables reconstruction of larger-scale activities and is particularly useful in virtual reality for roaming and interacting freely while seeing one's fully motion-captured virtual body.
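
Since everything is observed through fisheye lenses, an explicit fisheye projection sits at the core of any such generative framework. A small sketch of the equidistant model r = f*theta, one common fisheye model (the paper's calibrated camera model may differ):

```python
import numpy as np

def project_equidistant(P, f, cx, cy):
    """Equidistant fisheye projection r = f * theta (a common fisheye
    model, assumed here for illustration).

    P  -- 3D point (x, y, z) in the camera frame, z along the optical axis
    f  -- focal length in pixels; (cx, cy) -- principal point
    """
    x, y, z = P
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant image radius
    return np.array([cx + r * np.cos(phi), cy + r * np.sin(phi)])

# A point 30 degrees off-axis lands f * pi/6 ~ 157 px from the center:
print(project_equidistant((0.5, 0.0, np.sqrt(3) / 2), 300.0, 640.0, 640.0))
```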
Lan Xu, Lu Fang, Wei Cheng (2016)
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation marker-less motion-capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth from the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate competent and plausible surface and motion reconstruction results.
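
The linearize-and-minimize scheme named in the abstract is the classic Gauss-Newton iteration. A generic sketch with a toy curve-fitting example (FlyCap's actual surface-tracking energy and variables are more involved):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20, damping=1e-6):
    """Minimize ||residual(x)||^2 by repeatedly linearizing and solving
    the normal equations (J^T J) dx = -J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + damping * np.eye(x.size)  # tiny damping for stability
        x = x + np.linalg.solve(H, -J.T @ r)
    return x

# Toy usage: fit y = a * exp(b * t) to exact samples of a = 2, b = 1.5.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(res, jac, x0=[1.0, 1.0]))  # converges near [2.0, 1.5]
```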
Yuxiang Zhang, Zhe Li, Liang An (2021)
Multi-person total motion capture is extremely challenging when it comes to handling severe occlusions, different reconstruction granularities from body to face and hands, drastically changing observation scales and fast body movements. To overcome these challenges, we contribute a lightweight total motion capture system for multi-person interactive scenarios using only sparse multi-view cameras. Through a novel hand and face bootstrapping algorithm, our method is capable of efficient localization and accurate association of the hands and faces even under severe occlusion. We leverage both pose regression and keypoint detection methods, and further propose a unified two-stage parametric fitting method for achieving pixel-aligned accuracy. Moreover, for extremely self-occluded poses and close interactions, a novel feedback mechanism is proposed to propagate the pixel-aligned reconstructions into the next frame for more accurate association. Overall, we propose the first lightweight total capture system and achieve fast, robust and accurate multi-person total motion capture performance. The results and experiments show that our method achieves more accurate results than existing methods under sparse-view setups.
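
To illustrate the flavour of the second, pixel-aligned fitting stage, here is a hedged sketch that refines regressed parameters against confidence-weighted 2D keypoints; `project_joints` is a hypothetical projection function, and the finite-difference gradient descent is a placeholder for the authors' actual optimizer:

```python
import numpy as np

def refine_params(init_params, keypoints2d, conf, project_joints,
                  steps=100, lr=1e-2, eps=1e-4):
    """Second-stage refinement: start from regressed parameters, then
    minimize confidence-weighted 2D reprojection error (illustrative
    only; not the paper's actual fitting procedure).

    project_joints -- hypothetical: params -> (J, 2) projected joints
    keypoints2d    -- detected keypoints, shape (J, 2); conf -- shape (J,)
    """
    p = np.asarray(init_params, dtype=float)

    def loss(q):
        d = project_joints(q) - keypoints2d
        return float(np.sum(conf[:, None] * d * d))

    for _ in range(steps):
        g = np.zeros_like(p)
        for i in range(p.size):                  # numerical gradient
            dp = np.zeros_like(p)
            dp[i] = eps
            g[i] = (loss(p + dp) - loss(p - dp)) / (2.0 * eps)
        p -= lr * g                              # step toward the
    return p                                     # pixel-aligned fit
```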
Malnutrition is a major public health concern in low- and middle-income countries (LMICs). Understanding food and nutrient intake across communities, households and individuals is critical to the development of health policies and interventions. To ease the conduct of large-scale dietary assessments, we propose an intelligent passive food intake assessment system using egocentric cameras, in particular for households in Ghana and Uganda. Algorithms are first designed to remove redundant images to minimise storage. At run time, deep learning-based semantic segmentation is applied to recognise multiple food types, and newly designed handcrafted features are extracted for further monitoring of consumed food weight. Comprehensive experiments are conducted to validate our methods on an in-the-wild dataset captured under settings that simulate the unique LMIC conditions, with participants of Ghanaian and Kenyan origin eating common Ghanaian/Kenyan dishes. To demonstrate the efficacy, experienced dietitians are involved in this research to perform visual portion-size estimation, and their predictions are compared to our proposed method. The promising results show that our method is able to reliably monitor food intake and give feedback on users' eating behaviour, which provides guidance for dietitians in regular dietary assessment.
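
For the redundant-image removal step, a minimal sketch under an assumed novelty criterion (mean absolute pixel difference against the last kept frame; the abstract does not state the actual rule):

```python
import numpy as np

def drop_redundant(frames, threshold=8.0):
    """Keep only frames that differ enough from the last kept one; the
    threshold and difference measure are assumptions for illustration.

    frames -- iterable of grayscale images as uint8 numpy arrays
    """
    kept, last = [], None
    for idx, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.mean(np.abs(f - last)) > threshold:
            kept.append(idx)   # sufficiently novel frame: keep it
            last = f
    return kept

# Identical frames collapse to one; changed frames are kept:
a = np.zeros((4, 4), np.uint8)
b = np.full((4, 4), 200, np.uint8)
print(drop_redundant([a, a, b, b, a]))  # -> [0, 2, 4]
```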
In this paper, we introduce a moving object detection algorithm for fisheye cameras used in autonomous driving. We reformulate the three constraints commonly used in rectilinear images (the epipolar, positive-depth and positive-height constraints) to spherical coordinates, which are invariant to the specific camera configuration once the calibration is known. One of the main challenging use cases in autonomous driving is detecting parallel moving objects, which suffer from motion-parallax ambiguity. To alleviate this, we formulate an additional fourth constraint, called the anti-parallel constraint, which enables detection of objects whose motion mirrors that of the ego-vehicle. We analyze the proposed algorithm in different scenarios and demonstrate that it works effectively while operating directly on fisheye images.
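
The epipolar constraint, the first of the four, can be evaluated directly on unit viewing rays, which is what makes the formulation camera-configuration-independent. A small sketch, assuming the rays have already been undistorted from fisheye pixels and the ego-motion is known:

```python
import numpy as np

def epipolar_violation(ray1, ray2, R, t):
    """Epipolar residual on the unit sphere: for a static 3D point,
    ray2 . (t x (R ray1)) = 0, so a large magnitude flags potential
    motion. Only the first of the paper's four constraints is shown.

    ray1, ray2 -- viewing rays (from calibrated fisheye pixels) at t1, t2
    R, t       -- ego-motion rotation (3x3) and translation (3,) t1 -> t2
    """
    r1 = ray1 / np.linalg.norm(ray1)
    r2 = ray2 / np.linalg.norm(ray2)
    return abs(np.dot(r2, np.cross(t, R @ r1)))  # 0 for static geometry

# Pure forward motion, static point straight ahead -> residual 0:
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
print(epipolar_violation(np.array([0.0, 0.0, 1.0]),
                         np.array([0.0, 0.0, 1.0]), R, t))
```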