
Wireless Software Synchronization of Multiple Distributed Cameras

Published by: Sameer Ansari
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release as open source libsoftwaresync, an Android implementation of our system, to inspire new types of collective capture applications.
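As a rough sketch of the two stages described above, the Python below estimates a client's clock offset to the leader with an NTP-style round-trip exchange, keeping the lowest-latency sample, and then computes the relative phase between two periodic frame-timestamp streams. The `sock` request/response protocol and all names are illustrative assumptions, not the libsoftwaresync API:

```python
import time

def estimate_clock_offset(sock, n_rounds=50):
    """Stage 1 (sketch): estimate the client-to-leader clock offset via
    NTP-style round trips, keeping the lowest-latency sample.

    `sock` is assumed to expose send()/recv() for a simple ping/timestamp
    exchange (hypothetical; not the actual libsoftwaresync protocol).
    """
    best_delay, best_offset = float("inf"), 0.0
    for _ in range(n_rounds):
        t0 = time.monotonic()             # client send time
        sock.send(b"ping")
        t_leader = float(sock.recv(64))   # leader's clock when it replied
        t1 = time.monotonic()             # client receive time
        delay = t1 - t0
        if delay < best_delay:
            # Assuming a symmetric network path, the leader's reading
            # corresponds to the midpoint of the round trip.
            best_delay = delay
            best_offset = t_leader - (t0 + t1) / 2.0
    return best_offset  # add this to client clock values to get leader time

def relative_phase(client_ts, leader_ts, frame_period):
    """Stage 2 (sketch): phase of the client's timestamp stream relative
    to the leader's, i.e. the shift needed to bring frames into alignment."""
    return (client_ts - leader_ts) % frame_period
```

Filtering on the minimum round-trip delay is the standard way to suppress asymmetric network jitter, which is the main limit on offset accuracy over WiFi.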




Read also

We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras. Our method can capture time-varying surface details without the need for pre-scanned template models. To tackle the severe occlusion challenge in closely interacting scenes, we combine a recently proposed pixel-aligned implicit function with a parametric model for robust reconstruction of the invisible surface areas. An effective attention-aware module is designed to obtain fine-grained geometry details from multi-view images, enabling high-fidelity results. In addition to the spatial attention method, for video inputs we further propose a novel temporal fusion method to alleviate noise and temporal inconsistencies in moving-character reconstruction. For quantitative evaluation, we contribute a high-quality multi-person dataset, MultiHuman, which consists of 150 static scenes with different levels of occlusion and ground-truth 3D human models. Experimental results demonstrate the state-of-the-art performance of our method and its strong generalization to real multi-view video data, outperforming prior works by a large margin.
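As a loose illustration of attention-based multi-view fusion in the spirit of the module described above, here is a minimal PyTorch sketch; the layer shapes and names are assumptions for exposition, not the DeepMultiCap architecture:

```python
import torch
import torch.nn as nn

class MultiViewAttentionFusion(nn.Module):
    """Fuse per-view pixel-aligned features with self-attention
    (a simplified stand-in for an attention-aware fusion module)."""

    def __init__(self, feat_dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, view_feats):
        # view_feats: (batch, n_views, feat_dim), one feature vector per
        # camera view for each queried 3D point.
        attended, _ = self.attn(view_feats, view_feats, view_feats)
        # Pool the attended per-view features into a single descriptor
        # that an implicit-function MLP could consume.
        return attended.mean(dim=1)

# Example: fuse features from 4 views for a batch of 1024 query points.
fused = MultiViewAttentionFusion()(torch.randn(1024, 4, 256))
```

Letting each view attend to the others allows occluded or noisy views to be down-weighted before the pooled feature is decoded into geometry.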
Lan Xu, Lu Fang, Wei Cheng (2016)
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth from the three flying cameras for surface motion tracking of the moving target, while simultaneously calculating the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate competent and plausible surface and motion reconstruction results.
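To make the solver step concrete, the sketch below is a generic Gauss-Newton loop of the kind used to minimize a linearized non-linear least-squares objective; the residual and Jacobian callbacks are placeholders, not the paper's actual surface-registration terms:

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, n_iters=20, tol=1e-8):
    """Minimize 0.5 * ||r(x)||^2 by repeated linearization.

    residual_fn(x) returns r with shape (m,); jacobian_fn(x) returns
    J = dr/dx with shape (m, n). Both are hypothetical stand-ins for
    the surface-tracking residuals described above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        r = residual_fn(x)
        J = jacobian_fn(x)
        # Solve the normal equations J^T J dx = -J^T r for the update.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```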
Coordinating the operations of separate wireless systems at the wavelength level can lead to significant improvements in wireless capabilities. We address a fundamental challenge in distributed radio frequency system cooperation - inter-node phase alignment - which must be accomplished wirelessly and is particularly challenging when the nodes are in relative motion. We present a solution to this problem based on a novel combined high-accuracy ranging and frequency transfer technique. Using this approach, we present the design of the first fully wireless distributed system operating at the wavelength level. We demonstrate the system in the first open-loop coherent distributed beamforming experiment. Inter-node range estimation to support phase alignment was performed using a two-tone stepped-frequency waveform with a single pulse, while a two-tone waveform was used for frequency synchronization, in which the oscillator of a secondary node was disciplined to the primary node. In this concept, secondary nodes are equipped with an adjunct self-mixing circuit that extracts the reference frequency from the captured synchronization waveform. The approach was implemented on a two-node dynamic system using Ettus X310 software-defined radios, with coherent beamforming at 1.5 GHz. We demonstrate distributed beamforming with greater than 90% of the maximum possible coherent gain throughout the displacement of the secondary node over one full cycle of the beamforming frequency.
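To relate the reported 90% figure to residual phase error, the snippet below computes the fraction of ideal coherent array gain achieved for given per-node phase errors; this is a standard array-gain calculation, not code from the paper:

```python
import numpy as np

def coherent_gain_fraction(phase_errors_rad):
    """Fraction of the ideal N^2 coherent power gain achieved by N
    transmitters with the given residual phase errors (radians)."""
    field = np.exp(1j * np.asarray(phase_errors_rad)).sum()
    return np.abs(field) ** 2 / len(phase_errors_rad) ** 2

# Example: a two-node system with a 15-degree residual phase error
# still retains about 98% of the maximum coherent gain.
print(coherent_gain_fraction([0.0, np.deg2rad(15)]))
```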
Electric vehicles are increasingly common, with inductive chargepads considered a convenient and efficient means of charging them. However, drivers are typically poor at aligning the vehicle to the accuracy necessary for efficient inductive charging, making automated alignment of the two charging plates desirable. In parallel to the electrification of the vehicle fleet, automated parking systems that make use of surround-view camera systems are becoming increasingly popular. In this work, we propose a system based on the surround-view camera architecture to detect, localize and automatically align the vehicle with the inductive chargepad. The visual design of chargepads is not standardized and not necessarily known beforehand, so a system that relies on offline training will fail in some situations. We therefore propose an online learning method that leverages the driver's actions when manually aligning the vehicle with the chargepad and combines them with weak supervision from semantic segmentation and depth to learn a classifier that auto-annotates the chargepad in the video for further training. In this way, when faced with a previously unseen chargepad, the driver need only align the vehicle manually a single time. As the chargepad lies flat on the ground, it is not easy to detect from a distance. Thus, we propose using a Visual SLAM pipeline to learn landmarks relative to the chargepad to enable alignment from a greater range. We demonstrate the working system on an automated vehicle, as illustrated in the video https://youtu.be/_cLCmkW4UYo. To encourage further research, we will share a chargepad dataset used in this work.
In 2015/16, the photomultiplier cameras of the H.E.S.S. Cherenkov telescopes CT1-4 underwent a major upgrade. The entire electronics was replaced, using NECTAr chips for the front-end readout. A new ventilation system was installed and several auxiliary components were replaced. In addition, the internal control and readout software was rewritten from scratch in a modern and modular way. Ethernet technology was used wherever possible to ensure flexibility, stability and high bandwidth. An overview of the installed components is given.