
There and Back Again: Learning to Simulate Radar Data for Real-World Applications

Published by: Robert Weston
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Simulating realistic radar data has the potential to significantly accelerate the development of data-driven approaches to radar processing. However, it is fraught with difficulty due to the notoriously complex image formation process. Here we propose to learn a radar sensor model capable of synthesising faithful radar observations based on simulated elevation maps. In particular, we adopt an adversarial approach to learning a forward sensor model from unaligned radar examples. In addition, modelling the backward process (radar to elevation) encourages the output to remain aligned to the world state through a cyclical consistency criterion. The backward model is further constrained to predict, from real radar data, elevation maps that are grounded by partial measurements obtained from corresponding lidar scans. Both models are trained in a joint optimisation. We demonstrate the efficacy of our approach by evaluating a downstream segmentation model trained purely on simulated data in a real-world deployment. This achieves performance within four percentage points of the same model trained entirely on real data.
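
To make the training objective concrete, the following is a minimal Python sketch of how an adversarial forward model, a cycle-consistency term and partial lidar grounding could be combined into a single generator-side loss. The module names, loss weights and masking convention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(G_fwd, G_bwd, D_radar,
               elev_sim, radar_real, lidar_elev, lidar_mask,
               w_cyc=10.0, w_lidar=1.0):
    """Generator-side losses for one step of the joint optimisation.

    G_fwd   : simulated elevation map -> synthetic radar observation
    G_bwd   : radar observation -> elevation map
    D_radar : discriminator trained against unaligned real radar examples
    """
    # Forward model synthesises a radar observation from the simulated elevation map.
    radar_fake = G_fwd(elev_sim)

    # Adversarial term: synthetic radar should be indistinguishable from real scans.
    logits = D_radar(radar_fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Cyclical consistency keeps the synthesised output aligned to the world state.
    cyc = F.l1_loss(G_bwd(radar_fake), elev_sim)

    # Backward model on real radar, grounded by partial lidar elevation measurements.
    elev_pred = G_bwd(radar_real)
    lidar = F.l1_loss(elev_pred * lidar_mask, lidar_elev * lidar_mask)

    return adv + w_cyc * cyc + w_lidar * lidar
```

In full adversarial training, the discriminator would be updated in an alternating step with the usual real/fake objective.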




Read also

We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach. A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory. To support learning from pre-recorded logs, we assume that the world is on rails, meaning neither the agent nor its actions influence the environment. This assumption greatly simplifies the learning problem, factorizing the dynamics into a nonreactive world model and a low-dimensional and compact forward model of the ego-vehicle. Our approach computes action-values for each training trajectory using a tabular dynamic-programming evaluation of the Bellman equations; these action-values in turn supervise the final vision-based driving policy. Despite the world-on-rails assumption, the final driving policy acts well in a dynamic and reactive world. At the time of writing, our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data. Our method is also an order of magnitude more sample-efficient than state-of-the-art model-free reinforcement learning techniques on navigational tasks in the ProcGen benchmark.
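
As a rough illustration of the tabular evaluation step, the sketch below runs a dynamic-programming Bellman backup over a discretised ego state-action space; the array layout, deterministic transitions and discount factor are assumptions made for clarity rather than details taken from the paper.

```python
import numpy as np

def action_values(rewards, next_state, gamma=0.95, iters=100):
    """Tabular dynamic-programming evaluation of the Bellman equations.

    rewards    : (n_states, n_actions) immediate reward for action a in state s
    next_state : (n_states, n_actions) deterministic successor state under the compact
                 ego-vehicle forward model (the world itself is assumed non-reactive)
    """
    Q = np.zeros_like(rewards, dtype=float)
    for _ in range(iters):
        V = Q.max(axis=1)                     # value of acting greedily from each state
        Q = rewards + gamma * V[next_state]   # Bellman backup through the fixed dynamics
    return Q
```

The resulting action-values can then serve as supervision targets for the vision-based policy.
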
Translations between the quantum circuit model and the measurement-based one-way model are useful for verification and optimisation of quantum computations. They make crucial use of a property known as gflow. While gflow is defined for one-way computations allowing measurements in three different planes of the Bloch sphere, most research so far has focused on computations containing only measurements in the XY-plane. Here, we give the first circuit-extraction algorithm to work for one-way computations containing measurements in all three planes and having gflow. The algorithm is efficient and the resulting circuits do not contain ancillae. One-way computations are represented using the ZX-calculus, hence the algorithm also represents the most general known procedure for extracting circuits from ZX-diagrams. In developing this algorithm, we generalise several concepts and results previously known for computations containing only XY-plane measurements. We bring together several known rewrite rules for measurement patterns and formalise them in a unified notation using the ZX-calculus. These rules are used to simplify measurement patterns by reducing the number of qubits while preserving both the semantics and the existence of gflow. The results can be applied to circuit optimisation by translating circuits to patterns and back again.
With increasing automation in passenger vehicles, the study of safe and smooth occupant-vehicle interaction and control transitions is key. In this study, we focus on the development of contextual, semantically meaningful representations of the driver state, which can then be used to determine the appropriate timing and conditions for transfer of control between driver and vehicle. To this end, we conduct a large-scale real-world controlled data study where participants are instructed to take over control from an autonomous agent under different driving conditions while engaged in a variety of distracting activities. These take-over events are captured using multiple driver-facing cameras, which when labelled result in a dataset of control transitions and their corresponding take-over times (TOTs). We then develop and train TOT models that operate sequentially on mid to high-level features produced by computer vision algorithms operating on different driver-facing camera views. The proposed TOT model produces continuous predictions of take-over times without delay, and shows promising qualitative and quantitative results in complex real-world scenarios.
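
A minimal sketch of a sequential take-over-time regressor of the kind described, operating on per-frame feature vectors produced by the vision pipeline, might look as follows; the recurrent architecture, feature dimension and module names are assumptions rather than the authors' model.

```python
import torch
import torch.nn as nn

class TOTRegressor(nn.Module):
    """Predicts a continuous take-over time (TOT) at every frame of a feature sequence."""

    def __init__(self, feat_dim=128, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, feats):                  # feats: (batch, time, feat_dim)
        out, _ = self.rnn(feats)               # causal recurrence, so no look-ahead delay
        return self.head(out).squeeze(-1)      # (batch, time) continuous TOT estimates
```
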
Graham Fyffe, 2019
We prove that the evidence lower bound (ELBO) employed by variational auto-encoders (VAEs) admits non-trivial solutions having constant posterior variances under certain mild conditions, removing the need to learn variances in the encoder. The proof follows from an unexpected journey through an array of topics: the closed-form optimal decoder for Gaussian VAEs, a proof that the decoder is always smooth, a proof that the ELBO at its stationary points is equal to the exact log evidence, and a proof that the posterior variance is merely part of a stochastic estimator of the decoder Hessian. The penalty incurred from using a constant posterior variance is small under mild conditions, and otherwise discourages large variations in the decoder Hessian. From here we derive a simplified formulation of the ELBO as an expectation over a batch, which we call the Batch Information Lower Bound (BILBO). Despite the use of Gaussians, our analysis is broadly applicable -- it extends to any likelihood function that induces a Riemannian metric. Regarding learned likelihoods, we show that the ELBO is optimal in the limit as the likelihood variances approach zero, where it is equivalent to the change of variables formulation employed in normalizing flow networks. Standard optimization procedures are unstable in this limit, so we propose a bounded Gaussian likelihood that is invariant to the scale of the data using a measure of the aggregate information in a batch, which we call Bounded Aggregate Information Sampling (BAGGINS). Combining the two formulations, we construct VAE networks with only half the outputs of ordinary VAEs (no learned variances), yielding improved ELBO scores and scale invariance in experiments. As we perform our analyses irrespective of any particular network architecture, our reformulations may apply to any VAE implementation.
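
To make the constant-posterior-variance result concrete, here is a minimal sketch of a Gaussian VAE ELBO in which the encoder outputs only a mean and the posterior variance is a fixed hyperparameter. The value of sigma2, the module interfaces and the unit likelihood variance are assumptions; this is not the BILBO or BAGGINS formulation itself.

```python
import math
import torch
import torch.nn.functional as F

def elbo_fixed_posterior_var(encoder, decoder, x, sigma2=1e-2):
    """ELBO for a Gaussian VAE with a constant posterior variance sigma2
    (summed over the batch, up to additive constants)."""
    mu = encoder(x)                                      # encoder predicts only the mean
    z = mu + math.sqrt(sigma2) * torch.randn_like(mu)    # reparameterised latent sample
    x_hat = decoder(z)

    # Gaussian reconstruction term with unit likelihood variance.
    recon = -0.5 * F.mse_loss(x_hat, x, reduction='sum')

    # KL( N(mu, sigma2*I) || N(0, I) ) with the fixed isotropic posterior variance.
    n, d = mu.shape                                      # assumes (batch, latent_dim)
    kl = 0.5 * ((mu ** 2).sum() + n * d * (sigma2 - 1.0 - math.log(sigma2)))

    return recon - kl
```
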
Across a wide range of applications, from autonomous vehicles to medical imaging, multi-spectral images provide an opportunity to extract additional information not present in color images. One of the most important steps in making this information readily available is the accurate estimation of dense correspondences between different spectra. Due to the nature of cross-spectral images, most correspondence solving techniques for the visual domain are simply not applicable. Furthermore, most cross-spectral techniques utilize spectra-specific characteristics to perform the alignment. In this work, we aim to address the dense correspondence estimation problem in a way that generalizes to more than one spectrum. We do this by introducing a novel cycle-consistency metric that allows us to self-supervise. This, combined with our spectra-agnostic loss functions, allows us to train the same network across multiple spectra. We demonstrate our approach on the challenging task of dense RGB-FIR correspondence estimation. We also show the performance of our unmodified network on the cases of RGB-NIR and RGB-RGB, where we achieve higher accuracy than similar self-supervised approaches. Our work shows that cross-spectral correspondence estimation can be solved in a common framework that learns to generalize alignment across spectra.
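
The cycle-consistency metric can be pictured as a round-trip check on predicted dense flows between the two spectra, as in the sketch below; the pixel-offset flow convention and the grid_sample-based composition are illustrative assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(flow_ab, flow_ba):
    """flow_ab maps pixels of image A into image B (pixel offsets, shape (B, 2, H, W));
    flow_ba maps B back to A. A consistent pair should compose to roughly zero motion."""
    b, _, h, w = flow_ab.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack((xs, ys)).float().to(flow_ab.device)         # (2, H, W), x then y
    target = grid + flow_ab                                         # where A's pixels land in B

    # Normalise target coordinates to [-1, 1] and sample the reverse flow at those locations.
    norm = torch.stack((2 * target[:, 0] / (w - 1) - 1,
                        2 * target[:, 1] / (h - 1) - 1), dim=-1)    # (B, H, W, 2)
    flow_back = F.grid_sample(flow_ba, norm, align_corners=True)    # (B, 2, H, W)

    return (flow_ab + flow_back).abs().mean()                       # the round trip should cancel
```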
