
Birds of a Feather: Capturing Avian Shape Models from Images

Published by: Yufu Wang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Animals are diverse in shape, but building a deformable shape model for a new species is not always possible due to the lack of 3D data. We present a method to capture new species using an articulated template and images of that species. In this work, we focus mainly on birds. Although birds represent almost twice the number of species as mammals, no accurate shape model is available. To capture a novel species, we first fit the articulated template to each training sample. By disentangling pose and shape, we learn a shape space that captures variation both among species and within each species from image evidence. We learn models of multiple species from the CUB dataset, and contribute new species-specific and multi-species shape models that are useful for downstream reconstruction tasks. Using a low-dimensional embedding, we show that our learned 3D shape space better reflects the phylogenetic relationships among birds than learned perceptual features.
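The abstract does not detail how the shape space is represented; as a rough illustration of "disentangling pose and shape", one common choice is a linear (PCA-style) shape space fit to pose-normalized template fits. The sketch below follows that idea; the file path, array shapes, and the use of plain PCA are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical input: per-image template fits, each a (V, 3) array of vertices
# in a canonical (pose-normalized) frame, stacked over N training samples.
fitted_vertices = np.load("cub_template_fits.npy")   # placeholder path, shape (N, V, 3)
N, V, _ = fitted_vertices.shape

# Flatten each fit into one shape vector and center the data.
X = fitted_vertices.reshape(N, -1)
mean_shape = X.mean(axis=0)
Xc = X - mean_shape

# PCA via SVD gives an orthonormal shape basis; the first k components span a
# low-dimensional shape space capturing within- and across-species variation.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
shape_basis = Vt[:k]                  # (k, 3V) basis vectors
shape_coeffs = Xc @ shape_basis.T     # (N, k) per-sample shape coordinates

def decode(beta):
    """Reconstruct a bird mesh as the mean shape plus a linear combination
    of basis vectors, given a k-dimensional shape code beta."""
    return (mean_shape + beta @ shape_basis).reshape(V, 3)
```

The low-dimensional coordinates `shape_coeffs` play the role of the embedding the abstract compares against phylogenetic relationships.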


Read also

We study properties of some standard network models when the population is split into two types and the connection pattern between the types is varied. The studied models are generalizations of the Erdős–Rényi graph, the configuration model and a preferential attachment graph. For the Erdős–Rényi graph and the configuration model, the focus is on the component structure. We derive expressions for the critical parameter, indicating when there is a giant component in the graph, and study the size of the largest component with the aid of simulations. When the expected degrees in the graph are fixed and the connections are shifted so that more edges connect vertices of different types, we find that the critical parameter decreases. The size of the largest component in the supercritical regime can be both increasing and decreasing as the connections change, depending on the combination of types. For the preferential attachment model, we analyze the degree distributions of the two types and derive explicit expressions for the degree exponents. The exponents are confirmed by simulations that also illustrate other properties of the degree structure.
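A minimal simulation in the spirit of the experiments described above: a two-type Erdős–Rényi graph where the expected degree is held fixed while a fraction of it is shifted from within-type to between-type edges, tracking the size of the largest component. The graph size, mean degree, and sampling loop are illustrative choices, not the paper's exact setup.

```python
import numpy as np
import networkx as nx

def two_type_er(n, p_in, p_out, seed=0):
    """Two-type Erdős–Rényi graph: the first half of the vertices is type 0,
    the second half type 1; each edge appears independently with probability
    p_in within a type and p_out between types."""
    rng = np.random.default_rng(seed)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    types = np.array([0] * (n // 2) + [1] * (n - n // 2))
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if types[i] == types[j] else p_out
            if rng.random() < p:
                G.add_edge(i, j)
    return G

# Keep the expected degree fixed and shift more of it onto between-type edges.
n, mean_degree = 1000, 2.0
for frac_between in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_out = frac_between * mean_degree / (n / 2)
    p_in = (1 - frac_between) * mean_degree / (n / 2 - 1)
    G = two_type_er(n, p_in, p_out)
    giant = max(nx.connected_components(G), key=len)
    print(f"between-type fraction {frac_between:.2f}: |giant|/n = {len(giant)/n:.3f}")
```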
This paper establishes a close relationship among four information-theoretic problems, namely Campbell source coding, Arikan guessing, Huleihel et al.'s memoryless guessing, and the Bunte and Lapidoth tasks partitioning problem. We first show that the aforementioned problems are mathematically related via a general moment minimization problem whose optimum solution is given in terms of Rényi entropy. We then propose a general framework for the mismatched version of these problems and establish all the asymptotic results using this framework. Further, we study an ordered tasks partitioning problem that turns out to be a generalisation of Arikan's guessing problem. Finally, with the help of this general framework, we establish an equivalence among all these problems, in the sense that knowing an asymptotically optimal solution in one problem helps us find the same in all other problems.
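To make the Rényi-entropy connection concrete, here is a small numeric sketch for the guessing problem: ordering guesses by decreasing probability minimizes the moments of the guess count (a standard fact due to Massey and Arikan), and Rényi entropy of order 1/(1+ρ) governs the asymptotic growth of the ρ-th guessing moment for i.i.d. sources. The toy distribution below is an assumption for illustration.

```python
import itertools
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in nats) of a probability vector p."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))            # Shannon entropy as the limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def guessing_moment(p, order, rho=1.0):
    """rho-th moment of the guess count when symbols are guessed in `order`."""
    p = np.asarray(p, dtype=float)
    return sum((k + 1) ** rho * p[i] for k, i in enumerate(order))

p = [0.5, 0.25, 0.15, 0.10]
best = min(itertools.permutations(range(len(p))),
           key=lambda o: guessing_moment(p, o))
print("optimal guessing order:", best)            # decreasing-probability order
print("E[G] =", guessing_moment(p, best))
print("Renyi H_1/2 =", renyi_entropy(p, 0.5))     # exponent for E[G] over i.i.d. blocks
```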
How cooperation emerges is a long-standing and interdisciplinary problem. Game-theoretical studies on social dilemmas reveal that altruistic incentives are critical to the emergence of cooperation but their analyses are limited to stateless games. For more realistic scenarios, multi-agent reinforcement learning has been used to study sequential social dilemmas (SSDs). Recent works show that learning to incentivize other agents can promote cooperation in SSDs. However, we find that, with these incentivizing mechanisms, the team cooperation level does not converge and regularly oscillates between cooperation and defection during learning. We show that a second-order social dilemma resulting from the incentive mechanisms is the main reason for such fragile cooperation. We formally analyze the dynamics of second-order social dilemmas and find that a typical tendency of humans, called homophily, provides a promising solution. We propose a novel learning framework to encourage homophilic incentives and show that it achieves stable cooperation in both SSDs of public goods and tragedy of the commons.
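For readers unfamiliar with the underlying dilemma, the sketch below shows the standard public goods game payoff structure that makes cooperation fragile: contributing benefits the group but is individually dominated by defecting. This only illustrates the base dilemma, not the paper's incentive or homophily learning framework, and the contribution and multiplier values are arbitrary.

```python
import numpy as np

def public_goods_payoffs(actions, contribution=1.0, multiplier=1.6):
    """Payoffs in an n-player public goods game: contributions are scaled by
    `multiplier` and split equally, so cooperating is individually costly
    (multiplier / n < 1) even though it raises the group total."""
    actions = np.asarray(actions, dtype=float)     # 1 = cooperate, 0 = defect
    pot = multiplier * contribution * actions.sum()
    return pot / len(actions) - contribution * actions

print(public_goods_payoffs([1, 1, 1, 1]))   # all cooperate: each earns 0.6
print(public_goods_payoffs([0, 1, 1, 1]))   # lone defector earns 1.2, cooperators 0.2
```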
Birds-Eye-View (BEV) maps have emerged as one of the most powerful representations for scene understanding due to their ability to provide rich spatial context while being easy to interpret and process. However, generating BEV maps requires complex multi-stage paradigms that encapsulate a series of distinct tasks such as depth estimation, ground plane estimation, and semantic segmentation. These sub-tasks are often learned in a disjoint manner which prevents the model from holistic reasoning and results in erroneous BEV maps. Moreover, existing algorithms only predict the semantics in the BEV space, which limits their use in applications where the notion of object instances is critical. In this work, we present the first end-to-end learning approach for directly predicting dense panoptic segmentation maps in the BEV, given a single monocular image in the frontal view (FV). Our architecture follows the top-down paradigm and incorporates a novel dense transformer module consisting of two distinct transformers that learn to independently map vertical and flat regions in the input image from the FV to the BEV. Additionally, we derive a mathematical formulation for the sensitivity of the FV-BEV transformation which allows us to intelligently weight pixels in the BEV space to account for the varying descriptiveness across the FV image. Extensive evaluations on the KITTI-360 and nuScenes datasets demonstrate that our approach exceeds the state-of-the-art in the PQ metric by 3.61 pp and 4.93 pp respectively.
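For context on the FV-to-BEV transformation itself, here is the classical flat-ground baseline (inverse perspective mapping) that the multi-stage paradigms above build on: each pixel's viewing ray is intersected with an estimated ground plane. This is not the paper's dense transformer; the camera intrinsics and height below are made-up values for illustration.

```python
import numpy as np

def fv_pixel_to_bev(u, v, K, cam_height):
    """Inverse perspective mapping: intersect the viewing ray of pixel (u, v)
    with a flat ground plane, returning (x_lateral, z_forward) in metres.
    Assumes camera axes x right, y down, z forward, with the ground plane
    lying cam_height metres below the camera centre."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if d[1] <= 1e-6:            # rays at or above the horizon never hit the ground
        return None
    t = cam_height / d[1]
    return np.array([t * d[0], t * d[2]])

# Toy intrinsics (hypothetical) and a pixel below the horizon line.
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])
print(fv_pixel_to_bev(800, 500, K, cam_height=1.6))
```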
Lun Luo, Si-Yuan Cao, Bin Han (2021)
Recognizing places using Lidar in large-scale environments is challenging due to the sparse nature of point cloud data. In this paper we present BVMatch, a Lidar-based frame-to-frame place recognition framework, that is capable of estimating 2D relative poses. Based on the assumption that the ground area can be approximated as a plane, we uniformly discretize the ground area into grids and project 3D Lidar scans to birds-eye view (BV) images. We further use a bank of Log-Gabor filters to build a maximum index map (MIM) that encodes the orientation information of the structures in the images. We analyze the orientation characteristics of MIM theoretically and introduce a novel descriptor called birds-eye view feature transform (BVFT). The proposed BVFT is insensitive to rotation and intensity variations of BV images. Leveraging the BVFT descriptors, we unify the Lidar place recognition and pose estimation tasks into the BVMatch framework. The experiments conducted on three large-scale datasets show that BVMatch outperforms the state-of-the-art methods in terms of both recall rate of place recognition and pose estimation accuracy.
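The BV projection step described above (discretize the ground area into grids, project the Lidar scan onto them) can be sketched in a few lines. Grid size, range limits, and the use of a simple point-count density image are illustrative assumptions, not necessarily the exact variant used by BVMatch.

```python
import numpy as np

def lidar_to_bv_image(points, grid_size=0.4, x_range=(-40, 40), y_range=(-40, 40)):
    """Project a Lidar scan (N, 3 array of x, y, z) onto a bird's-eye-view
    density image by discretizing the ground plane into square grids and
    counting the points that fall into each cell."""
    xs, ys = points[:, 0], points[:, 1]
    mask = ((xs >= x_range[0]) & (xs < x_range[1]) &
            (ys >= y_range[0]) & (ys < y_range[1]))
    cols = ((xs[mask] - x_range[0]) / grid_size).astype(int)
    rows = ((ys[mask] - y_range[0]) / grid_size).astype(int)
    H = int((y_range[1] - y_range[0]) / grid_size)
    W = int((x_range[1] - x_range[0]) / grid_size)
    img = np.zeros((H, W), dtype=np.float32)
    np.add.at(img, (rows, cols), 1.0)
    # Normalize to an 8-bit image before feeding it to filters such as Log-Gabor.
    return np.clip(img / max(img.max(), 1.0) * 255.0, 0, 255).astype(np.uint8)

bv = lidar_to_bv_image(np.random.uniform(-40, 40, size=(100000, 3)))
print(bv.shape, bv.dtype)
```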