
The Whole Is Greater Than the Sum of Its Nonrigid Parts

Posted by Oshri Halimi
Publication date: 2020
Research field: Informatics engineering
Paper language: English





According to Aristotle, a philosopher in Ancient Greece, the whole is greater than the sum of its parts. This observation was adopted to explain human perception by the Gestalt psychology school of thought in the twentieth century. Here, we claim that by observing a part of an object which was previously acquired as a whole, one can deal with both partial matching and shape completion in a holistic manner. More specifically, given the geometry of a full, articulated object in a given pose, as well as a partial scan of the same object in a different pose, we address the problem of matching the part to the whole while simultaneously reconstructing the new pose from its partial observation. Our approach is data-driven, and takes the form of a Siamese autoencoder without the requirement of a consistent vertex labeling at inference time; as such, it can be used on unorganized point clouds as well as on triangle meshes. We demonstrate the practical effectiveness of our model in the applications of single-view deformable shape completion and dense shape correspondence, both on synthetic and real-world geometric data, where we outperform prior work on these tasks by a large margin.
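To make the architecture above concrete, here is a minimal PyTorch sketch (our illustration, not the authors' code) of a Siamese point-cloud autoencoder: a single PointNet-style encoder is applied with shared weights to both the full shape and the partial scan, and a decoder regresses a complete point cloud from either latent code. The layer sizes, pooling choice, and class names are assumptions.

```python
# Minimal Siamese point-cloud autoencoder (illustrative, assumed shapes).
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP + order-invariant max pool."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )

    def forward(self, pts):                     # pts: (B, N, 3)
        feat = self.mlp(pts.transpose(1, 2))    # (B, latent_dim, N)
        return feat.max(dim=2).values           # (B, latent_dim)

class PointDecoder(nn.Module):
    """Regresses a fixed-size full point cloud from a latent code."""
    def __init__(self, latent_dim=512, num_points=2048):
        super().__init__()
        self.num_points = num_points
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, z):                       # z: (B, latent_dim)
        return self.fc(z).view(-1, self.num_points, 3)

class SiameseAE(nn.Module):
    """One encoder shared by both branches; one decoder for completion."""
    def __init__(self):
        super().__init__()
        self.enc = PointEncoder()
        self.dec = PointDecoder()

    def forward(self, full_pts, partial_pts):
        z_full = self.enc(full_pts)             # branch 1: the whole
        z_part = self.enc(partial_pts)          # branch 2: same weights
        return self.dec(z_full), self.dec(z_part)

model = SiameseAE()
full = torch.randn(4, 2048, 3)                  # whole shape, reference pose
partial = torch.randn(4, 800, 3)                # partial scan, new pose
rec_full, completed = model(full, partial)
print(completed.shape)                          # torch.Size([4, 2048, 3])
```

The max pooling over points makes the encoder invariant to point ordering, which is what allows such a model to run on unorganized point clouds without a consistent vertex labeling.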




Read also

292 - B. Jain, D. Spergel, R. Bean 2015
The focus of this report is on the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade's astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential of yielding new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. We identify some of the key science enabled by the combined surveys and the key technical challenges in achieving the synergies.
We prove a new lower bound on the parity decision tree complexity $\mathsf{D}_{\oplus}(f)$ of a Boolean function $f$. Namely, the granularity of a Boolean function $f$ is the smallest $k$ such that all Fourier coefficients of $f$ are integer multiples of $1/2^k$. We show that $\mathsf{D}_{\oplus}(f) \geq k+1$. This lower bound improves on the lower bounds through the sparsity of $f$ and through the degree of $f$ over $\mathbb{F}_2$. Using our lower bound we determine the exact parity decision tree complexity of several important Boolean functions, including majority and recursive majority. For majority the complexity is $n - \mathsf{B}(n) + 1$, where $\mathsf{B}(n)$ is the number of ones in the binary representation of $n$. For recursive majority the complexity is $\frac{n+1}{2}$. Finally, we provide an example of a function for which our lower bound is not tight. Our results imply a new lower bound of $n - \mathsf{B}(n)$ on the multiplicative complexity of majority.
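To make the granularity definition concrete, the following exact-arithmetic sketch (ours, not the paper's) computes all Fourier coefficients of a Boolean function and the resulting bound $k+1$. The $\pm 1$ output convention is an assumption on our part, since granularity depends on the convention; for 3-bit majority the sketch reports granularity $1$, consistent with the stated exact complexity $n - \mathsf{B}(n) + 1 = 2$.

```python
# Sketch: compute granularity exactly and evaluate the bound D_+(f) >= k + 1.
# Helper names are ours; the +/-1 output convention is assumed.
from fractions import Fraction
from itertools import product

def fourier_coefficients(f, n):
    """f_hat(S) = 2^-n * sum_x f(x) * (-1)^(sum_{i in S} x_i), x in {0,1}^n."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = Fraction(0)
        for x in product((0, 1), repeat=n):
            sign = (-1) ** sum(s * xi for s, xi in zip(S, x))
            total += f(x) * sign
        coeffs[S] = total / 2 ** n
    return coeffs

def granularity(f, n):
    """Smallest k such that every f_hat(S) is an integer multiple of 1/2^k."""
    k = 0
    for c in fourier_coefficients(f, n).values():
        d = c.denominator               # a power of two for a +/-1-valued f
        k = max(k, d.bit_length() - 1)  # log2 of the denominator
    return k

maj3 = lambda x: 1 if sum(x) >= 2 else -1        # 3-bit majority, +/-1-valued
k = granularity(maj3, 3)
print("granularity:", k)                         # 1
print("parity decision tree depth >=", k + 1)    # 2 = n - B(n) + 1 for n = 3
```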
This entry is aimed at describing cloud physics with an emphasis on fluid dynamics. As is inevitable for a review of an enormously complicated problem, it is highly selective and reflects the authors' focus. The range of scales involved, and the relevant physics at each scale, is described. Particular attention is given to droplet dynamics and growth, and to turbulence with and without thermodynamics.
As an essential ingredient of modern deep learning, the attention mechanism, especially self-attention, plays a vital role in global correlation discovery. However, is hand-crafted attention irreplaceable when modeling the global context? Our intriguing finding is that self-attention is not better than the matrix decomposition (MD) models developed 20 years ago regarding the performance and computational cost of encoding long-distance dependencies. We model the global context issue as a low-rank recovery problem and show that its optimization algorithms can help design global information blocks. This paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving MDs to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module self-attention when carefully coping with gradients back-propagated through the MDs. Comprehensive experiments are conducted on vision tasks where it is crucial to learn the global context, including semantic segmentation and image generation, demonstrating significant improvements over self-attention and its variants.
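As a rough illustration of the low-rank-recovery idea (not the published Hamburger implementation), the sketch below flattens the spatial grid, runs a few NMF multiplicative-update steps to factorize the non-negative feature matrix, and returns the low-rank reconstruction as global context. The rank, the step count, and the omission of the surrounding learned layers and gradient handling are simplifications.

```python
# Sketch of a matrix-decomposition global-context block: factorize the
# flattened non-negative features X ~ D @ C with rank r << HW via standard
# NMF multiplicative updates, then return the low-rank reconstruction.
import torch

def nmf_lowrank(x, rank=16, steps=6, eps=1e-6):
    """x: (B, C, H, W) non-negative features -> low-rank global context."""
    b, c, h, w = x.shape
    X = x.reshape(b, c, h * w)                        # (B, C, HW)
    D = torch.rand(b, c, rank, device=x.device)       # dictionary atoms
    Cc = torch.rand(b, rank, h * w, device=x.device)  # per-location codes
    for _ in range(steps):
        # Lee-Seung multiplicative updates for min ||X - D Cc||_F, D, Cc >= 0
        Cc = Cc * (D.transpose(1, 2) @ X) / (D.transpose(1, 2) @ D @ Cc + eps)
        D = D * (X @ Cc.transpose(1, 2)) / (D @ Cc @ Cc.transpose(1, 2) + eps)
    return (D @ Cc).reshape(b, c, h, w)               # low-rank reconstruction

feats = torch.relu(torch.randn(2, 64, 32, 32))        # ReLU keeps X >= 0
context = nmf_lowrank(feats)
print(context.shape)                                  # torch.Size([2, 64, 32, 32])
```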
The goal of few-shot learning is to learn a classifier that can recognize unseen classes from limited labeled support data. A common practice for this task is to train a model on the base set first and then transfer to novel classes through fine-tuning (here the fine-tuning procedure is defined as transferring knowledge from base to novel data, i.e. learning to transfer in the few-shot scenario) or meta-learning. However, as the base classes have no overlap with the novel set, simply transferring the whole of the base knowledge is not an optimal solution, since some knowledge in the base model may be biased or even harmful to the novel classes. In this paper, we propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model. Specifically, layers chosen to be fine-tuned are assigned different learning rates to control the extent of preserved transferability. To determine which layers to recast and what learning rates to use for them, we introduce an evolutionary-search-based method that efficiently locates the target layers and determines their individual learning rates at the same time. We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance in both meta-learning and non-meta-based frameworks. Furthermore, we extend our method to the conventional pre-training + fine-tuning paradigm and obtain consistent improvement.
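The freeze-or-fine-tune mechanism is easy to sketch. Below is a hypothetical PyTorch example (not the authors' code) that freezes unselected layers of a base model and assigns each fine-tuned layer its own learning rate through optimizer parameter groups; the chosen layers and rates are placeholders for what the evolutionary search would output.

```python
# Hypothetical freeze / per-layer fine-tuning setup (illustrative names).
import torch
import torchvision

# Base model; in practice this would carry weights trained on the base set.
model = torchvision.models.resnet18(weights=None)

# Stand-in for the evolutionary search output: which layers to fine-tune
# and at what individual learning rates.
plan = {"layer3": 1e-4, "layer4": 5e-4, "fc": 1e-3}

param_groups = []
for name, module in model.named_children():
    if name in plan:
        # Fine-tuned layer: its own learning rate controls how much of the
        # base knowledge is preserved versus adapted.
        param_groups.append({"params": module.parameters(), "lr": plan[name]})
    else:
        # Frozen layer: base knowledge is transferred unchanged.
        for p in module.parameters():
            p.requires_grad = False

optimizer = torch.optim.SGD(param_groups, momentum=0.9)
for g in optimizer.param_groups:
    print(g["lr"], sum(p.numel() for p in g["params"]))
```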
