Matching of images and analysis of shape differences are traditionally pursued by energy minimization over paths of deformations acting to match the shape objects. In the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, iterative gradient descents on the matching functional lead to matching algorithms informally known as Beg algorithms. When stochasticity is introduced to model stochastic variability of shapes and to provide more realistic models of observed shape data, the corresponding matching problem can be solved with a stochastic Beg algorithm, similar to the finite-temperature string method used in rare event sampling. In this paper, we apply a stochastic model compatible with the geometry of the LDDMM framework to obtain a stochastic model of images, and we derive the stochastic version of the Beg algorithm, which we compare with the string method and an expectation-maximization optimization of posterior likelihoods. The algorithm and its use for statistical inference are tested on stochastic LDDMM landmarks and images.
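To make the Beg-style iteration concrete, the following is a minimal sketch of deterministic LDDMM landmark matching by gradient descent on the matching functional, written in PyTorch so the gradient of the energy is obtained by autograd. The Gaussian kernel, its width sigma, the number of time steps T, the data-fidelity weight lam, and the optimizer settings are illustrative assumptions, not values from the paper; the stochastic variant discussed in the abstract, which perturbs the optimization, is not shown here.

    # Hypothetical sketch of a Beg-style gradient descent for LDDMM landmark matching.
    import torch

    def gaussian_kernel(x, y, sigma=0.5):
        """Gaussian RKHS kernel matrix K_ij = exp(-|x_i - y_j|^2 / (2 sigma^2))."""
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return torch.exp(-d2 / (2.0 * sigma ** 2))

    def shoot(q0, momenta, sigma=0.5):
        """Flow landmarks q0 forward under time-dependent momenta (Euler steps)."""
        T = momenta.shape[0]
        dt = 1.0 / T
        q, energy = q0, 0.0
        for t in range(T):
            K = gaussian_kernel(q, q, sigma)
            v = K @ momenta[t]                             # velocity at the landmarks
            energy = energy + dt * (momenta[t] * v).sum()  # kinetic energy <m_t, K m_t>
            q = q + dt * v
        return q, energy

    def match(q0, q_target, T=10, lam=10.0, steps=300, lr=0.1, sigma=0.5):
        """Gradient descent on E(m) = int <m_t, K m_t> dt + lam * ||q_1 - q_target||^2."""
        momenta = torch.zeros(T, *q0.shape, requires_grad=True)
        opt = torch.optim.Adam([momenta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            q1, energy = shoot(q0, momenta, sigma)
            loss = energy + lam * ((q1 - q_target) ** 2).sum()
            loss.backward()
            opt.step()
        return momenta.detach()

    # Toy usage: match three source landmarks to slightly displaced targets.
    q0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    q_target = q0 + torch.tensor([[0.2, 0.1], [-0.1, 0.2], [0.1, -0.1]])
    m = match(q0, q_target)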
Depth scans acquired from different views may contain nuisances such as noise, occlusion, and varying point density. We propose a novel Signature of Geometric Centroids descriptor, supporting direct shape matching on the scans, without requiring any …
We propose a self-supervised approach to deep surface deformation. Given a pair of shapes, our algorithm directly predicts a parametric transformation from one shape to the other respecting correspondences. Our insight is to use cycle-consistency to …
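As an illustration of the cycle-consistency idea mentioned in the preceding abstract, here is a hedged sketch of a round-trip consistency loss for a learned point-cloud deformation network. The DeformNet architecture, the mean-pooled target conditioning, and the loss are assumptions for illustration only, not the authors' model.

    # Hypothetical cycle-consistency loss for learned shape deformation.
    import torch
    import torch.nn as nn

    class DeformNet(nn.Module):
        """Toy network: predicts a per-point offset that deforms the source toward the target."""
        def __init__(self, dim=3, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, src, tgt):
            # Condition each source point on a global target descriptor (mean pooling).
            tgt_feat = tgt.mean(dim=1, keepdim=True).expand_as(src)
            return src + self.mlp(torch.cat([src, tgt_feat], dim=-1))

    def cycle_consistency_loss(net, shape_a, shape_b):
        """Deform A toward B, then the result back toward A; penalize the round-trip error."""
        a_to_b = net(shape_a, shape_b)
        back_to_a = net(a_to_b, shape_a)
        return ((back_to_a - shape_a) ** 2).mean()

    # Toy usage with random point clouds of 1024 points in 3D.
    net = DeformNet()
    a, b = torch.randn(1, 1024, 3), torch.randn(1, 1024, 3)
    loss = cycle_consistency_loss(net, a, b)
    loss.backward()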
Traditional image recognition involves identifying the key object in a portrait-type image with a single object focus (ILSVRC, AlexNet, and VGG). More recent approaches consider dense image recognition: segmenting an image with appropriate bounding …
We propose a new approach to determine correspondences between image pairs in the wild under large changes in illumination, viewpoint, context, and material. While other approaches find correspondences between pairs of images by treating the images …
Image-to-image translation (I2I) aims to transfer images from a source domain to a target domain while preserving the content representations. I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of …