We consider distance functions between conditional distribution functions. We focus on the Wasserstein metric and its Gaussian case, known as the Frechet Inception Distance (FID). We develop condition
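For reference, the Gaussian case mentioned here has a well-known closed form: the squared 2-Wasserstein distance between two Gaussians $\mathcal{N}(\mu_1,\Sigma_1)$ and $\mathcal{N}(\mu_2,\Sigma_2)$ is
$$ W_2^2\big(\mathcal{N}(\mu_1,\Sigma_1),\mathcal{N}(\mu_2,\Sigma_2)\big) = \lVert \mu_1-\mu_2\rVert_2^2 + \operatorname{tr}\!\Big(\Sigma_1+\Sigma_2-2\big(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big), $$
which is exactly the quantity the FID evaluates on feature statistics.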
Generative adversarial networks, or GANs, are a type of generative modeling framework. GANs involve a pair of neural networks that compete to iteratively create fake data indistinguishable from the real data. One notable application of GANs is generating fake human faces, also known as deep fakes, so named for the deep learning algorithms at the core of the GAN framework. Measuring the quality of the generated images is inherently subjective, but attempts have been made to objectify quality using standardized metrics. One such objective metric is the Frechet Inception Distance (FID), which measures the difference between the distributions of feature vectors for two separate datasets of images. There are situations in which images with low perceptual quality are not assigned appropriate FID scores. We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects. Our proposed method integrates three levels of feature abstraction to evaluate the quality of generated images. Experimental evaluations show better performance of the proposed method on distorted images.
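To make the baseline concrete, here is a minimal sketch of the standard single-level FID computation; it is not the authors' three-level method, and the function and variable names are ours:

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors.

    feats_real, feats_fake: arrays of shape (n_samples, n_features),
    e.g. Inception-v3 activations for the two image datasets.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the product of the two covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary numerical noise
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```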
In this paper we study a wide range of variants for computing the (discrete and continuous) Frechet distance between uncertain curves. We define an uncertain curve as a sequence of uncertainty regions, where each region is a disk, a line segment, or a set of points. A realisation of a curve is a polyline connecting one point from each region. Given an uncertain curve and a second (certain or uncertain) curve, we seek to compute the lower and upper bound Frechet distance, which are the minimum and maximum Frechet distance over all realisations of the curves. We prove that both the upper and lower bound problems are NP-hard for the continuous Frechet distance in several uncertainty models, and that the upper bound problem remains hard for the discrete Frechet distance. In contrast, the lower bound (discrete and continuous) Frechet distance can be computed in polynomial time. Furthermore, we show that computing the expected discrete Frechet distance is #P-hard when the uncertainty regions are modelled as point sets or line segments. The construction also extends to show #P-hardness for computing the continuous Frechet distance when regions are modelled as point sets. On the positive side, we argue that in any constant dimension there is an FPTAS for the lower bound problem when $\Delta / \delta$ is polynomially bounded, where $\delta$ is the Frechet distance and $\Delta$ bounds the diameter of the regions. We then argue there is a near-linear-time 3-approximation for the decision problem when the regions are convex and roughly $\delta$-separated. Finally, we also study the setting with Sakoe--Chiba time bands, where we restrict the alignment between the two curves, and give polynomial-time algorithms for upper bound and expected discrete and continuous Frechet distance for uncertainty regions modelled as point sets.
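For background (not the paper's uncertain-curve algorithms): the discrete Frechet distance between two fixed, certain polylines can be computed with the classic Eiter--Mannila dynamic program; the uncertain variants above minimise or maximise this quantity over realisations. A minimal sketch:

```python
import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q, given as
    lists of coordinate tuples (Eiter & Mannila, 1994)."""
    n, m = len(P), len(Q)
    d = lambda a, b: math.dist(a, b)
    ca = [[0.0] * m for _ in range(n)]
    ca[0][0] = d(P[0], Q[0])
    for i in range(1, n):                      # first column: advance on P only
        ca[i][0] = max(ca[i - 1][0], d(P[i], Q[0]))
    for j in range(1, m):                      # first row: advance on Q only
        ca[0][j] = max(ca[0][j - 1], d(P[0], Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]),
                           d(P[i], Q[j]))
    return ca[n - 1][m - 1]
```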
The Frechet distance is a popular similarity measure between curves. For some applications, it is desirable to match the curves under translation before computing the Frechet distance between them. This variant is called the Translation Invariant Frechet distance, and algorithms to compute it are well studied. The query version, finding an optimal placement in the plane for a query segment where the Frechet distance becomes minimized, is much less well understood. We study Translation Invariant Frechet distance queries in a restricted setting of horizontal query segments. More specifically, we preprocess a trajectory in $\mathcal{O}(n^2 \log^2 n)$ time and $\mathcal{O}(n^{3/2})$ space, such that for any subtrajectory and any horizontal query segment we can compute their Translation Invariant Frechet distance in $\mathcal{O}(\text{polylog}\, n)$ time. We hope this will be a step towards answering Translation Invariant Frechet queries between arbitrary trajectories.
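As a point of reference only (a naive brute-force sketch, nowhere near the paper's preprocessing bounds, using the discrete Frechet distance from the previous sketch as a stand-in): the translation-invariant distance can be approximated by scanning a grid of candidate translations for the query segment.

```python
import numpy as np

def tif_brute_force(traj, seg, grid=25):
    """Approximate the translation-invariant discrete Frechet distance by
    evaluating a grid of candidate translations for the query segment.
    Relies on discrete_frechet() from the sketch above."""
    traj, seg = np.asarray(traj, float), np.asarray(seg, float)
    # Any useful translation places seg near traj's bounding box.
    lo = traj.min(axis=0) - seg.max(axis=0)
    hi = traj.max(axis=0) - seg.min(axis=0)
    P = [tuple(p) for p in traj]
    best = float("inf")
    for tx in np.linspace(lo[0], hi[0], grid):
        for ty in np.linspace(lo[1], hi[1], grid):
            Q = [tuple(q + (tx, ty)) for q in seg]
            best = min(best, discrete_frechet(P, Q))
    return best
```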
The Frechet distance is a popular distance measure for curves which naturally lends itself to fundamental computational tasks, such as clustering, nearest-neighbor searching, and spherical range searching in the corresponding metric space. However, its inherent complexity poses considerable computational challenges in practice. To address this problem, we study the distortion of the probabilistic embedding that results from projecting the curves to a randomly chosen line. Such an embedding could be used in combination with, e.g., locality-sensitive hashing. We show that in the worst case and under reasonable assumptions, the discrete Frechet distance between two polygonal curves of complexity $t$ in $\mathbb{R}^d$, where $d \in \{2,3,4,5\}$, degrades by a factor linear in $t$ with constant probability. We show upper and lower bounds on the distortion. We also evaluate our findings empirically on a benchmark data set. The preliminary experimental results stand in stark contrast to our lower bounds. They indicate that highly distorted projections happen very rarely in practice, and only for strongly conditioned input curves. Keywords: Frechet distance, metric embeddings, random projections
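The embedding under study is easy to state: choose a uniformly random direction and project every vertex of both curves onto the corresponding line through the origin. A minimal sketch (names are ours), again reusing discrete_frechet() from above to measure the distortion of one draw:

```python
import numpy as np

def project_to_random_line(P, Q, rng=None):
    """Project two curves in R^d onto a uniformly random line through
    the origin, returning them as sequences of 1-d points."""
    rng = np.random.default_rng(rng)
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    u = rng.standard_normal(P.shape[1])
    u /= np.linalg.norm(u)      # normalised Gaussian = uniform direction
    to_1d = lambda C: [(float(p @ u),) for p in C]
    return to_1d(P), to_1d(Q)

# Projection onto a line is 1-Lipschitz, so the Frechet distance can only
# shrink; the distortion of a single draw is therefore the ratio (>= 1):
# distortion = discrete_frechet(P, Q) / discrete_frechet(*project_to_random_line(P, Q))
```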
As a vital problem in classification-oriented transfer learning, unsupervised domain adaptation (UDA) has attracted widespread attention in recent years. Previous UDA methods assume that the marginal distributions of different domains are shifted while ignoring the discriminant information in the label distributions. This leads to classification performance degradation in real applications. In this work, we focus on the conditional distribution shift problem, which is of great concern to current conditional invariant models. We aim to seek a kernel covariance embedding for conditional distributions, which remains unexplored. Theoretically, we propose the Conditional Kernel Bures (CKB) metric for characterizing conditional distribution discrepancy, and derive an empirical estimate of the CKB metric without introducing the implicit kernel feature map. This provides an interpretable approach to understanding the knowledge transfer mechanism. The established consistency theory of the empirical estimate provides a theoretical guarantee for convergence. A conditional distribution matching network is proposed to learn conditional invariant and discriminative features for UDA. Extensive experiments and analysis show the superiority of our proposed model.
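For context, the CKB metric builds on the classical Bures distance between covariance operators, whose standard form is
$$ d_B^2(\Sigma_1,\Sigma_2) = \operatorname{tr}(\Sigma_1) + \operatorname{tr}(\Sigma_2) - 2\,\operatorname{tr}\Big(\big(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big); $$
the conditional, kernel-embedded version and its empirical estimate are the paper's contribution.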