Synonymy and translational equivalence are the relations of sameness of meaning within and across languages, respectively. As the principal relations in wordnets and multi-wordnets, they are vital to computational lexical semantics, yet the field suffers from the absence of a common formal framework defining their properties and relationship. This paper proposes a unifying treatment of these two relations, validated by experiments on existing resources. In our view, synonymy and translational equivalence are simply different types of semantic identity. The theory establishes a solid foundation for critically re-evaluating prior work in cross-lingual semantics and for facilitating the creation, verification, and improvement of lexical resources.
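The core idea of treating both relations as kinds of semantic identity can be sketched as taking the equivalence closure of a single identity relation over (language, word) pairs, e.g. with union-find. The word pairs below are hypothetical examples of ours, not data from the paper:

```python
# Sketch (not the paper's method): synonymy (within-language identity)
# and translational equivalence (cross-language identity) merged into
# one equivalence relation via union-find.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

uf = UnionFind()
# synonymy: identity within English
uf.union(("en", "car"), ("en", "automobile"))
# translational equivalence: identity across English and French
uf.union(("en", "car"), ("fr", "voiture"))

# both relations place all three words in one cross-lingual concept class
assert uf.find(("en", "automobile")) == uf.find(("fr", "voiture"))
```

Under this reading, a cross-lingual concept is simply an equivalence class of the combined identity relation.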
In a recent issue of Linguistics and Philosophy, Kasmi and Pelletier (1998) (K&P) and Westerstahl (1998) criticize Zadrozny's (1994) argument that any semantics can be represented compositionally. The argument is based upon Zadrozny's theorem that every meaning function m can be encoded by a function mu such that (i) for any expression E of a specified language L, m(E) can be recovered from mu(E), and (ii) mu is a homomorphism from the syntactic structures of L to the interpretations of L. In both cases, the primary motivation for the objections brought against Zadrozny's argument is the view that his encoding of the original meaning function does not properly reflect the synonymy relations posited for the language. In this paper, we argue that these technical criticisms do not go through. In particular, we prove that mu properly encodes synonymy relations, i.e., if two expressions are synonymous, then their compositional meanings are identical. This corrects some misconceptions about the function mu, e.g. in Janssen (1997). We suggest that semanticists have been anxious to preserve compositionality as a significant constraint on semantic theory because it has been mistakenly regarded as a condition that must be satisfied by any theory that sustains a systematic connection between the meaning of an expression and the meanings of its parts. Recent developments in formal and computational semantics show that systematic theories of meaning need not be compositional.
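The two conditions on the encoding, and the synonymy claim defended here, can be stated compactly (our gloss of Zadrozny's construction, with $s \cdot t$ denoting the syntactic combination of expressions $s$ and $t$; the notation is our assumption, not a quotation):

```latex
% (ii) \mu is a homomorphism over syntactic combination:
\mu(s \cdot t) = \mu(s)\bigl(\mu(t)\bigr)
% (i) the original meaning is recoverable from the encoding
% (in Zadrozny's construction, by self-application):
m(s) = \mu(s)(s)
% the synonymy-preservation claim argued for in this paper:
m(s) = m(t) \;\Longrightarrow\; \mu(s) = \mu(t)
```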
Conventional Knowledge Graph Completion (KGC) assumes that all test entities appear during training. In real-world scenarios, however, knowledge graphs (KGs) evolve rapidly, with out-of-knowledge-graph (OOKG) entities added frequently, and these entities need to be represented efficiently. Most existing Knowledge Graph Embedding (KGE) methods cannot represent OOKG entities without costly retraining on the whole KG. To enhance efficiency, we propose a simple and effective method that inductively represents OOKG entities by their optimal estimation under translational assumptions. Given pretrained embeddings of the in-knowledge-graph (IKG) entities, our method needs no additional learning. Experimental results show that our method outperforms the state-of-the-art methods with higher efficiency on two KGC tasks with OOKG entities.
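As a minimal illustration of estimation under a translational assumption (TransE-style, h + r ≈ t), a new entity's embedding can be taken as the average of closed-form "votes" from its known triples. The entity names, vectors, and the simple averaging rule below are our own toy assumptions, not the authors' exact method:

```python
import numpy as np

# Hypothetical pretrained IKG embeddings (entities and relations);
# all names and vectors here are made up for illustration.
ent = {"paris": np.array([1.0, 0.0]), "france": np.array([0.0, 1.0])}
rel = {"capital_of": np.array([-1.0, 1.0]), "located_in": np.array([-0.5, 0.5])}

def estimate_ookg(triples, ent, rel):
    """Estimate a new entity under TransE (h + r ≈ t), with no training:
    average t - r over triples where it is the head, and h + r where tail."""
    votes = []
    for h, r, t in triples:
        if h == "NEW":            # new entity is the head: NEW ≈ t - r
            votes.append(ent[t] - rel[r])
        else:                     # new entity is the tail: NEW ≈ h + r
            votes.append(ent[h] + rel[r])
    return np.mean(votes, axis=0)

# a new entity connected to the IKG by two (hypothetical) triples
triples = [("NEW", "capital_of", "france"), ("NEW", "located_in", "france")]
e_new = estimate_ookg(triples, ent, rel)
# e_new is [0.75, 0.25]: the average of [1, 0] and [0.5, 0.5]
```

Because the estimate is closed-form, no gradient steps or retraining on the rest of the KG are required.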
We explore a free-space polarization modulator in which a variable phase delay introduced between the right- and left-handed circular polarization components rotates the linear polarization of the outgoing beam relative to that of the incoming beam. In this device, the polarization states are separated by a circular polarizer consisting of a quarter-wave plate in combination with a wire grid. A movable mirror is positioned behind and parallel to the circular polarizer. As the polarizer-mirror distance is varied, an incident linear polarization is rotated through an angle proportional to the introduced phase delay. We demonstrate a prototype device that modulates Stokes Q and U over a 20% bandwidth, from 77 to 94 GHz.
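The modulation principle can be sketched in Stokes parameters: a differential phase phi between the circular components rotates the linear polarization angle by phi/2, which mixes Q and U. This is a generic textbook idealization, not the authors' instrument model:

```python
import numpy as np

# Idealized sketch: a differential phase phi between RCP and LCP rotates
# the linear polarization angle by phi/2, so Stokes Q and U mix as a
# rotation by phi. For a mirror displacement d at normal incidence the
# delay is roughly phi ≈ 4*pi*d/lambda (a standard idealization, not a
# measured parameter of this device).

def modulate(Q, U, phi):
    """Return (Q', U') after a differential circular phase delay phi."""
    Qp = Q * np.cos(phi) - U * np.sin(phi)
    Up = Q * np.sin(phi) + U * np.cos(phi)
    return Qp, Up

# pure +Q in; a differential phase of pi/2 transfers it entirely to U
Qp, Up = modulate(1.0, 0.0, np.pi / 2.0)
```

Sweeping the mirror (and hence phi) therefore modulates incoming power between Q and U, which is what the prototype demonstrates over 77-94 GHz.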
We consider curvature-dependent variational models for image regularization, such as Euler's elastica. These models are known to provide strong priors for the continuity of edges and hence have important applications in shape and image processing. We consider a lifted convex representation of these models in the roto-translation space: in this space, curvature-dependent variational energies are represented by means of a convex functional defined on divergence-free vector fields. The line energies are then easily extended to any scalar function, yielding a natural generalization of the total variation to the roto-translation space. As our main result, we show that the proposed convex representation is tight for characteristic functions of smooth shapes. We also discuss cases where this representation fails. For the numerical solution, we propose a staggered-grid discretization based on an averaged Raviart-Thomas finite element approximation. This discretization is consistent, up to minor details, with the underlying continuous model. The resulting non-smooth convex optimization problem is solved using a first-order primal-dual algorithm. We illustrate the results of our numerical algorithm on various problems from shape and image processing.
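The roto-translation functional itself is involved, but the first-order primal-dual scheme mentioned can be illustrated on a much simpler model: Chambolle-Pock iterations for plain total-variation (ROF) denoising. The grid size, weights, and iteration count below are arbitrary assumptions chosen only for illustration:

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]
    d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]
    d[:, 1:] -= py[:, :-1]
    return d

def tv_denoise(f, lam=2.0, n_iter=100):
    """Chambolle-Pock for min_u ||grad u||_1 + (lam/2) * ||u - f||^2."""
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau * sigma * ||grad||^2 <= 1
    u = f.copy()
    ubar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual ascent, then projection onto the unit ball (isotropic TV)
        gx, gy = grad(ubar)
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm
        py /= norm
        # primal descent: proximal step of the quadratic fidelity term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # over-relaxation of the primal variable
        ubar = 2.0 * u - u_old
    return u

# toy demo: denoise a noisy vertical step edge
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = tv_denoise(noisy)
```

The paper's problem replaces the TV term with the lifted roto-translation functional on divergence-free fields, but the primal-dual structure of the iteration is analogous.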
The Word-in-Context (WiC) task has attracted considerable attention in the NLP community, as demonstrated by the popularity of the recent MCL-WiC SemEval task. Word Sense Disambiguation (WSD) systems and lexical resources have been used for the WiC task, as well as for WiC dataset construction. Target Sense Verification (TSV) is another task related to both WiC and WSD. We aim to establish the exact relationship between WiC, TSV, and WSD. We demonstrate that these semantic classification problems can be pairwise reduced to each other, and are hence theoretically equivalent. We analyze the existing WiC datasets to validate this equivalence hypothesis. We conclude that our understanding of semantic tasks can be deepened through the application of tools from theoretical computer science. Our findings also suggest that more efficient and simpler methods for one of these tasks could be successfully applied in the other two.
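One direction of such a reduction is easy to sketch: a WiC instance (do two usages of a word carry the same meaning?) can be answered by two WSD calls whose predicted senses are compared. The toy sense inventory and contexts below are our own hypothetical examples, not the paper's construction or data:

```python
# Sketch of a WiC-to-WSD reduction: two WSD calls, one equality check.
# The sense inventory and the trivial keyword "disambiguator" are
# hypothetical stand-ins for a real WSD system.

def wsd(word, context):
    """Toy WSD oracle over a made-up sense inventory."""
    senses = {
        ("bank", "river"): "bank%shore",
        ("bank", "money"): "bank%institution",
    }
    for (w, cue), sense in senses.items():
        if w == word and cue in context:
            return sense
    return None

def wic(word, context1, context2):
    """WiC via WSD: same meaning iff both contexts receive the same sense."""
    return wsd(word, context1) == wsd(word, context2)

assert wic("bank", "she sat by the river bank", "the bank of the river") is True
assert wic("bank", "the river bank", "a loan from the money bank") is False
```

The reductions in the other directions (e.g. answering TSV or WSD with a WiC classifier) follow the same pattern of recasting one task's instance as instances of another.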