
Bio-inspired Structure Identification in Language Embeddings

Added by Hongwei Zhou
Publication date: 2020
Language: English





Word embeddings are a popular way to improve downstream performance in contemporary language modeling. However, the underlying geometric structure of the embedding space is not well understood. We present a series of explorations using bio-inspired methodology to traverse and visualize word embeddings, demonstrating evidence of discernible structure. Moreover, our model produces word similarity rankings that are plausible yet very different from those of common similarity metrics, namely cosine similarity and Euclidean distance. We show that our bio-inspired model can be used to investigate how different word embedding techniques result in different semantic outputs, which can emphasize or obscure particular interpretations in textual data.
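The two baseline metrics named in the abstract can themselves disagree on neighbor rankings, which is part of why an independently derived ranking is informative. The sketch below uses hypothetical toy vectors (not the paper's embeddings or its bio-inspired model) to show how cosine similarity and Euclidean distance can order the same candidates differently:

```python
import numpy as np

def cosine_similarity(a, b):
    # Angle-based similarity; ignores vector magnitude.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    # Straight-line distance; sensitive to magnitude.
    return float(np.linalg.norm(a - b))

def rank_neighbors(query, vocab, metric, higher_is_closer):
    # Rank candidate words by the given metric relative to the query.
    scored = [(w, metric(query, v)) for w, v in vocab.items()]
    return [w for w, _ in sorted(scored, key=lambda p: p[1],
                                 reverse=higher_is_closer)]

# Toy embeddings: "b" points the same way as the query but is much
# longer; "c" is nearby in space but points in another direction.
query = np.array([1.0, 0.0])
vocab = {"b": np.array([10.0, 0.0]), "c": np.array([1.0, 1.0])}

by_cosine = rank_neighbors(query, vocab, cosine_similarity, True)
by_euclid = rank_neighbors(query, vocab, euclidean_distance, False)
# Cosine ranks "b" first (same direction); Euclidean ranks "c" first.
```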



Related research

Though language model text embeddings have revolutionized NLP research, their ability to capture high-level semantic information, such as relations between entities in text, is limited. In this paper, we propose a novel contrastive learning framework that trains sentence embeddings to encode the relations in a graph structure. Given a sentence (unstructured text) and its graph, we use contrastive learning to impose relation-related structure on the token-level representations of the sentence obtained with a CharacterBERT (El Boukkouri et al., 2020) model. The resulting relation-aware sentence embeddings achieve state-of-the-art results on the relation extraction task using only a simple KNN classifier, thereby demonstrating the success of the proposed method. Additional visualization by a t-SNE analysis shows the effectiveness of the learned representation space compared to baselines. Furthermore, we show that we can learn a different space for named entity recognition, again using a contrastive learning objective, and demonstrate how to successfully combine both representation spaces in an entity-relation task.
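The abstract does not give the loss it uses; as a rough illustration of the contrastive-learning idea it builds on, here is a minimal InfoNCE-style loss in NumPy, where each anchor embedding is pulled toward its own positive and pushed away from the other positives in the batch. The function name and the temperature value are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    # InfoNCE-style contrastive loss: each anchor should score its own
    # positive higher than every other positive in the batch.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # -log P(correct pair)

# Correctly matched pairs incur a lower loss than mismatched ones.
pairs = np.eye(4)
loss_aligned = info_nce_loss(pairs, pairs)
loss_shuffled = info_nce_loss(pairs, pairs[::-1])
```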
Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but fall short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results.
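The abstract does not spell out its canonicalization scheme; one plausible minimal version, sketched below as an assumption rather than the paper's method, is to map every numeric literal in the text to a uniform scientific-notation token so that surface variants such as `7,000` and `7000` become identical:

```python
import re

def canonicalize_numbers(text):
    # Replace each numeric literal with a canonical scientific-notation
    # form, so differently formatted numbers map to the same token.
    def canon(match):
        value = float(match.group(0).replace(",", ""))
        return f"{value:.1e}"
    # Matches integers with optional thousands separators and decimals.
    return re.sub(r"\d[\d,]*(?:\.\d+)?", canon, text)

print(canonicalize_numbers("The whale weighs 7,000 kg"))
# prints "The whale weighs 7.0e+03 kg"
```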
Multilingual pretrained language models (MPLMs) exhibit multilinguality and are well suited for transfer across languages. Most MPLMs are trained in an unsupervised fashion, and the relationship between their objective and multilinguality is unclear. More specifically, the question arises whether MPLM representations are language-agnostic or whether they simply interleave well with learned task-prediction heads. In this work, we locate language-specific information in MPLMs and identify its dimensionality and the layers where this information occurs. We show that language-specific information is scattered across many dimensions, which can be projected into a linear subspace. Our study contributes to a better understanding of MPLM representations, going beyond treating them as unanalyzable blobs of information.
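As a hedged sketch of what projecting language-specific information into a linear subspace can look like in practice: the function below estimates a language subspace from the principal directions of per-language mean embeddings and projects it out. The construction is an illustrative assumption on synthetic data, not the paper's actual method:

```python
import numpy as np

def remove_language_subspace(embeddings, labels, k):
    # Estimate a language-identifying subspace as the top-k principal
    # directions of the (centered) per-language mean embeddings, then
    # project that subspace out of every representation.
    means = np.stack([embeddings[labels == l].mean(axis=0)
                      for l in np.unique(labels)])
    means -= means.mean(axis=0)
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    basis = vt[:k]                       # (k, dim) language directions
    return embeddings - embeddings @ basis.T @ basis

rng = np.random.default_rng(0)
# Toy bilingual data: identical content, shifted by a language offset.
content = rng.normal(size=(100, 8))
offset = np.zeros(8)
offset[0] = 5.0
emb = np.concatenate([content + offset, content - offset])
labels = np.array([0] * 100 + [1] * 100)

cleaned = remove_language_subspace(emb, labels, k=1)
# After projection, the per-language mean embeddings coincide.
```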
Deep convolutional neural networks (DCNNs) have revolutionized computer vision and are often advocated as good models of the human visual system. However, DCNNs currently have many shortcomings that preclude them as a model of human vision. One example is adversarial attacks, where adding small amounts of noise to an image containing an object can lead to strong misclassification of that object, even though the noise is often invisible to humans. If vulnerability to adversarial noise cannot be fixed, DCNNs cannot be taken as serious models of human vision. Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks. However, it is not fully clear whether human-vision-inspired components increase robustness, because performance evaluations of these novel components in DCNNs are often inconclusive. We propose a set of criteria for proper evaluation and analyze different models according to these criteria. We finally sketch future efforts to make DCNNs one step closer to a model of human vision.
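For concreteness, the kind of small-noise misclassification described above can be illustrated with the well-known Fast Gradient Sign Method (FGSM) on a toy linear classifier. The tiny logistic-regression setup below is an assumption chosen for illustration, not an experiment from the paper:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon):
    # FGSM on a logistic-regression classifier: step each input
    # dimension by epsilon in the direction that increases the
    # cross-entropy loss for the true binary label y (0 or 1).
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(y = 1)
    grad_x = (p - y) * w                      # d(cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)

# A correctly classified point is flipped by a small perturbation.
w = np.ones(4)
b = 0.0
x = np.full(4, 0.1)                 # true label y = 1, classified 1
p_before = 1.0 / (1.0 + np.exp(-(w @ x + b)))
x_adv = fgsm_perturb(x, w, b, y=1, epsilon=0.2)
p_after = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
```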
Though sunlight is by far the most abundant renewable energy source available to humanity, its dilute and variable nature has kept efficient ways to collect, store, and distribute this energy tantalisingly out of reach. Turning the incoherent energy supply of sunlight into a coherent laser beam would overcome several practical limitations inherent in using sunlight as a source of clean energy: laser beams travel nearly losslessly over large distances, and they are effective at driving chemical reactions which convert sunlight into chemical energy. Here we propose a bio-inspired blueprint for a novel type of laser with the aim of upgrading unconcentrated natural sunlight into a coherent laser beam. Our proposed design constitutes an improvement of several orders of magnitude over existing comparable technologies: state-of-the-art solar-pumped lasers operate above 1000 suns (corresponding to 1000 times the natural sunlight power). In order to achieve lasing with the extremely dilute power provided by sunlight, we propose a laser medium composed of molecular aggregates inspired by the architecture of photosynthetic complexes. Such complexes, by exploiting a highly symmetric arrangement of molecules organized in a hierarchy of energy scales, exhibit a very large internal efficiency in harvesting photons from a power source as dilute as natural sunlight. Specifically, we consider substituting the reaction center of photosynthetic complexes in purple bacteria with a suitably engineered molecular dimer composed of two strongly coupled chromophores. We show that if pumped by the surrounding photosynthetic complex, which efficiently collects and concentrates solar energy, the core dimer structure can reach population inversion and cross the lasing threshold under natural sunlight. The design principles proposed here will also pave the way for developing other bio-inspired quantum devices.
