
Mixing Metaphors

Published by Mark Lee
Publication date: 1999
Research field: Informatics Engineering
Paper language: English





Mixed metaphors have been neglected in recent metaphor research. This paper suggests that such neglect is short-sighted. Though mixing is a more complex phenomenon than straight metaphors, the same kinds of reasoning and knowledge structures are required. This paper provides an analysis of both parallel and serial mixed metaphors within the framework of an AI system which is already capable of reasoning about straight metaphorical manifestations and argues that the processes underlying mixing are central to metaphorical meaning. Therefore, any theory of metaphors must be able to account for mixing.


Read also

Metaphorical expressions are difficult linguistic phenomena, challenging diverse Natural Language Processing tasks. Previous works showed that paraphrasing a metaphor as its literal counterpart can help machines better process metaphors on downstream tasks. In this paper, we interpret metaphors with BERT and WordNet hypernyms and synonyms in an unsupervised manner, showing that our method significantly outperforms the state-of-the-art baseline. We also demonstrate that our method can help a machine translation system improve its accuracy in translating English metaphors to 8 target languages.
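
As a rough illustration of the recipe described above (a minimal sketch, not necessarily the authors' exact pipeline), the snippet below masks the metaphorically used word, asks a BERT fill-mask model for literal candidates, and keeps only candidates that WordNet lists among the target word's synonyms or hypernym lemmas. The model name, example sentence, and top-k cutoff are illustrative assumptions.

import nltk
from nltk.corpus import wordnet as wn
from transformers import pipeline

nltk.download("wordnet", quiet=True)

# A generic BERT fill-mask model, used here purely for illustration.
fill = pipeline("fill-mask", model="bert-base-uncased")

def wordnet_neighbours(word):
    # Collect synonym and hypernym lemmas of the target word from WordNet.
    lemmas = set()
    for synset in wn.synsets(word):
        lemmas.update(l.name() for l in synset.lemmas())
        for hyper in synset.hypernyms():
            lemmas.update(l.name() for l in hyper.lemmas())
    return lemmas

def interpret_metaphor(sentence, target, top_k=20):
    # Mask the metaphorical word, let BERT propose replacements, then keep
    # only replacements that WordNet relates to the original word.
    masked = sentence.replace(target, fill.tokenizer.mask_token, 1)
    candidates = [p["token_str"].strip() for p in fill(masked, top_k=top_k)]
    allowed = wordnet_neighbours(target)
    return [c for c in candidates if c in allowed and c != target]

print(interpret_metaphor("she attacked every weak point in my argument", "attacked"))
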
Samson Tan, Shafiq Joty (2021)
Multilingual models have demonstrated impressive cross-lingual transfer performance. However, test sets like XNLI are monolingual at the example level. In multilingual communities, it is common for polyglots to code-mix when conversing with each other. Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the limit. The former uses bilingual dictionaries to propose perturbations and translations of the clean example for sense disambiguation. The latter directly aligns the clean example with its translations before extracting phrases as perturbations. Our phrase-level attack has a success rate of 89.75% against XLM-R-large, bringing its average accuracy from 79.85 down to 8.18 on XNLI. Finally, we propose an efficient adversarial training scheme that trains in the same number of steps as the original model and show that it improves model accuracy.
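
The word-level attack can be pictured with a minimal sketch, assuming a toy English-Spanish dictionary of my own invention; the real attack additionally searches over candidate perturbations for the one that most degrades the victim model, which is omitted here.

import random

# Toy bilingual dictionary; illustrative only, not the resource used in the paper.
BILINGUAL = {"the": "el", "cat": "gato", "sleeps": "duerme", "house": "casa"}

def code_mix(sentence, swap_prob=0.5, seed=0):
    # Randomly replace dictionary words with their translations,
    # producing one code-mixed adversarial candidate.
    rng = random.Random(seed)
    words = sentence.lower().split()
    mixed = [BILINGUAL[w] if w in BILINGUAL and rng.random() < swap_prob else w
             for w in words]
    return " ".join(mixed)

# Which words flip depends on swap_prob and the seed.
print(code_mix("the cat sleeps in the house"))
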
This work addresses the problem of finding metaphors for interactive systems and for systems based on Virtual Reality (VR) environments. An analysis of magic fairy tales as a source of metaphors for interfaces and virtual reality is offered, and some results of a design process based on magic metaphors are considered.
Teaching about energy in interdisciplinary settings that emphasize coherence among physics, chemistry, and biology leads to a more central role for chemical bond energy. We argue that an interdisciplinary approach to chemical energy leads to modeling chemical bonds in terms of negative energy. While recent work on ontological metaphors for energy has emphasized the affordances of the substance ontology, this ontology is problematic in the context of negative energy. Instead, we apply a dynamic ontologies perspective to argue that blending the substance and location ontologies for energy can be effective in reasoning about negative energy in the context of reasoning about chemical bonds. We present data from an introductory physics for the life sciences (IPLS) course in which both experts and students successfully use this blended ontology. Blending these ontologies is most successful when the substance and location ontologies are combined such that each is strategically utilized in reasoning about particular aspects of energetic processes.
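
A short worked illustration of the negative-energy point (using a textbook value, not a number taken from the paper): taking the separated atoms as the zero of energy, forming a bond releases energy, so the bound molecule sits below zero.

\[
E_{\text{bond}} = E_{\text{molecule}} - E_{\text{separated atoms}} < 0,
\qquad \text{e.g. } \mathrm{H_2}: \; E_{\text{bond}} \approx -4.5\ \mathrm{eV} \approx -436\ \mathrm{kJ/mol}.
\]

Loosely, on the blended view, the released 4.5 eV can be tracked like a substance leaving the system, while the bound state occupies a location 4.5 eV below the separated atoms.
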
We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that mix input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the efficient Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
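
A minimal numpy sketch of the Fourier mixing sublayer described above (shapes and the toy usage are assumptions for illustration; the full model's feed-forward sublayers, residual connections, and layer norm are omitted):

import numpy as np

def fourier_mixing(x):
    # x: (seq_len, d_model) array of token representations.
    # FNet-style mixing: a DFT along the hidden dimension, then a DFT
    # along the sequence dimension, keeping only the real part.
    return np.fft.fft(np.fft.fft(x, axis=-1), axis=0).real

# Toy usage: 4 tokens with 8-dimensional embeddings.
tokens = np.random.randn(4, 8)
mixed = fourier_mixing(tokens)
print(mixed.shape)  # (4, 8)

Because the transform has no learned parameters, this mixing step itself trains nothing; the representational work is left to the standard feed-forward layers, which matches the speed/accuracy trade-off reported above.
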
