
An unsupervised framework for tracing textual sources of moral change


Publication date: 2021
Language: English





Morality plays an important role in social well-being, but people's moral perception is not stable and changes over time. Recent advances in natural language processing have shown that text is an effective medium for informing moral change, but no attempt has been made to quantify the origins of these changes. We present a novel unsupervised framework for tracing textual sources of moral change toward entities through time. We characterize moral change with probabilistic topical distributions and infer the source text that exerts prominent influence on the moral time course. We evaluate our framework on a diverse set of data ranging from social media to news articles. We show that our framework not only captures fine-grained human moral judgments, but also identifies coherent source topics of moral change triggered by historical events. We apply our methodology to analyze the news in the COVID-19 pandemic and demonstrate its utility in identifying sources of moral change in high-impact and real-time social events.
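The abstract does not spell out the underlying models, so the following Python sketch is only a rough illustration of the kind of pipeline described: topical distributions (here, gensim LDA) over monthly documents about an entity, a moral time course from a toy lexicon-based moral_score(), and topics ranked by how strongly their prevalence tracks that time course. The lexicon, the correlation-based ranking, and all names below are illustrative assumptions, not the authors' method.

# Illustrative sketch only; the paper's actual probabilistic models are not
# given in this abstract. Assumes: gensim LDA topics, a toy moral lexicon,
# and documents pre-grouped into monthly buckets mentioning one entity.
import numpy as np
from gensim import corpora, models

MORAL_LEXICON = {"care": 1.0, "protect": 1.0, "harm": -1.0, "cheat": -1.0}  # toy

def moral_score(tokens):
    # Mean polarity of lexicon words found in one tokenized document.
    hits = [MORAL_LEXICON[t] for t in tokens if t in MORAL_LEXICON]
    return float(np.mean(hits)) if hits else 0.0

def rank_source_topics(monthly_docs, num_topics=10):
    # monthly_docs: list of months, each month a list of tokenized documents.
    all_docs = [d for month in monthly_docs for d in month]
    dictionary = corpora.Dictionary(all_docs)
    bows = [dictionary.doc2bow(d) for d in all_docs]
    lda = models.LdaModel(bows, id2word=dictionary, num_topics=num_topics)

    # Moral time course: mean moral score per month.
    course = np.array([np.mean([moral_score(d) for d in m]) for m in monthly_docs])

    # Mean topic prevalence per month.
    prevalence = np.zeros((len(monthly_docs), num_topics))
    i = 0
    for m, month in enumerate(monthly_docs):
        for _ in month:
            for k, p in lda.get_document_topics(bows[i], minimum_probability=0.0):
                prevalence[m, k] += p
            i += 1
        prevalence[m] /= max(len(month), 1)

    # Crude "influence" proxy: |correlation| of topic prevalence with the course.
    influence = [abs(np.corrcoef(prevalence[:, k], course)[0, 1]) for k in range(num_topics)]
    return sorted(range(num_topics), key=lambda k: influence[k], reverse=True), lda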



Related research

In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks for NLP. Existing solutions in this literature either assume a trusted aggregator or require heavyweight cryptographic primitives, which significantly degrades performance. Moreover, many existing secure FL designs work only under the restrictive assumption that no clients drop out of the training protocol. To tackle these problems, we propose SEFL, a secure and efficient federated learning framework that (1) eliminates the need for trusted entities; (2) achieves similar or even better model accuracy than existing FL designs; and (3) is resilient to client dropouts.
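The cryptographic construction behind SEFL is not described in this abstract. Purely to make the setting concrete, the sketch below is a plain federated-averaging loop with simulated client dropouts and no secure aggregation at all; local_update() is a hypothetical placeholder for client-side training.

# Plain FedAvg baseline with simulated dropouts; contains NO secure
# aggregation and is not SEFL. Each client holds (X, y) numpy arrays.
import random
import numpy as np

def local_update(w, client_data, lr=0.1):
    # Hypothetical placeholder: one gradient step on a linear least-squares model.
    X, y = client_data
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_averaging(clients, dim, rounds=10, dropout_rate=0.2, seed=0):
    rng = random.Random(seed)
    w = np.zeros(dim)
    for _ in range(rounds):
        survivors = [c for c in clients if rng.random() > dropout_rate]
        if not survivors:              # every client dropped out this round
            continue
        updates = [local_update(w, c) for c in survivors]
        w = np.mean(updates, axis=0)   # unweighted average of surviving clients
    return w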
Codifying commonsense knowledge in machines is a longstanding goal of artificial intelligence. Recently, much progress toward this goal has been made with automatic knowledge base (KB) construction techniques. However, such techniques focus primarily on the acquisition of positive (true) KB statements, even though negative (false) statements are often also important for discriminative reasoning over commonsense KBs. As a first step toward the latter, this paper proposes NegatER, a framework that ranks potential negatives in commonsense KBs using a contextual language model (LM). Importantly, as most KBs do not contain negatives, NegatER relies only on the positive knowledge in the LM and does not require ground-truth negative examples. Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, and informative, leading to statistically significant accuracy improvements in a challenging KB completion task and confirming that the positive knowledge in LMs can be "re-purposed" to generate negative knowledge.
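NegatER's exact scoring procedure is not given in this abstract; as a loose illustration of ranking candidate statements with a contextual LM, the sketch below computes a masked-LM pseudo-log-likelihood with the Hugging Face transformers library and sorts candidates by it. The checkpoint, the scoring function, and the ranking criterion are assumptions for illustration, not NegatER itself.

# Illustrative only: ranks candidate negative statements by a BERT
# pseudo-log-likelihood (higher = more fluent under the LM). Assumes the
# Hugging Face transformers library; this is not NegatER's actual scorer.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):        # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def rank_candidates(candidates):
    return sorted(candidates, key=pseudo_log_likelihood, reverse=True)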
An increasingly common requirement in distributed network environments is the need to distribute security mechanisms across several network components. This includes both cryptographic key distribution and cryptographic computation. Most proposed security mechanisms are based on threshold cryptography, which allows a cryptographic computation to be shared amongst network components in such a way that a threshold of active components is required for the security operation to succeed. Although many different techniques have been proposed, the practical issues that determine both which technique is selected for implementation and how it is implemented are often glossed over. In this paper we therefore establish a new framework for network security architects to apply when considering the adoption of such mechanisms. This framework identifies the critical design decisions that need to be taken into account and is intended to aid both design and implementation. As part of this framework we propose a taxonomy of management models and application environments. We also demonstrate the utility of the framework by applying it to a VPN environment.
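Threshold cryptography is the one concrete primitive named above; as background only, the following toy (t, n) Shamir secret-sharing sketch shows the threshold idea (any t shares reconstruct the secret, fewer reveal nothing). It is unrelated to the paper's framework or taxonomy and is not production code.

# Toy (t, n) Shamir secret sharing over a prime field, illustrating the
# threshold property only. Not constant-time, not production-grade.
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def split_secret(secret, n, t, rng=random.SystemRandom()):
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 (requires Python 3.8+ for modular inverse).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split_secret(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789 and reconstruct(shares[2:]) == 123456789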
This research attempts to shed light on the issue of rapid, uncontrolled population growth, particularly from the point of view of Robert Malthus, one of the population theorists who left their mark in this field. The study addresses several key aspects. First, the reasons behind population growth, such as migration and lower mortality due to improved health care, attention to women's reproductive health, and the availability of medication. Second, the relationship between population increase and the food problem from Malthus's point of view: he holds that there is a direct relationship between the two variables, so that the larger the population, the worse the food problem becomes. Third, the main effects that unbalanced population growth may have on the environment, such as continued logging, urban expansion, growing demand for fresh drinking water, pollution of air, water, and soil, and the inability to absorb waste; and, on the social side, poverty, unemployment, and declining living standards. Fourth, the most prominent solutions Malthus proposed for the population problem, including moral restraints and natural checks. Fifth, a review of other positions on the population issue, such as those of Thomas Sadler, James Stewart, Herbert Spencer, and Karl Marx, and the extent to which they intersect with or differ from Malthus's theory.
The metrics standardly used to evaluate Natural Language Generation (NLG) models, such as BLEU or METEOR, fail to provide information on which linguistic factors impact performance. Focusing on Surface Realization (SR), the task of converting an unordered dependency tree into a well-formed sentence, we propose a framework for error analysis which permits identifying which features of the input affect the models' results. This framework consists of two main components: (i) correlation analyses between a wide range of syntactic metrics and standard performance metrics, and (ii) a set of techniques to automatically identify syntactic constructs that often co-occur with low performance scores. We demonstrate the advantages of our framework by performing error analysis on the results of 174 system runs submitted to the Multilingual SR shared tasks; we show that dependency edge accuracy correlates with automatic metrics, thereby providing a more interpretable basis for evaluation; and we suggest ways in which our framework could be used to improve models and data. The framework is available in the form of a toolkit which can be used both by campaign organizers, to provide detailed, linguistically interpretable feedback on the state of the art in multilingual SR, and by individual researchers, to improve models and datasets.
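Component (i) of that framework, correlating syntactic metrics with performance metrics, might look roughly like the sketch below, assuming per-sentence syntactic features (here, dependency-tree depth) and per-sentence scores are already computed; the toolkit's real metrics and interface are not shown in this abstract.

# Minimal sketch of component (i): correlate one syntactic metric (e.g.
# dependency-tree depth per input) with per-sentence performance scores.
# Assumes both lists are precomputed; illustrative names only.
from scipy.stats import pearsonr, spearmanr

def correlate_metrics(tree_depths, sentence_scores):
    pearson_r, pearson_p = pearsonr(tree_depths, sentence_scores)
    spearman_r, spearman_p = spearmanr(tree_depths, sentence_scores)
    return {"pearson": (pearson_r, pearson_p), "spearman": (spearman_r, spearman_p)}

# Toy data where deeper trees tend to receive lower scores.
depths = [2, 3, 4, 5, 6, 7]
scores = [0.81, 0.78, 0.74, 0.69, 0.66, 0.60]
print(correlate_metrics(depths, scores))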
