
TICO-19: the Translation Initiative for Covid-19

Publication date: 2020
Research language: English





The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COVID-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resource pivot languages, the team is targeting 26 lesser-resourced languages, in particular languages of Africa, South Asia and South-East Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.
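Since the abstract mentions distributing the data as translation memories, a minimal sketch of how aligned segment pairs could be pulled out of a TMX file may be useful. TMX is an XML format in which each translation unit (`tu`) holds per-language variants (`tuv`) with an `xml:lang` attribute; the sample content and language codes below are illustrative, not taken from the actual TICO-19 release.

```python
# Minimal sketch: extracting aligned segment pairs from TMX data.
# The sample document and language codes are invented for illustration.
import xml.etree.ElementTree as ET

# xml:lang is resolved to the reserved XML namespace by ElementTree.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx_pairs(tmx_text, src_lang, tgt_lang):
    """Return (source, target) segment pairs for the requested language pair."""
    root = ET.fromstring(tmx_text)
    pairs = []
    for tu in root.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(XML_LANG)
            seg = tuv.find("seg")
            if lang and seg is not None:
                segs[lang] = seg.text or ""
        # Keep only units that cover both requested languages.
        if src_lang in segs and tgt_lang in segs:
            pairs.append((segs[src_lang], segs[tgt_lang]))
    return pairs

# Tiny in-memory example with one translation unit.
sample = """<tmx version="1.4"><body>
  <tu>
    <tuv xml:lang="en"><seg>Wash your hands.</seg></tuv>
    <tuv xml:lang="fr"><seg>Lavez-vous les mains.</seg></tuv>
  </tu>
</body></tmx>"""

print(read_tmx_pairs(sample, "en", "fr"))
```

Because every language pair is covered by the same translated data, the same reader works for any pairing in the set; only the two language codes change.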




Related research

Curated scientific databases play an important role in the scientific endeavour, and support is needed for the significant effort that goes into their creation and maintenance. This demonstration and case study illustrate how curation support has been developed in the Links cross-tier programming language, a functional, strongly typed language with language-integrated query and support for temporal databases. The chosen case study uses weekly released Covid-19 fatality figures from the Scottish government, which exhibit updates to previously released data. This data allows the capture and query of update provenance in our prototype. This demonstration will highlight the potential for language-integrated support for curation to simplify and streamline prototyping of web applications in support of scientific databases.
Every day, more people are becoming infected and dying from exposure to COVID-19. Some countries in Europe, like Spain, France, the UK and Italy, have suffered particularly badly from the virus. Others, such as Germany, appear to have coped extremely well. Both health professionals and the general public are keen to receive up-to-date information on the effects of the virus, as well as treatments that have proven to be effective. In cases where language is a barrier to access of pertinent information, machine translation (MT) may help people assimilate information published in different languages. Our MT systems trained on COVID-19 data are freely available for anyone to use to help translate information published in German, French, Italian and Spanish into English, as well as in the reverse direction.
In this study, we investigate the scientific research response from the early stages of the pandemic, and we review key findings on how the early warning systems developed in previous epidemics responded to contain the virus. The data records are analysed with established statistical tools, including RStudio, the Bibliometrix package, and the Web of Science data mining tool. We identified several distinct clusters, containing references to exercise, inflammation, smoking, obesity and many additional factors. From the analysis of COVID-19 and vaccines, we discovered that although the USA is leading in volume of scientific research on a COVID-19 vaccine, the three leading research institutions (Fudan, Melbourne, Oxford) are not based in the USA. Hence, it is difficult to predict which country will be first to produce a COVID-19 vaccine.
The massive spread of false information on social media has become a global risk, especially in a global pandemic situation like COVID-19. False information detection has thus become a surging research topic in recent months. The NLP4IF-2021 shared task on fighting the COVID-19 infodemic has been organised to strengthen research in false information detection, where participants are asked to predict seven different binary labels regarding false information in a tweet. The shared task has been organised in three languages: Arabic, Bulgarian and English. In this paper, we present our approach to tackling the task objective using transformers. Overall, our approach achieves a 0.707 mean F1 score in Arabic, a 0.578 mean F1 score in Bulgarian and a 0.864 mean F1 score in English, ranking 4th in all three languages.
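The abstract reports mean F1 over seven binary questions, so a short sketch of that metric may clarify how such systems are scored. The question names and the example gold/predicted labels below are invented for illustration; they are not the shared task's actual data.

```python
# Hedged sketch: mean F1 over per-question binary labels, as used to
# score multi-label infodemic classification. Example data is invented.

def f1(gold, pred):
    """Binary F1 for parallel 0/1 label lists (0.0 when undefined)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if 2 * tp + fp + fn == 0:
        return 0.0
    return 2 * tp / (2 * tp + fp + fn)

def mean_f1(gold_by_q, pred_by_q):
    """Average the per-question binary F1 scores."""
    scores = [f1(gold_by_q[q], pred_by_q[q]) for q in gold_by_q]
    return sum(scores) / len(scores)

# Two toy questions instead of the task's seven.
gold = {"q1": [1, 0, 1, 1], "q2": [0, 0, 1, 0]}
pred = {"q1": [1, 0, 0, 1], "q2": [0, 1, 1, 0]}
print(round(mean_f1(gold, pred), 3))  # → 0.733
```

Averaging the per-question scores weights each of the seven labels equally, regardless of how often each label is positive.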
Under the COVID-19 pandemic, people experiencing COVID-19-related symptoms or exposed to risk factors have a pressing need to consult doctors. Due to hospital closures, many consulting services have been moved online. Because of the shortage of medical professionals, many people cannot receive timely online consultations. To address this problem, we aim to develop a medical dialogue system that can provide COVID-19-related consultations. We collected two dialogue datasets -- CovidDialog -- (in English and Chinese respectively) containing conversations between doctors and patients about COVID-19. On these two datasets, we train several dialogue generation models based on Transformer, GPT, and BERT-GPT. Since the two COVID-19 dialogue datasets are small, carrying a high risk of overfitting, we leverage transfer learning to mitigate data deficiency. Specifically, we take Transformer, GPT, and BERT-GPT models pretrained on dialogue datasets and other large-scale texts, then fine-tune them on our CovidDialog tasks. We perform both automatic and human evaluation of responses generated by these models. The results show that the generated responses are promising in being doctor-like, relevant to the conversation history, and clinically informative. The data and code are available at https://github.com/UCSD-AI4H/COVID-Dialogue.