Supervised deep learning has achieved remarkable success in various applications. Successful machine learning applications, however, depend on the availability of a sufficiently large amount of data. In the absence of data from the target domain, representative data collection from multiple sources is often needed. However, a model trained on existing multi-source data may generalize poorly to the unseen target domain, a problem referred to as domain shift. In this paper, we explore the suitability of multi-source training data selection for tackling the domain shift challenge in the context of domain generalization. We also propose a microservice-oriented methodology for supporting this solution. We conduct our experimental study on the use case of building energy consumption prediction. Experimental results suggest that a minimal building description can improve cross-building generalization performance when used to select energy consumption data.
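The abstract does not detail how the building description drives data selection, so the following Python sketch is only one plausible reading: rank candidate source buildings by how similar their (hypothetical) numeric metadata descriptors are to the target building and keep the closest ones as training data. The function names, descriptor encoding, and Euclidean similarity criterion are assumptions for illustration, not the paper's method.

```python
import numpy as np

def select_source_buildings(target_meta, source_metas, k=5):
    """Rank candidate source buildings by similarity of their minimal
    metadata descriptors to the target building and keep the top-k.

    target_meta  : 1-D array of numeric building descriptors
                   (e.g. floor area, usage type encoded as a number).
    source_metas : dict mapping building id -> descriptor array.
    """
    scores = {
        bid: -np.linalg.norm(np.asarray(meta) - np.asarray(target_meta))
        for bid, meta in source_metas.items()
    }
    # Higher score = more similar; keep the k closest buildings.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical usage: pick the 5 most similar buildings, then train the
# consumption model only on their historical readings.
selected = select_source_buildings(
    target_meta=[1200.0, 2],  # area (m^2), usage code
    source_metas={f"b{i}": np.random.rand(2) * [2000, 4] for i in range(20)},
)
print(selected)
```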
We introduce a framework for an AI-based medical consultation system with knowledge graph embedding and reinforcement learning components, together with its implementation. Our implementation of this framework leverages knowledge organized as a graph to perform diagnosis according to …
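The abstract does not specify which knowledge graph embedding model the framework uses; as a minimal, hypothetical illustration of scoring graph triples with embeddings, the sketch below uses a TransE-style score. The entity names, dimensionality, and random vectors are placeholders, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

# Toy embeddings for a few medical entities and one relation. In practice
# these vectors would be learned from the knowledge graph, not random.
entities = {name: rng.normal(size=DIM)
            for name in ["fever", "cough", "influenza", "common_cold"]}
relations = {"symptom_of": rng.normal(size=DIM)}

def transe_score(head, relation, tail):
    """TransE plausibility: smaller ||h + r - t|| means the triple
    (head, relation, tail) is more likely to hold."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate diagnoses for an observed symptom.
candidates = ["influenza", "common_cold"]
ranked = sorted(candidates,
                key=lambda d: transe_score("fever", "symptom_of", d),
                reverse=True)
print(ranked)
```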
Correctly detecting the semantic type of data columns is crucial for data science tasks such as automated data cleaning, schema matching, and data discovery. Existing data preparation and analysis systems rely on dictionary lookups and regular expressions …
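As a minimal illustration of the dictionary/regex-based approach the abstract attributes to existing systems, the sketch below matches column values against a tiny set of hand-written patterns. The type names, patterns, and the 80% match threshold are illustrative assumptions, not taken from any particular system.

```python
import re

# Illustrative rule set: map a semantic type to a regex that values of that
# type are expected to match. Real systems use far larger dictionaries and
# additional validation logic.
TYPE_PATTERNS = {
    "email":    re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "year":     re.compile(r"^(19|20)\d{2}$"),
    "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
}

def detect_column_type(values, threshold=0.8):
    """Return the semantic type whose pattern matches at least `threshold`
    of the column's values, or None if no type qualifies."""
    for type_name, pattern in TYPE_PATTERNS.items():
        hits = sum(bool(pattern.match(str(v))) for v in values)
        if values and hits / len(values) >= threshold:
            return type_name
    return None

print(detect_column_type(["alice@example.com", "bob@example.org"]))  # email
print(detect_column_type(["1999", "2004", "2021"]))                  # year
```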
Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow given speech utterances and text corpora. In contrast to traditional text question answering (QA) tasks, SCQA involves audio signal processing, passage …
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources …
Transformers that are pre-trained on multilingual corpora, such as mBERT and XLM-RoBERTa, have achieved impressive cross-lingual transfer capabilities. In the zero-shot transfer setting, only English training data is used, and the fine-tuned model is …
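A minimal sketch of the zero-shot transfer setting described here, assuming the Hugging Face transformers and PyTorch libraries: fine-tune a multilingual encoder on a few English examples, then apply it unchanged to non-English input. The sentiment task, labels, and handful of optimization steps are illustrative assumptions, not the paper's actual benchmark or training recipe.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Multilingual encoder; the classification head is fine-tuned on English only.
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny English "training set" (hypothetical sentiment labels).
texts = ["The movie was great.", "The movie was terrible."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few illustrative steps, not a full fine-tuning run
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation on a non-English input: no German data was seen in training.
model.eval()
with torch.no_grad():
    german = tokenizer("Der Film war großartig.", return_tensors="pt")
    pred = model(**german).logits.argmax(dim=-1)
print(pred.item())
```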