
LPar -- A Distributed Multi Agent platform for building Polyglot, Omni Channel and Industrial grade Natural Language Interfaces

Published by: Pranav Sharma
Publication date: 2020
Research field: Informatics Engineering
Paper language: English
Author: Pranav Sharma





The goal of serving and delighting customers in a personal, near-human-like manner is very high on the automation agendas of most enterprises. The last few years have seen huge progress in the Natural Language Processing domain, which has led to deployments of conversational agents in many enterprises. Most current industrial deployments tend to use monolithic single-agent designs that model the entire knowledge and skill of the domain. While this approach is one of the fastest to market, the monolithic design makes it very hard to scale beyond a point. There are also challenges in seamlessly leveraging, in a single solution, the many tools offered by sub-fields of Natural Language Processing and Information Retrieval. The sub-fields that can be leveraged to provide relevant information include Question Answering systems, Abstractive Summarization, Semantic Search, Knowledge Graphs, etc. Current deployments also tend to be very dependent on the underlying Conversational AI platform (open source or commercial), which is a challenge since this is a fast-evolving space and no single platform can be considered future-proof even over a medium term of 3-4 years. Lately, there has also been work on building multi-agent solutions that leverage the concept of a master agent. While this has shown promise, the master agent itself remains difficult to scale. To address these challenges, we introduce LPar, a distributed multi-agent platform for large-scale industrial deployment of polyglot, diverse and interoperable agents. The asynchronous design of LPar supports a dynamically expandable domain. We also introduce multiple strategies available in the LPar system to elect the most suitable agent to service a customer query.
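To make the agent-election idea concrete, here is a minimal sketch of one plausible strategy: route each query to whichever registered agent reports the highest confidence, with a threshold fallback. All names (Agent, elect_agent, the sample agents) are hypothetical illustrations, not the actual LPar API.

# Hypothetical sketch of confidence-based agent election; not the LPar API.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Agent:
    name: str
    # Returns a confidence score in [0, 1] for handling the given query.
    score: Callable[[str], float]


def elect_agent(agents: List[Agent], query: str,
                threshold: float = 0.5) -> Optional[Agent]:
    # Pick the agent with the highest self-reported confidence; return None
    # if no agent clears the threshold, so the caller can use a fallback.
    scored: List[Tuple[float, Agent]] = [(a.score(query), a) for a in agents]
    best_score, best_agent = max(scored, key=lambda pair: pair[0])
    return best_agent if best_score >= threshold else None


if __name__ == "__main__":
    agents = [
        Agent("faq_qa", lambda q: 0.9 if "refund" in q else 0.2),
        Agent("semantic_search", lambda q: 0.6),
    ]
    winner = elect_agent(agents, "how do I get a refund?")
    print(winner.name if winner else "fallback")  # -> faq_qa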




Read also

We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language. Our starting point is a language model that has been trained on generic, not task-specific language data. We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model, turning it into a task-conditional language model. We introduce a new way for combining the two types of learning based on the idea of reranking language model samples, and show that this method outperforms others in communicating with humans in a visual referential communication task. Finally, we present a taxonomy of different types of language drift that can occur alongside a set of measures to detect them.
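As a rough illustration of the reranking idea, the sketch below draws several samples from a generic language model and keeps the one a task-specific reward scores highest; the names (sample_candidates, task_reward) are placeholders, not the paper's code.

# Minimal sketch of reward-based reranking of language model samples.
# sample_candidates and task_reward are hypothetical stand-ins for the
# pretrained LM sampler and the self-play reward described in the abstract.
from typing import Callable, List


def rerank(candidates: List[str], reward: Callable[[str], float]) -> str:
    # Return the candidate utterance the task reward scores highest.
    return max(candidates, key=reward)


def task_conditional_utterance(sample_candidates: Callable[[int], List[str]],
                               task_reward: Callable[[str], float],
                               n: int = 10) -> str:
    # Draw n generic LM samples, then let the task reward pick among them,
    # turning an unconditional LM into a task-conditional one at decode time.
    return rerank(sample_candidates(n), task_reward)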
A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision! In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of negative results culminating in a positive one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge naturally, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.
In this paper, we examine and analyze the challenges associated with developing and introducing language technologies to low-resource language communities. While doing so, we bring to light the successes and failures of past work in this area, the challenges being faced in doing so, and what they have achieved. Throughout this paper, we take a problem-facing approach and describe essential factors which the success of such technologies hinges upon. We present the various aspects in a manner which clarifies and lays out the different tasks involved, which can aid organizations looking to make an impact in this area. We take the example of Gondi, an extremely low-resource Indian language, to reinforce and complement our discussion.
We have recently seen the emergence of several publicly available Natural Language Understanding (NLU) toolkits, which map user utterances to structured, but more abstract, Dialogue Act (DA) or Intent specifications, while making this process accessible to the lay developer. In this paper, we present the first wide coverage evaluation and comparison of some of the most popular NLU services, on a large, multi-domain (21 domains) dataset of 25K user utterances that we have collected and annotated with Intent and Entity Type specifications and which will be released as part of this submission. The results show that on Intent classification Watson significantly outperforms the other platforms, namely, Dialogflow, LUIS and Rasa; though these also perform well. Interestingly, on Entity Type recognition, Watson performs significantly worse due to its low Precision. Again, Dialogflow, LUIS and Rasa perform well on this task.
Increasing concerns and regulations about data privacy necessitate the study of privacy-preserving methods for natural language processing (NLP) applications. Federated learning (FL) provides promising methods for a large number of clients (i.e., personal devices or organizations) to collaboratively learn a shared global model to benefit all clients, while allowing users to keep their data locally. To facilitate FL research in NLP, we present FedNLP, a research platform for federated learning in NLP. FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling. We also implement an interface between Transformer language models (e.g., BERT) and FL methods (e.g., FedAvg, FedOpt, etc.) for distributed training. The evaluation protocol of this interface supports a comprehensive collection of non-IID partitioning strategies. Our preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets -- opening intriguing and exciting future research directions aimed at developing FL methods suited to NLP tasks.
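For readers unfamiliar with FedAvg, the sketch below shows its core weighted-averaging step, assuming PyTorch state dicts; the structure is illustrative only and is not FedNLP's actual interface.

# Minimal sketch of FedAvg aggregation: average client model weights,
# weighted by each client's local dataset size. Illustrative; not FedNLP's API.
from typing import Dict, List

import torch


def fed_avg(client_states: List[Dict[str, torch.Tensor]],
            client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    total = float(sum(client_sizes))
    global_state: Dict[str, torch.Tensor] = {}
    for key in client_states[0]:
        # Weighted sum of each parameter tensor across all clients.
        global_state[key] = sum(
            state[key] * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return global_state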