Legal texts routinely use concepts that are difficult to understand. Lawyers elaborate on the meaning of such concepts by, among other things, carefully investigating how they have been used in the past. Finding text snippets that mention a particular concept in a useful way is tedious, time-consuming, and hence expensive. We assembled a dataset of 26,959 sentences, drawn from legal case decisions, and labeled them in terms of their usefulness for explaining selected legal concepts. Using the dataset, we study the effectiveness of transformer models pre-trained on large language corpora at detecting which of the sentences are useful. In light of the models' predictions, we analyze various linguistic properties of the explanatory sentences as well as their relationship to the legal concept that needs to be explained. We show that the transformer-based models are capable of learning surprisingly sophisticated features and outperform prior approaches to the task.
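The setup described above amounts to fine-tuning a pre-trained transformer as a binary sentence classifier (useful vs. not useful for explaining a concept). The sketch below illustrates that general recipe with the Hugging Face `transformers` library; the model checkpoint, toy sentences, labels, and hyperparameters are illustrative assumptions, not the actual configuration or data used in the paper.

```python
# Minimal sketch: fine-tuning a pre-trained transformer to classify whether a
# sentence usefully explains a legal concept. Model name, example sentences,
# and hyperparameters are assumptions for illustration only.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class SentenceUsefulnessDataset(Dataset):
    """Wraps tokenized sentences and binary usefulness labels."""
    def __init__(self, sentences, labels, tokenizer, max_len=128):
        self.enc = tokenizer(sentences, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy examples standing in for the annotated case-decision sentences.
train_sentences = [
    "The court held that 'reasonable care' requires foresight of harm.",
    "The hearing was adjourned until the following Monday.",
]
train_labels = [1, 0]  # 1 = useful for explaining the concept, 0 = not

train_dataset = SentenceUsefulnessDataset(train_sentences, train_labels,
                                          tokenizer)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```

In practice the classifier would be trained on the full annotated corpus with a held-out split for evaluation; the paper's specific checkpoints and training details are not reproduced here.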