Causal inference is the process of capturing cause-effect relationships among variables. Most existing work focuses on structured data, while mining causal relationships among factors in unstructured data, such as text, has been less examined despite its great importance, especially in the legal domain. In this paper, we propose a novel Graph-based Causal Inference (GCI) framework, which builds causal graphs from fact descriptions without much human involvement and enables causal inference that helps legal practitioners make proper decisions. We evaluate the framework on a challenging similar-charge disambiguation task. Experimental results show that GCI can capture the nuances in fact descriptions that distinguish multiple confusing charges and provide explainable discrimination, especially in few-shot settings. We also observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.