Existing work on tabular representation-learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT. While this joint pretraining improves tasks involving paired tables and text (e.g., answering questions about tables), we show that it underperforms on tasks that operate over tables without any associated text (e.g., populating missing cells). We devise a simple pretraining objective (corrupt cell detection) that learns exclusively from tabular data and reaches the state-of-the-art on a suite of table-based prediction tasks. Unlike competing approaches, our model (TABBIE) provides embeddings of all table substructures (cells, rows, and columns), and it also requires far less compute to train. A qualitative analysis of our model's learned cell, column, and row representations shows that it understands complex table semantics and numerical trends.
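The corrupt cell detection objective can be illustrated with a toy sketch: randomly replace a fraction of a table's cells with cells drawn from elsewhere in the table, and label each cell as original or corrupted. The function name, corruption rate, and sampling strategy below are illustrative assumptions, not the authors' implementation.

```python
import random

def corrupt_cells(table, corruption_rate=0.15, rng=None):
    """Toy sketch of a corrupt-cell-detection objective: replace a
    fraction of cells with cells sampled from elsewhere in the table,
    and return per-cell binary labels (1 = corrupted, 0 = original).
    The model is then trained to predict these labels for each cell."""
    rng = rng or random.Random(0)
    n_rows, n_cols = len(table), len(table[0])
    corrupted = [row[:] for row in table]
    labels = [[0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            if rng.random() < corruption_rate:
                # sample a replacement cell from a random position
                ri, rj = rng.randrange(n_rows), rng.randrange(n_cols)
                if table[ri][rj] != table[i][j]:
                    corrupted[i][j] = table[ri][rj]
                    labels[i][j] = 1
    return corrupted, labels
```

Because the objective needs only the table itself to generate supervision, it can be applied to large corpora of tables with no accompanying text.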