To effectively deploy robots in working environments and assist humans, it is essential to develop and evaluate how visual grounding (VG) affects machine performance on occluded objects. However, current VG works are limited in working environments, such as offices and warehouses, where objects are usually occluded due to space utilization issues. In our work, we propose a novel OCID-Ref dataset featuring a referring expression segmentation task with referring expressions of occluded objects. OCID-Ref consists of 305,694 referring expressions from 2,300 scenes, providing both RGB image and point cloud inputs. We argue that it is crucial to take advantage of both 2D and 3D signals to resolve challenging occlusion issues. Our experimental results demonstrate the effectiveness of aggregating 2D and 3D signals, but referring to occluded objects remains challenging for modern visual grounding systems. OCID-Ref is publicly available at https://github.com/lluma/OCID-Ref
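The abstract argues for aggregating 2D (RGB image) and 3D (point cloud) signals. As a minimal illustrative sketch only (not the authors' actual model, whose architecture is not described here), one common way to aggregate modality-specific features is late fusion: normalize each feature vector and concatenate them before the downstream segmentation head. The function name and feature dimensions below are hypothetical.

```python
import numpy as np

def fuse_2d_3d(feat_2d: np.ndarray, feat_3d: np.ndarray) -> np.ndarray:
    """Late-fuse a 2D image feature vector and a 3D point-cloud feature
    vector by L2-normalizing each and concatenating them.

    This is a generic fusion sketch, not the OCID-Ref baseline itself.
    """
    f2 = feat_2d / (np.linalg.norm(feat_2d) + 1e-8)  # normalize image features
    f3 = feat_3d / (np.linalg.norm(feat_3d) + 1e-8)  # normalize point-cloud features
    return np.concatenate([f2, f3])                  # joint representation

# Example with hypothetical dimensions: a 512-d image feature
# and a 256-d point-cloud feature yield a 768-d fused vector.
fused = fuse_2d_3d(np.random.rand(512), np.random.rand(256))
print(fused.shape)  # (768,)
```

Concatenation after normalization keeps both modalities on a comparable scale, so neither the 2D nor the 3D branch dominates the fused representation purely by magnitude.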