Transformer models fine-tuned with a sequence labeling objective have become the dominant choice for named entity recognition tasks. However, a self-attention mechanism with an unconstrained context length can fail to fully capture local dependencies, particularly when training data is limited. In this paper, we propose a novel joint training objective that better captures the semantics of words belonging to the same entity. By augmenting the training objective with a group-consistency loss component, we enhance our ability to capture local dependencies while still enjoying the advantages of the unconstrained self-attention mechanism. On the CoNLL2003 dataset, our method achieves a test F1 of 93.98 with a single transformer model. More importantly, our fine-tuned CoNLL2003 model shows significant gains in generalization to out-of-domain datasets: on the OntoNotes subset we achieve an F1 of 72.67, which is 0.49 points absolute above the baseline, and on the WNUT16 set an F1 of 68.22, a gain of 0.48 points. Furthermore, on the WNUT17 dataset we achieve an F1 of 55.85, a 2.92-point absolute improvement.
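The abstract does not spell out the exact form of the group-consistency term. As a rough illustration of the idea, one way to encourage tokens of the same entity to share semantics is to penalize the spread of their representations around the span centroid; the function names, the centroid formulation, and the weighting factor `lam` below are all hypothetical, not the paper's actual definition:

```python
import numpy as np

def group_consistency_loss(token_reprs, entity_groups):
    """Penalize disagreement among representations of tokens that
    belong to the same entity span (hypothetical formulation)."""
    loss = 0.0
    for group in entity_groups:
        if len(group) < 2:
            continue
        reprs = token_reprs[group]          # (k, d) slice for the span
        centroid = reprs.mean(axis=0)       # span-level reference vector
        # mean squared distance of each token to the span centroid
        loss += np.mean(np.sum((reprs - centroid) ** 2, axis=1))
    return loss / max(len(entity_groups), 1)

def joint_loss(ce_loss, token_reprs, entity_groups, lam=0.1):
    """Augment the usual sequence-labeling cross-entropy with the
    group-consistency term, weighted by a hyperparameter lam."""
    return ce_loss + lam * group_consistency_loss(token_reprs, entity_groups)

# Toy example: 5 tokens, 4-dim representations; tokens 1-3 form one entity.
reprs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.1, 0.0],
                  [0.0, 0.9, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
print(joint_loss(ce_loss=0.42, token_reprs=reprs,
                 entity_groups=[np.array([1, 2, 3])]))
```

The consistency term vanishes when all tokens of a span share one representation, so it only adds gradient pressure where span-internal representations disagree.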
Code-mixing (CM) is a frequently observed phenomenon in which multiple languages are used within a single utterance or sentence. Code-mixed text observes no strict grammatical constraints and often contains non-standard spelling variations. The linguistic complexity resulting from these factors makes computational analysis of code-mixed language a challenging task. Language identification (LI) and part-of-speech (POS) tagging are fundamental steps in analyzing the structure of code-mixed text, and in the code-mixing scenario the two tasks are often interdependent. We cast the problem of handling multilingualism and grammatical structure in code-mixed sentences as a joint learning task. In this paper, we jointly train and optimize language identification and POS tagging models for the code-mixed scenario, using a Transformer combined with a convolutional neural network architecture. We train the joint learning method by combining the POS tagging and LI models on code-mixed social media text obtained from the ICON shared task.
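The joint optimization described above can be pictured as one shared token representation feeding two task-specific heads whose losses are summed, so errors in either task update the shared encoding. A minimal numpy sketch, with all dimensions, weights, and labels purely illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Hypothetical joint objective: one shared token encoding feeds two
# task heads (POS tagging and language identification).
rng = np.random.default_rng(0)
shared = rng.normal(size=(6, 8))            # 6 tokens, 8-dim shared features
W_pos = rng.normal(size=(8, 5))             # POS head: 5 toy tag classes
W_li = rng.normal(size=(8, 3))              # LI head: 3 toy language classes
pos_labels = np.array([0, 1, 2, 1, 4, 3])
li_labels = np.array([0, 0, 1, 1, 2, 0])

loss_pos = cross_entropy(softmax(shared @ W_pos), pos_labels)
loss_li = cross_entropy(softmax(shared @ W_li), li_labels)
joint = loss_pos + loss_li                  # the two tasks are optimized together
print(round(joint, 4))
```

In a real system the shared encoding would come from the Transformer/CNN encoder and both heads would be trained by backpropagating the summed loss; possible per-task weighting of the two terms is another design choice the abstract leaves open.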
This project aims primarily to employ the benefits of artificial intelligence, specifically neural networks, which learn from error during training and use that error signal to reach optimal results. Convolutional neural networks (CNNs) in particular are among the most important neural networks for classification problems. This project therefore designs a convolutional neural network that classifies vehicles into several types: we design the network and train it on a database of images of several vehicle types, and the network classifies each image by type. The images are first prepared by applying the appropriate adjustments, converting them to grayscale, and detecting edges and lines. Once the images are ready, the training process begins; after training finishes, we report classification results and then test on a new set of images. One of the most important applications of this project is enforcing parking regulations for cars, trucks, and vehicles in general: if an image is submitted as a car but actually shows, for example, a truck, the network will detect this by examining and classifying the image as a truck, revealing a violation of the parking rules.
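The grayscale-and-edge preprocessing step can be sketched as follows. This is a minimal illustration on a toy image, not the project's actual pipeline; the Sobel operator below stands in for whichever edge detector was used:

```python
import numpy as np

def to_gray(rgb):
    """Luminance conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def sobel_edges(gray):
    """Approximate edge magnitude with 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# Toy 8x8 "image" with a vertical boundary: right half bright, left half dark.
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
edges = sobel_edges(to_gray(img))
print(edges.round(2))
```

The edge map is large only where the window straddles the brightness boundary and zero in flat regions, which is exactly what makes such maps useful inputs for a classifier.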
Deep learning has recently achieved considerable success, especially in the field of computer vision. This research describes a classification method applied to a dataset containing multiple image types: Synthetic Aperture Radar (SAR) images and non-SAR images. Transfer learning was used, followed by fine-tuning, with architectures pre-trained on the well-known ImageNet database. Specifically, the VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The dataset consists of five classes: one SAR image class (Houses) and four non-SAR image classes (Cats, Dogs, Horses, and Humans). A convolutional neural network (CNN) was chosen for the training process because it yields high accuracy. The final accuracy reached 91.18% across the five classes. The results are discussed in terms of per-class classification accuracy: the Cats class reached 99.6% and the Houses class 100%, while the other classes averaged 90% or above.
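The transfer-learning split described above (a frozen backbone used as a feature extractor, with only a new classifier head trained on its outputs) can be sketched in miniature. Here a fixed random ReLU projection stands in for the frozen VGG16 backbone and the five-class data are synthetic; only the head's weights are ever updated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the frozen pre-trained backbone (VGG16 in the paper):
# fixed weights, used only to map inputs to feature vectors.
W_frozen = rng.normal(size=(64, 16))
def extract_features(x):
    return np.maximum(x @ W_frozen, 0)         # frozen ReLU projection

# Synthetic 5-class data standing in for Houses/Cats/Dogs/Horses/Humans.
class_means = rng.normal(size=(5, 64))
y = rng.integers(0, 5, size=100)
X = 3 * class_means[y] + rng.normal(size=(100, 64))
F = extract_features(X)                        # features computed once

# New classifier head: the only trainable part, as in transfer learning.
W_head = np.zeros((16, 5))
for _ in range(200):                           # plain gradient descent
    logits = F @ W_head
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(100), y] -= 1                  # softmax cross-entropy gradient
    W_head -= 0.01 * F.T @ p / 100             # update the head only

acc = (np.argmax(F @ W_head, 1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Because the backbone is frozen, features can be precomputed once and the head trained cheaply; fine-tuning, as the abstract mentions, would additionally unfreeze some backbone layers.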
Reconstructing 3D human poses from a single 2D image is a problem that challenges many researchers. In recent years there has been a rising trend toward analyzing the 3D geometry of objects, including their shapes and poses, rather than merely producing bounding boxes. 3D geometric reasoning provides richer information about the scene for high-level downstream tasks such as scene understanding, augmented reality, and human-computer interaction, and it also improves object detection [3], [4]. 3D reconstruction is therefore a well-studied problem, and many practically applicable techniques exist, such as structure from motion, multi-view stereo systems, and depth sensors, but these techniques are limited in some scenarios. In this paper, we review how the problem has been addressed over the past few decades, analyze recent developments in the field, and outline potential directions for future research.
This research aims to present a detailed survey of the use of convolutional neural networks (CNNs) for extracting features from images. The research introduces the notion of image features and their importance in image processing applications. It also introduces convolutional neural networks (CNNs): their architecture, how they work, and the types of approaches and methodologies used to train them to extract features from images.
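As a concrete (and deliberately tiny) illustration of how a convolutional layer extracts features, the sketch below applies one hand-picked edge filter, a ReLU, and 2x2 max pooling to a toy image. Real CNNs learn many such filters from data rather than using a fixed one:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a window."""
    h, w = fmap.shape
    cropped = fmap[:h - h % size, :w - w % size]
    return cropped.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv "layer": a horizontal-edge filter, ReLU, then 2x2 max pooling.
img = np.zeros((6, 6))
img[3:] = 1.0                                  # bright bottom half
kernel = np.array([[-1.0, -1.0], [1.0, 1.0]])  # responds to dark-over-bright edges
features = max_pool(np.maximum(conv2d(img, kernel), 0))
print(features)
```

The pooled feature map is nonzero only in the cells that contain the horizontal boundary, showing how each filter turns a raw image into a compact, location-summarized response map.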