
Recognition of Hand Written Arabic Names Using Deep Learning

التعرف على الأسماء العربية المكتوبة بخط اليد بإستخدام التعلم العميق

Publication date: 2016
Language: Arabic
Created by Shamra Editor





Designing computerized systems that possess reading and hearing faculties has been an active research area for more than four decades. Many methods and algorithms have been suggested by researchers for this purpose as part of pattern recognition research. Recently, more research effort has been devoted to the holistic approach, in which the recognition system recognizes a complete word as one object without going through the long and error-prone character segmentation process. In this paper, a convolutional neural network has been designed to recognize popular Arabic names holistically. The SUST-ARG names dataset (collected and compiled by the pattern recognition research group at Sudan University of Science and Technology, SUST) has been used to test the network's performance. After selecting an appropriate deep learning toolbox and five stages of training, the network was able to recognize all the names with 100% accuracy.
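The holistic approach described above can be sketched as a small forward pass: one convolutional layer, max pooling, and a dense softmax over name classes, so that a whole word image maps to a single class. The 32x32 input size, filter count, and 20-class output below are illustrative assumptions, not the paper's actual architecture (which the abstract does not specify), and the weights are random, so the output is only an untrained probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def conv2d(img, kernels, biases):
    """Valid 2-D convolution of one grayscale image with a bank of kernels."""
    n_f, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((n_f, H - kh + 1, W - kw + 1))
    for f in range(n_f):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[f]) + biases[f]
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling on a (channels, H, W) tensor."""
    c, H, W = x.shape
    return x[:, :H - H % size, :W - W % size] \
        .reshape(c, H // size, size, W // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(img, params):
    """Whole-word image -> probability over name classes (no segmentation)."""
    k, kb, wd, bd = params
    h = max_pool(relu(conv2d(img, k, kb))).reshape(-1)
    return softmax(wd @ h + bd)

NUM_NAMES = 20                 # assumed class count, not the real SUST-ARG figure
IMG = 32                       # assumed normalized image side length
k = rng.normal(0, 0.1, (8, 5, 5))   # 8 filters of 5x5
kb = np.zeros(8)
feat = 8 * ((IMG - 5 + 1) // 2) ** 2
wd = rng.normal(0, 0.1, (NUM_NAMES, feat))
bd = np.zeros(NUM_NAMES)

probs = predict(rng.random((IMG, IMG)), (k, kb, wd, bd))
```

A trained version would fit the kernel and dense weights by backpropagation over labeled name images; the point here is only that the network emits one decision per word image.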


Artificial intelligence review:
Research summary
This research from Sudan University of Science and Technology focuses on designing a system for recognizing handwritten Arabic names using deep learning techniques, specifically convolutional neural networks. The system was tested on the SUST-ARG dataset, which contains common Arabic names. After five stages of training, the neural network achieved a recognition accuracy of 100%. The research also covers the digital image processing pipeline, starting with image pre-processing, through noise removal and size normalization, up to the recognition stage using the neural network. The results demonstrated the system's effectiveness in recognizing names with high accuracy, supporting its potential use in a variety of practical applications.
Critical review
Critical review: the research makes an important contribution to the field of handwritten Arabic text recognition using deep learning techniques. However, several points could be improved. First, the focus on a limited dataset may reduce the generalizability of the results to a wider range of names and texts. Second, the challenges the system may face with illegible or overlapping handwriting are not addressed sufficiently. Third, the research could be strengthened by adding comparisons with other text recognition techniques to assess how far the proposed system outperforms them.
Questions related to the research
  1. What is the main technique used in the research to recognize handwritten Arabic names?

    The main technique is convolutional neural networks (CNNs).

  2. Which dataset was used to test the neural network's performance?

    The SUST-ARG dataset, which contains common Arabic names, was used.

  3. What accuracy did the system achieve in recognizing the names after training?

    The system achieved a recognition accuracy of 100%.

  4. What stages do the images go through before being fed to the neural network for recognition?

    The stages include image pre-processing, noise removal, size normalization, and converting the images into matrices that the neural network can process.
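The pre-processing stages listed in that last answer can be sketched in a few lines: binarization, background removal by cropping to the ink bounding box, and size normalization into a fixed matrix. The global mean threshold and nearest-neighbour rescaling below are simplifying assumptions; the paper's actual binarization and resizing methods are not specified here.

```python
import numpy as np

def preprocess(gray, out_size=32):
    """Binarize a grayscale name image, crop to the ink bounding box,
    and rescale to a fixed out_size x out_size matrix for the network.
    A global mean threshold stands in for the (unspecified) binarization."""
    ink = gray < gray.mean()            # dark pixels = handwriting
    rows = np.any(ink, axis=1)
    cols = np.any(ink, axis=0)
    if not rows.any():                   # blank image guard
        return np.zeros((out_size, out_size))
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    crop = ink[r0:r1 + 1, c0:c1 + 1].astype(float)
    # nearest-neighbour resize to a uniform matrix the network can consume
    ri = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    ci = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[np.ix_(ri, ci)]

# synthetic example: a light page with one dark handwritten stroke region
page = np.full((50, 60), 200.0)
page[10:30, 5:40] = 20.0
matrix = preprocess(page)
```

The output is a binary 32x32 matrix regardless of the original image size, which is exactly the uniformity the recognition stage needs.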




This research describes a system for recognition of handwritten Arabic words without prior segmentation of the word into characters. In this system, recognition happens at two levels. It is developed based on OCR (Optical Character Recognition), Hidden Markov Models, and CBIR (Content-Based Image Retrieval), and it also involves mathematical morphology.
Due to its great power in modeling non-Euclidean data like graphs or manifolds, deep learning on graph techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems. There has been a surge of interest in applying deep learning on graph techniques to NLP, which has achieved considerable success in many NLP tasks, ranging from classification tasks like sentence classification, semantic role labeling and relation extraction, to generation tasks like machine translation, question generation and summarization. Despite these successes, deep learning on graphs for NLP still faces many challenges, including automatically transforming original text sequence data into highly graph-structured data, and effectively modeling complex data that involves mapping between graph-based inputs and other highly structured output data such as sequences, trees, and graph data with multiple types of both nodes and edges. This tutorial will cover relevant and interesting topics on applying deep learning on graph techniques to NLP, including automatic graph construction for NLP, graph representation learning for NLP, advanced GNN-based models (e.g., graph2seq, graph2tree, and graph2graph) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction and semantic parsing). In addition, hands-on demonstration sessions will be included to help the audience gain practical experience on applying GNNs to solve challenging NLP problems using our recently developed open source library -- Graph4NLP, the first library for researchers and practitioners for easy use of GNNs for various NLP tasks.
Deep learning is at the heart of the current rise of artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This article presents a survey on adversarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them.
Modelling and understanding dialogues in a conversation depends on identifying the user intent from the given text. Unknown or new intent detection is a critical task, as in a realistic scenario a user intent may frequently change over time and divert even to an intent previously not encountered. The task of separating unknown intent samples from known ones is challenging, as an unknown user intent can range from intents similar to the predefined ones to something completely different. Prior research on intent discovery often considers it a classification task where an unknown intent can belong to a predefined set of known intent classes. In this paper we tackle the problem of detecting a completely unknown intent without any prior hints about the kind of classes belonging to unknown intents. We propose an effective post-processing method using multi-objective optimization to tune an existing neural network based intent classifier and make it capable of detecting unknown intents. We perform experiments using existing state-of-the-art intent classifiers and use our method on top of them for unknown intent detection. Our experiments across different domains and real-world datasets show that our method yields significant improvements compared with the state-of-the-art methods for unknown intent detection.
Deep neural language models such as BERT have enabled substantial recent advances in many natural language processing tasks. However, due to the effort and computational cost involved in their pre-training, such models are typically introduced only for a small number of high-resource languages such as English. While multilingual models covering large numbers of languages are available, recent work suggests monolingual training can produce better models, and our understanding of the tradeoffs between mono- and multilingual training is incomplete. In this paper, we introduce a simple, fully automated pipeline for creating language-specific BERT models from Wikipedia data and introduce 42 new such models, most for languages up to now lacking dedicated deep neural language models. We assess the merits of these models using cloze tests and the state-of-the-art UDify parser on Universal Dependencies data, contrasting performance with results using the multilingual BERT (mBERT) model. We find that the newly introduced WikiBERT models outperform mBERT in cloze tests for nearly all languages, and that UDify using WikiBERT models outperforms the parser using mBERT on average, with the language-specific models showing substantially improved performance for some languages, yet limited improvement or a decrease in performance for others. All of the methods and models introduced in this work are available under open licenses from https://github.com/turkunlp/wikibert.
