
Adversarial Attacks on Deep Learning Systems

الهجمات الخادعة ضد شبكات التعلم العميق (Adversarial Attacks against Deep Learning Networks)

Publication date: 2018
Research language: Arabic
Created by: محمد زاهر عيروط





Deep learning is at the heart of the current rise of artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This article presents a survey on adversarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
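As a concrete illustration of the "subtle perturbations" described above, the sketch below implements the fast gradient sign method (FGSM) from Goodfellow et al. (listed in the references). It is a minimal sketch rather than the paper's own code: `model` is assumed to be any trained PyTorch image classifier operating on pixel values in [0, 1], and `epsilon` is an illustrative perturbation budget.

```python
# Minimal FGSM sketch (after Goodfellow et al., "Explaining and Harnessing
# Adversarial Examples"). Assumes `model` is a trained classifier returning
# logits, and `image` is a batch of inputs with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss, bounded by
    # epsilon per pixel (an L-infinity constraint), then stay in valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

A perturbation this small typically leaves the image visually unchanged while flipping the predicted class, which is exactly the failure mode the survey is concerned with.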


Artificial intelligence review:
Research summary
Deep learning has become the beating heart of artificial intelligence in recent years, powering applications as diverse as self-driving cars and medical analysis. Adversarial attacks, however, have become a major obstacle to deploying these technologies safely. This paper defines adversarial attacks, describes how they are carried out, and examines their impact on different systems, with a particular focus on medical systems. It also surveys defense strategies against these attacks, such as reactive and proactive defenses, and presents examples of physical adversarial attacks and how to counter them. The aim of the research is to highlight the danger posed by adversarial attacks and the need to develop effective defense strategies to protect systems that rely on deep learning.
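The reactive ("detect and reject") strategy mentioned in the summary can be pictured with a feature-squeezing-style check known from the defense literature: if the classifier's prediction shifts sharply when the input is quantized, the input is flagged as suspicious. This is a hedged sketch of one generic detection idea, not necessarily the method proposed in the paper; `model`, `bits`, and `threshold` are illustrative choices.

```python
# Sketch of a reactive "detect and reject" check: compare the model's output
# on the raw input with its output on a bit-depth-reduced copy; a large
# disagreement suggests the input may be adversarial.
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values (assumed in [0, 1]) down to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def looks_adversarial(model, x, threshold=0.5):
    """Flag inputs whose prediction changes too much under squeezing."""
    with torch.no_grad():
        p_raw = F.softmax(model(x), dim=1)
        p_squeezed = F.softmax(model(reduce_bit_depth(x)), dim=1)
    # L1 distance between the two probability vectors, one score per input.
    return (p_raw - p_squeezed).abs().sum(dim=1) > threshold
```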
Critical review
This paper offers a comprehensive and detailed overview of adversarial attacks against deep learning networks and surveys many examples and practical applications. However, it can be argued that the paper lacks practical detail on how the proposed defenses would actually be implemented in real systems. The heavy emphasis on theory may also make some of the more intricate points difficult for non-specialist readers to follow. In addition, the paper could be improved by adding practical case studies that demonstrate the effectiveness of the proposed defenses against adversarial attacks in real-world environments.
Questions related to the research
  1. What are adversarial attacks, and why are they a problem for deep learning systems?

    Adversarial attacks are small perturbations to a model's inputs that cause it to produce incorrect outputs. They are a problem because they threaten the accuracy and reliability of deep-learning-based systems in sensitive applications such as self-driving cars and medical analysis.

  2. What are the main defense strategies against adversarial attacks discussed in the paper?

    The paper discusses two main defense strategies against adversarial attacks: reactive defense, which relies on detecting and rejecting attacks, and proactive defense, which relies on training the model with adversarial examples to improve its robustness (a minimal adversarial-training sketch is given after this list).

  3. How can adversarial attacks affect medical systems that rely on deep learning?

    Adversarial attacks can severely affect medical systems by producing incorrect diagnoses, which may lead to wrong treatment decisions. This underscores the need to develop effective defense strategies to protect these systems.

  4. What are the main challenges in deploying defenses against adversarial attacks in real-world environments?

    The main challenges in deploying defenses against adversarial attacks in real-world environments include implementing the defenses effectively without degrading system performance, and coping with varying environmental conditions such as changes in lighting and viewing angles.
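
As noted in the answer to question 2, the proactive defense (adversarial training) amounts to a training loop that augments each batch with adversarial examples generated on the fly. The sketch below is a minimal illustration, not the paper's procedure; it reuses the hypothetical `fgsm_attack` helper from the earlier sketch, and `model`, `train_loader`, `optimizer`, and `epsilon` are placeholders.

```python
# Minimal adversarial-training loop (proactive defense): each batch is
# augmented with FGSM examples generated against the current model state.
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in train_loader:
        # Generate adversarial versions of the batch (fgsm_attack is the
        # hypothetical helper defined in the earlier sketch).
        adv_images = fgsm_attack(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Train on clean and adversarial inputs so the model stays accurate
        # on natural data while becoming more robust to perturbed data.
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
```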


References used
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno: “Robust Physical-World Attacks on Deep Learning Models”, 2017; arXiv:1707.08945.
Wieland Brendel, Jonas Rauber: “Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models”, 2017; arXiv:1712.04248.
Samuel G. Finlayson, Isaac S. Kohane: “Adversarial Attacks Against Medical Deep Learning Systems”, 2018; arXiv:1804.05296.
Naveed Akhtar: “Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey”, 2018; arXiv:1801.00553.
Ian J. Goodfellow, Jonathon Shlens: “Explaining and Harnessing Adversarial Examples”, 2014; arXiv:1412.6572.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville: “Generative Adversarial Networks”, 2014; arXiv:1406.2661.
Pouya Samangouei, Maya Kabkab: “Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models”, 2018; arXiv:1805.06605.
Related research

Deep neural networks are vulnerable to adversarial attacks, where a small perturbation to an input alters the model prediction. In many cases, malicious inputs intentionally crafted for one model can fool another model. In this paper, we present the first study to systematically investigate the transferability of adversarial examples for text classification models and explore how various factors, including network architecture, tokenization scheme, word embedding, and model capacity, affect the transferability of adversarial examples. Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models. Such adversarial examples reflect the defects of the learning process and the data bias in the training set. Finally, we derive word replacement rules that can be used for model diagnostics from these adversarial examples.
Due to its great power in modeling non-Euclidean data such as graphs or manifolds, deep learning on graphs (i.e., Graph Neural Networks (GNNs)) has opened a new door to solving challenging graph-related NLP problems. There has been a surge of interest in applying deep learning on graph techniques to NLP, with considerable success in many NLP tasks, ranging from classification tasks like sentence classification, semantic role labeling and relation extraction, to generation tasks like machine translation, question generation and summarization. Despite these successes, deep learning on graphs for NLP still faces many challenges, including automatically transforming original text sequence data into highly graph-structured data, and effectively modeling complex data that involves mapping between graph-based inputs and other highly structured outputs such as sequences, trees, and graphs with multiple node and edge types. This tutorial covers relevant and interesting topics on applying deep learning on graph techniques to NLP, including automatic graph construction for NLP, graph representation learning for NLP, advanced GNN-based models (e.g., graph2seq, graph2tree, and graph2graph) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction and semantic parsing). In addition, hands-on demonstration sessions will be included to help the audience gain practical experience in applying GNNs to challenging NLP problems using our recently developed open-source library -- Graph4NLP, the first library for researchers and practitioners for easy use of GNNs for various NLP tasks.
The security of several recently proposed ciphers relies on the fact that the classical methods of cryptanalysis (e.g., linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds. As a result, these ciphers lack adequate immunity against algebraic attacks, which have become more powerful since the introduction of the XSL algorithm. In this research we try a method to increase the immunity of the AES algorithm against algebraic attacks, and then study the effect of this adjustment.
Deep neural language models such as BERT have enabled substantial recent advances in many natural language processing tasks. However, due to the effort and computational cost involved in their pre-training, such models are typically introduced only for a small number of high-resource languages such as English. While multilingual models covering large numbers of languages are available, recent work suggests monolingual training can produce better models, and our understanding of the tradeoffs between mono- and multilingual training is incomplete. In this paper, we introduce a simple, fully automated pipeline for creating language-specific BERT models from Wikipedia data and introduce 42 new such models, most for languages up to now lacking dedicated deep neural language models. We assess the merits of these models using cloze tests and the state-of-the-art UDify parser on Universal Dependencies data, contrasting performance with results using the multilingual BERT (mBERT) model. We find that the newly introduced WikiBERT models outperform mBERT in cloze tests for nearly all languages, and that UDify using WikiBERT models outperforms the parser using mBERT on average, with the language-specific models showing substantially improved performance for some languages, yet limited improvement or a decrease in performance for others. All of the methods and models introduced in this work are available under open licenses from https://github.com/turkunlp/wikibert.
Recent work has demonstrated the vulnerability of modern text classifiers to universal adversarial attacks, which are input-agnostic sequences of words added to text processed by classifiers. Despite being successful, the word sequences produced in such attacks are often ungrammatical and can be easily distinguished from natural text. We develop adversarial attacks that appear closer to natural English phrases and yet confuse classification systems when added to benign inputs. We leverage an adversarially regularized autoencoder (ARAE) to generate triggers and propose a gradient-based search that aims to maximize the downstream classifier's prediction loss. Our attacks effectively reduce model accuracy on classification tasks while being less identifiable than prior models as per automatic detection metrics and human-subject studies. Our aim is to demonstrate that adversarial attacks can be made harder to detect than previously thought and to enable the development of appropriate defenses.