
3D Invisible Cloak

Published by: Mingfu Xue
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In this paper, we propose a novel physical stealth attack against person detectors in the real world. The proposed method generates an adversarial patch and prints it on real clothes to make a three-dimensional (3D) invisible cloak. Anyone wearing the cloak can evade detection by person detectors and achieve stealth. We consider the impact of 3D physical constraints (e.g., curvature, wrinkles, occlusion, and viewing angle) on person stealth attacks, and propose 3D transformations to generate the 3D invisible cloak. We launch person stealth attacks in 3D physical space instead of on a 2D plane by printing the adversarial patches on real clothes under challenging and complex 3D physical scenarios. Both conventional and 3D transformations are applied to the patch during its optimization. Further, we study how to generate the optimal 3D invisible cloak. Specifically, we explore how to choose input images with specific shapes and colors to generate the optimal 3D invisible cloak. In addition, after successfully making the object detector misclassify the person as other objects, we explore how to make a person disappear completely, i.e., the person is not detected as any object. Finally, we present a systematic evaluation framework to methodically evaluate the performance of the proposed attack in the digital domain and the physical world. Experimental results in various indoor and outdoor physical scenarios show that the proposed person stealth attack is robust and effective even under complex and challenging physical conditions, such as when the cloak is wrinkled, obscured, curved, or viewed from different angles. The attack success rate in the digital domain (Inria dataset) is 86.56%, while the static and dynamic stealth attack success rates in the physical world are 100% and 77%, respectively, significantly better than existing works.
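As a minimal sketch only (not the authors' code), the snippet below shows one common way such an adversarial patch can be optimized: random warps and brightness jitter serve as crude stand-ins for the paper's 3D transformations (curvature, wrinkles, viewing angle), and the patch is pasted onto the torso region before querying the detector. `person_detector` is an assumed callable that returns the maximum "person" confidence for a batch; `images` and `person_boxes` (integer pixel boxes) are assumed inputs.

```python
# Minimal sketch of adversarial-patch optimization with randomized transforms.
import torch
import torch.nn.functional as F

def random_3d_like_transform(patch):
    """Random rotation/shear warp plus brightness jitter on the patch."""
    c, h, w = patch.shape
    angle = (torch.rand(1) - 0.5) * 0.6                  # roughly +/- 17 degrees
    shear = (torch.rand(1) - 0.5) * 0.4
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.cat([cos, -sin + shear, torch.zeros(1)]),
        torch.cat([sin + shear, cos, torch.zeros(1)]),
    ]).unsqueeze(0)                                      # (1, 2, 3) affine matrix
    grid = F.affine_grid(theta, (1, c, h, w), align_corners=False)
    warped = F.grid_sample(patch.unsqueeze(0), grid, align_corners=False)
    brightness = 0.8 + 0.4 * torch.rand(1)               # photometric variation
    return (warped * brightness).clamp(0, 1).squeeze(0)

def paste_patch(image, patch, box):
    """Resize the patch to the upper half of the person box and paste it."""
    x1, y1, x2, y2 = box
    ph, pw = max(1, (y2 - y1) // 2), max(1, x2 - x1)
    resized = F.interpolate(patch.unsqueeze(0), size=(ph, pw),
                            mode="bilinear", align_corners=False).squeeze(0)
    out = image.clone()
    out[:, y1:y1 + ph, x1:x1 + pw] = resized
    return out

patch = torch.rand(3, 300, 300, requires_grad=True)      # learnable cloak patch
optimizer = torch.optim.Adam([patch], lr=0.01)

def train_step(images, person_boxes, person_detector):
    """One step: lower the detector's person score on patched images."""
    optimizer.zero_grad()
    loss = torch.zeros(1)
    for img, box in zip(images, person_boxes):
        adv = paste_patch(img, random_3d_like_transform(patch), box)
        loss = loss + person_detector(adv.unsqueeze(0))  # assumed scalar score
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)                              # keep pixels printable
    return float(loss)
```

In practice the optimized patch would then be printed on clothing and re-evaluated in the physical scenarios described above; the transform set here is deliberately simplified.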


Read also

Invisible cloaks provide a way to hide an object from detection by waves. A perfect cloak guides the incident waves through the cloaking shell without any distortion. In most cases, some important quantum degrees of freedom, e.g. electron spin or photon polarization, are not taken into account when designing a cloak. Here, we propose to use the temporal steering inequality of these degrees of freedom to detect the existence of an invisible cloak.
Monocular object detection and tracking have improved drastically in recent years, but rely on a key assumption: that objects are visible to the camera. Many offline tracking approaches reason about occluded objects post-hoc, by linking together tracklets after the object re-appears, making use of re-identification (ReID). However, online tracking in embodied robotic agents (such as a self-driving vehicle) fundamentally requires object permanence, which is the ability to reason about occluded objects before they re-appear. In this work, we re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects, focusing on the illustrative case of people. We demonstrate that current detection and tracking systems perform dramatically worse on this task. We introduce two key innovations to recover much of this performance drop. First, we treat occluded object detection in temporal sequences as a short-term forecasting challenge, bringing to bear tools from dynamic sequence prediction. Second, we build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks. To our knowledge, ours is the first work to demonstrate the effectiveness of monocular depth estimation for the task of tracking and detecting occluded objects. Our approach strongly improves by 11.4% over the baseline in ablations and by 5.0% over the state-of-the-art in F1 score.
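A toy sketch under stated assumptions (not the authors' model) of the two ideas summarized above: lift a 2D person box into 3D camera coordinates with a monocular depth map and camera intrinsics, then forecast its 3D position with a simple constant-velocity model while the person is occluded. The intrinsics `fx, fy, cx, cy`, the depth map, and the tracking logic are assumed inputs.

```python
import numpy as np

def lift_to_3d(box, depth_map, fx, fy, cx, cy):
    """Back-project the box centre using the median depth inside the box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    z = float(np.median(depth_map[y1:y2, x1:x2]))
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def forecast_while_occluded(track_3d, steps):
    """Extrapolate from the last two observed 3D positions (constant velocity)."""
    velocity = track_3d[-1] - track_3d[-2]
    return [track_3d[-1] + (k + 1) * velocity for k in range(steps)]
```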
Printed and digitally displayed photos have the ability to hide imperceptible digital data that can be accessed through internet-connected imaging systems. Another way to think about this is physical photographs that have unique QR codes invisibly embedded within them. This paper presents an architecture, algorithms, and a prototype implementation addressing this vision. Our key technical contribution is StegaStamp, a learned steganographic algorithm that enables robust encoding and decoding of arbitrary hyperlink bitstrings into photos in a manner that approaches perceptual invisibility. StegaStamp comprises a deep neural network that learns an encoding/decoding algorithm robust to image perturbations approximating the space of distortions resulting from real printing and photography. We demonstrate real-time decoding of hyperlinks in photos from in-the-wild videos that contain variation in lighting, shadows, perspective, occlusion, and viewing distance. Our prototype system robustly retrieves 56-bit hyperlinks after error correction, sufficient to embed a unique code within every photo on the internet.
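The sketch below is only a schematic of this encode/perturb/decode training idea, not the released StegaStamp implementation: the tiny networks, the `BITS` constant, and the two-distortion noise model are simplifications chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BITS = 100  # illustrative capacity; the abstract reports 56-bit payloads after error correction

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(BITS, 32 * 32)               # spread bits over a coarse map
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, img, bits):
        b = self.fc(bits).view(-1, 1, 32, 32)
        b = F.interpolate(b, size=img.shape[-2:], mode="nearest")
        residual = self.net(torch.cat([img, b], dim=1))
        return (img + 0.1 * residual).clamp(0, 1)        # keep the change near-invisible

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, BITS))

    def forward(self, img):
        return self.net(img)                             # one logit per bit

def camera_print_noise(img):
    """Crude stand-in for print/photo distortions: only brightness change and
    additive noise here; the real distortion space is much richer."""
    img = img * (0.9 + 0.2 * torch.rand(1)) + 0.02 * torch.randn_like(img)
    return img.clamp(0, 1)

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)

def train_step(images):
    """Jointly train encoder and decoder so bits survive the simulated distortions."""
    bits = torch.randint(0, 2, (images.size(0), BITS)).float()
    stego = enc(images, bits)
    logits = dec(camera_print_noise(stego))
    loss = (F.binary_cross_entropy_with_logits(logits, bits)
            + 10.0 * F.mse_loss(stego, images))          # bit recovery + imperceptibility
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```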
A kind of transformation media, which we shall call the anti-cloak, is proposed to partially defeat the cloaking effect of the invisibility cloak. An object with an outer shell of anti-cloak is visible to the outside if it is coated with the invisible cloak. Fourier-Bessel analysis confirms this finding by showing that an external electromagnetic wave can penetrate into the interior of the invisibility cloak with the help of the anti-cloak.
In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, to improve the quality of the segmentation completion, we present two coupled discriminators and introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results are progressively refined. To evaluate our method, we present a dataset, the Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state-of-the-art in the tasks of recovering segmentation masks and appearance for occluded vehicles. Moreover, we also demonstrate that our appearance recovery approach can benefit occluded vehicle tracking in real-world videos.
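A purely hypothetical outline of the iterative refinement loop described above; `seg_net` and `app_net` are placeholder callables, not the paper's networks.

```python
def recover_occluded_vehicle(image, visible_mask, seg_net, app_net, iters=3):
    """Alternate segmentation completion and appearance recovery for a few rounds."""
    mask, appearance = visible_mask, image
    for _ in range(iters):
        mask = seg_net(image, mask)           # complete the full-object silhouette
        appearance = app_net(image, mask)     # fill in the occluded appearance
    return mask, appearance
```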