
164 - Qizhang Li, Haiyu He (1 2021)
Hyperbolic metamaterials (HMMs) support propagating waves with arbitrarily large wavevectors over broad spectral ranges, and are uniquely valuable for engineering radiative thermal transport in the near field. Here, by employing a rational design approach based on the electromagnetic local density of states, we demonstrate the ability of HMMs to substantially rectify radiative heat flow. Our idea is to establish a forward-biased scenario where the two HMM-based terminals of a thermal diode feature overlapped hyperbolic bands which result in a large heat current, and suppress the reverse heat flow by creating spectrally mismatched density of states as the temperature bias is flipped. As an example, we present a few high-performance thermal diodes by pairing HMMs made of polar dielectrics and metal-to-insulator transition (MIT) materials in the form of periodic nanowire arrays, and considering three representative kinds of substrates. Upon optimization, we theoretically achieve a rectification ratio of 324 at a 100 nm gap, which remains greater than 148 for larger gap sizes up to 1 μm over a wide temperature range. The maximum rectification represents an almost 1000-fold increase compared to a bulk diode using the same materials, and is twice that of state-of-the-art designs. Our work highlights the potential of HMMs for rectifying radiative heat flow, and may find applications in advanced thermal management and energy conversion systems.
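For readers unfamiliar with the metric, the rectification ratio quoted above compares the heat fluxes in the two bias directions. The abstract does not state which convention it uses, so the definition below is the one most common in the near-field literature and should be read as an assumption:

```latex
% Rectification ratio of a radiative thermal diode (assumed convention).
% Q_fwd: heat flux under the forward temperature bias
% Q_rev: heat flux when the same temperature difference is reversed
\[
  \eta \;=\; \frac{Q_{\mathrm{fwd}} - Q_{\mathrm{rev}}}{Q_{\mathrm{rev}}},
  \qquad Q_{\mathrm{fwd}} > Q_{\mathrm{rev}} .
\]
```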
128 - Qizhang Li, Haiyu He (1 2021)
We propose a mechanism to substantially rectify radiative heat flow by matching thin films of metal-to-insulator transition materials and polar dielectrics in the electromagnetic near field. By leveraging the distinct scaling behaviors of the local density of states with film thickness for metals and insulators, we theoretically achieve rectification ratios over 140 (a 10-fold improvement over the state of the art) with nanofilms of vanadium dioxide and cubic boron nitride in the parallel-plane geometry at experimentally feasible gap sizes (~100 nm). Our rational design offers relative ease of fabrication, flexible choice of materials, and robustness against deviations from optimal film thicknesses. We expect this work to facilitate the application of thermal diodes in solid-state thermal circuits and energy conversion devices.
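The quantity being rectified in both works is the near-field radiative heat flux between two parallel bodies. As background (this Landauer-like Polder-van Hove form is textbook fluctuational electrodynamics, not a result of the paper):

```latex
% Net radiative heat flux across a vacuum gap between two parallel planes.
% Theta is the mean energy of a Planck oscillator at frequency omega and
% temperature T; xi is the energy transmission coefficient, summed over
% polarizations and integrated over the parallel wavevector beta.
\[
  Q = \int_0^{\infty} \frac{d\omega}{2\pi}
      \bigl[\Theta(\omega,T_1) - \Theta(\omega,T_2)\bigr]
      \int \frac{d^2\beta}{(2\pi)^2}\, \xi(\omega,\beta),
  \qquad
  \Theta(\omega,T) = \frac{\hbar\omega}{e^{\hbar\omega/k_{B}T} - 1}.
\]
```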
248 - Yiwen Guo, Qizhang Li, Hao Chen (2020)
The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit a not so new but definitely noteworthy hypothesis of Goodfellow et al. and disclose that the transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion using off-the-shelf attacks that exploit gradients. More specifically, it calculates the forward pass as normal but backpropagates the loss as if some nonlinear activations were not encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs.
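The core of LinBP, keeping the forward pass intact while treating selected nonlinearities as identity in the backward pass, can be captured in a few lines. The sketch below is an illustration under that reading of the abstract, not the authors' released code; the class and function names are hypothetical:

```python
# Minimal PyTorch sketch of the LinBP idea (illustrative; names are hypothetical).
import torch


class LinearBackwardReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass.

    Swapping selected ReLUs for this function leaves the network's
    predictions unchanged while making the gradients used by an
    off-the-shelf attack (e.g. FGSM or PGD) behave more linearly.
    """

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)   # ordinary ReLU on the way forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output      # skip the ReLU mask on the way back


def linbp_relu(x):
    return LinearBackwardReLU.apply(x)
```

In practice one would replace the ReLU calls in part of the surrogate network with `linbp_relu` and then run a standard gradient-based attack against it.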
347 - Qizhang Li, Yiwen Guo, Hao Chen (2020)
The study of adversarial vulnerabilities of deep neural networks (DNNs) has progressed rapidly. Existing attacks require either internal access (to the architecture, parameters, or training set of the victim model) or external access (to query the model). However, both kinds of access may be infeasible or expensive in many scenarios. We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model. Instead, the attacker can only gather a small number of examples from the same problem domain as that of the victim model. Such a stronger threat model greatly expands the applicability of adversarial attacks. We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective. Our experiments show that adversarial examples crafted on prototypical auto-encoding models transfer well to a variety of image classification and face verification models. On a commercial celebrity recognition system held by clarifai.com, our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model.
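A plausible way to read the attack pipeline is: train a small auto-encoding surrogate on the few gathered examples, then craft perturbations that maximize its loss and transfer them to the unseen victim. The PGD-style loop below sketches only that last step; the surrogate `autoencoder`, its prototypical training, and the hyperparameters are assumptions, not details from the paper:

```python
# Hedged sketch: craft a transferable example by maximizing a surrogate
# auto-encoder's reconstruction error with an L_inf-bounded PGD loop.
import torch
import torch.nn.functional as F


def craft_no_box_example(autoencoder, x, eps=8 / 255, alpha=2 / 255, steps=40):
    """Return an adversarial version of the image batch `x` (values in [0, 1])."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(autoencoder(x_adv), x)        # damage the reconstruction
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # stay inside the L_inf ball
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```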