
The Power of Linear Reconstruction Attacks

Posted by Shiva Kasiviswanathan
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





We consider the power of linear reconstruction attacks in statistical data privacy, showing that they can be applied to a much wider range of settings than previously understood. Linear attacks have been studied before (Dinur and Nissim, PODS03; Dwork, McSherry and Talwar, STOC07; Kasiviswanathan, Rudelson, Smith and Ullman, STOC10; De, TCC12; Muthukrishnan and Nikolov, STOC12) but have so far been applied only in settings with releases that are obviously linear.

Consider a database curator who manages a database of sensitive information but wants to release statistics about how a sensitive attribute (say, disease) in the database relates to some nonsensitive attributes (e.g., postal code, age, gender). We show that one can mount linear reconstruction attacks based on any release that gives:

a) the fraction of records that satisfy a given non-degenerate boolean function. Such releases include contingency tables (previously studied by Kasiviswanathan et al., STOC10) as well as more complex outputs like the error rate of classifiers such as decision trees;

b) any one of a large class of M-estimators (that is, the output of empirical risk minimization algorithms), including the standard estimators for linear and logistic regression.

We make two contributions. First, we show how these types of releases can be transformed into a linear format, making them amenable to existing polynomial-time reconstruction algorithms. This is perhaps surprising, since many of the above releases (like M-estimators) are obtained by solving highly nonlinear formulations. Second, we show how to analyze the resulting attacks under various distributional assumptions on the data. Specifically, we consider a setting in which the same statistic (either a) or b) above) is released about how the sensitive attribute relates to all subsets of size k (out of a total of d) nonsensitive boolean attributes.
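To see why release type a) is linear in the sensitive data, consider a boolean function of the form "conjunction over k nonsensitive attributes AND sensitive bit = 1": the released fraction equals (1/n) * sum_i phi(x_i) * s_i, and the attacker can compute every coefficient phi(x_i) from the public attributes alone. The Python sketch below (all sizes, the noise level, and the least-squares solver are illustrative choices, not taken from the paper) mounts the resulting attack in the style of Dinur and Nissim: collect many such releases, solve the noisy linear system, and round.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (not from the paper): n records, d nonsensitive
    # boolean attributes, m released statistics.
    n, d, m = 200, 20, 800
    X = rng.integers(0, 2, size=(n, d))   # nonsensitive attributes (public)
    s = rng.integers(0, 2, size=n)        # sensitive bit per record (the target)

    # Release j: the fraction of records satisfying "a random conjunction on
    # k nonsensitive attributes AND sensitive bit = 1".  Each release equals
    # (A @ s) / n, i.e. it is linear in s, and the attacker can compute the
    # coefficient matrix A from the public attributes alone.
    k = 3
    A = np.zeros((m, n))
    for j in range(m):
        attrs = rng.choice(d, size=k, replace=False)  # attributes tested
        vals = rng.integers(0, 2, size=k)             # values required
        A[j] = np.all(X[:, attrs] == vals, axis=1)

    # The curator perturbs each answer slightly.  The noise level here is an
    # arbitrary small choice; the paper's analysis concerns how much noise
    # such attacks tolerate.
    answers = A @ s / n + rng.normal(0, 0.001, size=m)

    # Reconstruction: least squares on the noisy linear system, then round.
    s_hat, *_ = np.linalg.lstsq(A, n * answers, rcond=None)
    s_hat = (s_hat > 0.5).astype(int)
    print("fraction of sensitive bits recovered:", (s_hat == s).mean())

With enough releases relative to the noise, the rounded solution agrees with s on most or all records. Release type b) plausibly linearizes through first-order optimality: once an M-estimator theta_hat is published, a stationarity condition such as sum_i x_i * (sigma(theta_hat . x_i) - s_i) = 0 for logistic regression (sigma being the logistic function) is linear in the sensitive bits s_i, because sigma(theta_hat . x_i) is a known constant to anyone who sees theta_hat and the public attributes. This is how releases computed by highly nonlinear optimization can still feed the same polynomial-time reconstruction machinery.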

