
Covert Embodied Choice: Decision-Making and the Limits of Privacy Under Biometric Surveillance

Published by: Jeremy Gordon
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Algorithms engineered to leverage rich behavioral and biometric data to predict individual attributes and actions continue to permeate public and private life. A fundamental risk may emerge from misconceptions about the sensitivity of such data, as well as from the limited agency individuals have to protect their privacy when fine-grained (and possibly involuntary) behavior is tracked. In this work, we examine how individuals adjust their behavior when incentivized to avoid the algorithmic prediction of their intent. We present results from a virtual reality task in which gaze, movement, and other physiological signals are tracked. Participants are asked to decide which card to select without an algorithmic adversary anticipating their choice. We find that while participants used a variety of strategies, the data collected remained highly predictive of choice (80% accuracy). Additionally, a significant portion of participants became more predictable despite their efforts to obfuscate, possibly indicating mistaken priors about the dynamics of algorithmic prediction.
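To make the adversary's side of this setup concrete, here is a minimal, hypothetical sketch of the kind of prediction pipeline the abstract describes: a classifier trained on pre-decision behavioral features. The feature names, the simulated data, and the choice of logistic regression below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: predicting a binary card choice from pre-decision
# behavioral features (e.g., gaze-dwell and hand-drift summaries). All
# features and data here are simulated stand-ins for VR tracking signals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200

choice = rng.integers(0, 2, n_trials)  # 0 = left card, 1 = right card
# Simulated per-trial features that leak the eventual choice through noise.
gaze_bias = 0.8 * (choice - 0.5) + rng.normal(0, 0.5, n_trials)
hand_drift = 0.6 * (choice - 0.5) + rng.normal(0, 0.5, n_trials)
X = np.column_stack([gaze_bias, hand_drift])

# Cross-validated accuracy of the adversary's prediction, analogous in
# spirit to the ~80% accuracy the abstract reports.
acc = cross_val_score(LogisticRegression(), X, choice, cv=5).mean()
print(f"cross-validated prediction accuracy: {acc:.2f}")
```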


Read also

Methods to find counterfactual explanations have predominantly focused on one-step decision-making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision-making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions from the observed sequence that could have led the observed process realization to a better outcome. Then, we introduce a polynomial time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.
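The dynamic-programming idea above lends itself to a compact sketch. The version below is heavily simplified: deterministic counterfactual dynamics (a plain `step` function) stand in for the paper's Gumbel-Max posterior over transitions, and all names are illustrative rather than the authors' code.

```python
# Find an action sequence differing from the observed one in at most k
# positions that maximizes the counterfactual return, via memoized DP
# over (time step, state, remaining change budget).
from functools import lru_cache

def best_counterfactual(s0, observed, actions, step, reward, k):
    T = len(observed)

    @lru_cache(maxsize=None)
    def V(t, s, budget):
        if t == T:
            return 0.0, ()
        best_val, best_seq = float("-inf"), ()
        for a in actions:
            cost = 0 if a == observed[t] else 1  # deviating spends budget
            if cost > budget:
                continue
            val, tail = V(t + 1, step(s, a), budget - cost)
            val += reward(s, a)
            if val > best_val:
                best_val, best_seq = val, (a,) + tail
        return best_val, best_seq

    return V(0, s0, k)

# Toy chain world: position moves by +1 / -1; reward for landing on cell 2.
step = lambda s, a: s + a
reward = lambda s, a: 1.0 if s + a == 2 else 0.0
observed = (-1, -1, -1)  # the observed rollout drifts away from the goal
print(best_counterfactual(0, observed, (-1, 1), step, reward, k=2))
# -> (1.0, (1, 1, -1)): changing two actions already reaches the goal
```

Memoization keeps the search polynomial: the table has one entry per (time, state, budget) triple.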
Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert's ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor's specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these refinement tools increased the diagnostic utility of images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making.
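One way to picture such refinement tools is re-ranking nearest-neighbor retrieval under user-adjustable similarity weights. The sketch below uses synthetic embeddings and an invented attribute block; it is an illustration of the general idea, not the paper's system.

```python
# Re-rank retrieved images by re-weighting similarity along the embedding
# dimensions the user currently cares about. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 64))  # embeddings of past-patient images
query = rng.normal(size=64)            # embedding of the new case

def retrieve(query, gallery, weights, k=5):
    # Weighted Euclidean distance: larger weights emphasize the dimensions
    # tied to the kind of similarity the user flags as important right now.
    d = np.sqrt((((gallery - query) ** 2) * weights).sum(axis=1))
    return np.argsort(d)[:k]

uniform = np.ones(64)
print("default results:", retrieve(query, gallery, uniform))

# User feedback: up-weight dimensions 0-15 (a hypothetical "texture" block).
refined = uniform.copy()
refined[:16] = 4.0
print("refined results:", retrieve(query, gallery, refined))
```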
Zihao Cheng, Jiangbo Si, Zan Li (2020)
Surveillance performance is studied for a wireless eavesdropping system, where a full-duplex legitimate monitor eavesdrops on a suspicious link efficiently with artificial noise (AN) assistance. Different from the existing work in the literature, the suspicious receiver in this paper is assumed to be capable of detecting the presence of AN. Once such a receiver detects the AN, the suspicious user will stop transmission, which is harmful to the surveillance performance. Hence, to improve the surveillance performance, AN should be transmitted covertly with a low detection probability at the suspicious receiver. Under these assumptions, an optimization problem is formulated to maximize the eavesdropping non-outage probability under a covert constraint. Based on the detection ability at the suspicious receiver, a novel scheme is proposed to solve the optimization problem by iterative search. Moreover, we investigate the impact of both the suspicious link uncertainty and the jamming link uncertainty on the covert surveillance performance. Simulations are performed to verify the analyses. We show that the suspicious link uncertainty benefits the surveillance performance, while the jamming link uncertainty can degrade it.
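The structure of that optimization (a covertness budget capping how aggressively AN can be used) can be shown with a generic one-dimensional search. Both probability models below are monotone placeholders chosen only to exhibit the tradeoff; they are not the paper's channel expressions, and the scheme is a plain grid search rather than the authors' iterative method.

```python
# Choose the AN power maximizing the eavesdropping non-outage probability
# subject to the suspicious receiver's detection probability staying
# below a covertness budget. Toy monotone models, not real channel math.
import numpy as np

def non_outage(p_an):
    # More AN jams the suspicious receiver, helping eavesdropping (toy).
    return 1.0 - np.exp(-0.5 * p_an)

def detection(p_an):
    # More AN is also easier for the suspicious receiver to detect (toy).
    return 1.0 - np.exp(-0.2 * p_an)

epsilon = 0.3  # covertness budget on the detection probability
powers = np.linspace(0.0, 10.0, 1001)
feasible = powers[detection(powers) <= epsilon]
best = feasible[np.argmax(non_outage(feasible))]
print(f"best AN power {best:.2f}, non-outage {non_outage(best):.2f}")
```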
We study the design of autonomous agents that are capable of deceiving outside observers about their intentions while carrying out tasks in stochastic, complex environments. By modeling the agent's behavior as a Markov decision process, we consider a setting where the agent aims to reach one of multiple potential goals while deceiving outside observers about its true goal. We propose a novel approach to model observer predictions based on the principle of maximum entropy and to efficiently generate deceptive strategies via linear programming. The proposed approach enables the agent to exhibit a variety of tunable deceptive behaviors while ensuring the satisfaction of probabilistic constraints on the behavior. We evaluate the performance of the proposed approach via comparative user studies and present a case study on the streets of Manhattan, New York, using real travel time distributions.
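Planning with probabilistic behavior constraints via linear programming typically works over state-action occupancy measures. The sketch below is a heavily simplified instance: the maximum-entropy observer model is collapsed into a single linear constraint bounding visitation of a "revealing" state, and the four-state MDP, rewards, and threshold are all hypothetical.

```python
# Occupancy-measure LP: maximize discounted return while limiting how much
# probability mass flows through a state that would reveal the true goal.
import numpy as np
from scipy.optimize import linprog

gamma = 0.95
S, A = 4, 2
nxt = {(0, 0): 1, (0, 1): 2,   # start -> revealing shortcut / long detour
       (1, 0): 3, (1, 1): 3,   # both routes end at the true goal (state 3)
       (2, 0): 3, (2, 1): 3,
       (3, 0): 3, (3, 1): 3}   # the goal is absorbing
r = np.zeros((S, A)); r[1] = 1.0; r[2] = 0.8  # shortcut pays slightly more
mu0 = np.array([1.0, 0.0, 0.0, 0.0])          # agent starts in state 0

idx = lambda s, a: s * A + a
A_eq = np.zeros((S, S * A))
for s in range(S):
    for a in range(A):
        A_eq[s, idx(s, a)] += 1.0             # outflow from s
        A_eq[nxt[s, a], idx(s, a)] -= gamma   # discounted inflow to nxt

# Deception constraint: occupancy of the revealing state 1 at most 0.4.
A_ub = np.zeros((1, S * A)); A_ub[0, idx(1, 0)] = A_ub[0, idx(1, 1)] = 1.0

res = linprog(c=-r.flatten(), A_ub=A_ub, b_ub=[0.4],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
x = res.x.reshape(S, A)
print("expected return:", -res.fun)
print("P(shortcut from start):", x[0, 0] / x[0].sum())  # pushed below 1.0
```

The visitation constraint forces the optimal policy to randomize between the routes, trading some return for a less committed-looking trajectory distribution.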
Data collected about individuals is regularly used to make decisions that impact those same individuals. We consider settings where sensitive personal data is used to decide who will receive resources or benefits. While it is well known that there is a tradeoff between protecting privacy and the accuracy of decisions, we initiate a first-of-its-kind study into the impact of formally private mechanisms (based on differential privacy) on fair and equitable decision-making. We empirically investigate novel tradeoffs on two real-world decisions made using U.S. Census data (allocation of federal funds and assignment of voting rights benefits) as well as a classic apportionment problem. Our results show that if decisions are made using an $\epsilon$-differentially private version of the data, under strict privacy constraints (smaller $\epsilon$), the noise added to achieve privacy may disproportionately impact some groups over others. We propose novel measures of fairness in the context of randomized differentially private algorithms and identify a range of causes of outcome disparities.
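The core mechanism behind such studies is easy to demonstrate. The sketch below applies the Laplace mechanism to group counts and allocates a budget proportionally to the noisy counts; the group sizes and budget are made up, but the pattern (small groups suffer the largest relative distortion as $\epsilon$ shrinks) mirrors the disparity the abstract describes.

```python
# Laplace mechanism on counting queries (L1 sensitivity 1 -> noise scale
# 1/epsilon gives epsilon-DP), followed by a proportional allocation.
import numpy as np

rng = np.random.default_rng(0)
true_counts = np.array([100_000, 5_000, 800])  # one large, two small groups
budget = 1_000_000.0

def private_allocation(counts, epsilon):
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0, None)  # post-processing preserves DP
    return budget * noisy / noisy.sum()

exact = budget * true_counts / true_counts.sum()
for eps in (1.0, 0.01, 0.001):
    err = np.abs(private_allocation(true_counts, eps) - exact) / exact
    print(f"eps={eps}: relative allocation error per group: {np.round(err, 3)}")
```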