
Correspondences between Privacy and Nondiscrimination: Why They Should Be Studied Together

Added by Michael Tschantz
Publication date: 2018
Language: English





Privacy and nondiscrimination are related but different. We make this observation precise in two ways. First, we show that both privacy and nondiscrimination have t...

Related research

Marten Lohstroh, 2017
Data security, which is concerned with the prevention of unauthorized access to computers, databases, and websites, helps protect digital privacy and ensure data integrity. It is extremely difficult, however, to make security watertight, and security breaches are not uncommon. The consequences of stolen credentials go well beyond the leakage of other types of information because they can further compromise other systems. This paper criticizes the practice of using clear-text identity attributes, such as Social Security or driver's license numbers -- which are in principle not even secret -- as acceptable authentication tokens or assertions of ownership, and proposes a simple protocol that straightforwardly applies public-key cryptography to make identity claims verifiable, even when they are issued remotely via the Internet. This protocol has the potential to elevate the business practices of credit providers, rental agencies, and other service companies that have hitherto exposed consumers to the risk of identity theft to a point where identity theft becomes virtually impossible.
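
The abstract does not spell out the protocol, but the core idea of making an identity claim verifiable with public-key cryptography can be sketched as follows. This is a minimal illustration, not the paper's protocol: the Ed25519 key type, the Python `cryptography` library, and the claim format are all assumptions.

```python
# Minimal sketch: a verifiable identity claim via digital signatures.
# Assumptions: Ed25519 keys and the Python `cryptography` library; the
# paper's actual protocol may differ in format and key management.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The consumer holds a key pair; the public key is registered with the
# service provider instead of a clear-text identifier.
consumer_key = ed25519.Ed25519PrivateKey.generate()
consumer_pub = consumer_key.public_key()

# An identity claim is signed, so knowledge of clear-text attributes such
# as a Social Security number alone proves nothing.
claim = b"account=X; request=open-credit-line; nonce=5f2c"
signature = consumer_key.sign(claim)

# The service provider verifies the claim against the registered public key.
try:
    consumer_pub.verify(signature, claim)
    print("claim verified: issued by the key holder")
except InvalidSignature:
    print("claim rejected: signature does not match")
```
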
The huge computation demand of deep learning models and the limited computation resources on edge devices call for cooperation between the edge device and the cloud service, achieved by splitting deep models into two halves. However, transferring the intermediate results of the partial models between the edge device and the cloud service makes user privacy vulnerable, since an attacker can intercept the intermediate results and extract private information from them. Existing research relies on metrics that are either impractical or insufficient to measure the effectiveness of privacy protection methods in this scenario, especially from the perspective of a single user. In this paper, we first present a formal definition of the privacy protection problem in an edge-cloud system running DNN models. Then, we analyze state-of-the-art methods and point out their drawbacks, especially in evaluation metrics such as Mutual Information (MI). In addition, we perform several experiments to demonstrate that although existing methods perform well under MI, they are not effective enough to protect the privacy of a single user. To address the drawbacks of the evaluation metrics, we propose two new metrics that more accurately measure the effectiveness of privacy protection methods. Finally, we highlight several potential research directions to encourage future efforts addressing the privacy protection problem.
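
As a rough illustration of the edge-cloud split described above (not the paper's specific setup), a model can be cut at a chosen layer so that only the intermediate activations cross the network. The toy architecture, the split point, and the use of PyTorch below are assumptions.

```python
# Sketch of splitting a DNN between an edge device and a cloud service.
# Assumptions: PyTorch, a toy Sequential model, and an arbitrary split point;
# real deployments choose the split to balance compute and privacy.
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)

split = 4                        # layers [0, split) run on the edge device
edge_half = full_model[:split]   # runs locally on the device
cloud_half = full_model[split:]  # runs remotely in the cloud

x = torch.randn(1, 3, 32, 32)    # user input never leaves the device
intermediate = edge_half(x)      # this tensor is what gets transmitted
# An eavesdropper who intercepts `intermediate` may still reconstruct
# private information from it -- the threat the paper studies.
prediction = cloud_half(intermediate)
print(prediction.shape)          # torch.Size([1, 10])
```
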
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted to date. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of the relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers who make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
Intuitively, obedience -- following the order that a human gives -- seems like a good property for a robot to have. But we humans are not perfect, and we may give orders that are not well aligned with our preferences. We show that when a human is not perfectly rational, a robot that tries to infer and act according to the human's underlying preferences can always perform better than a robot that simply follows the human's literal order. Thus, there is a tradeoff between the obedience of a robot and the value it can attain for its owner. We investigate how this tradeoff is impacted by the way the robot infers the human's preferences, showing that some methods err more on the side of obedience than others. We then analyze how performance degrades when the robot has a misspecified model of the features that the human cares about or of the human's level of rationality. Finally, we study how robots can start detecting such model misspecification. Overall, our work suggests that there might be a middle ground in which robots intelligently decide when to obey human orders, but err on the side of obedience.
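
The obedience tradeoff can be made concrete with a toy numerical example. This is an illustrative sketch under a Boltzmann-rationality assumption, not the authors' model: the utilities, the rationality parameter beta, and the two-action setting are all made up for illustration.

```python
# Toy sketch of the obedience tradeoff with a noisy (Boltzmann-rational)
# human. Utilities, beta, and the two-action setting are illustrative
# assumptions, not the paper's experimental setup.
import numpy as np

utilities = np.array([1.0, 0.0])   # action 0 is truly better for the human
beta = 1.0                         # lower beta = less rational human

# Probability the human orders each action (softmax over utilities).
order_probs = np.exp(beta * utilities)
order_probs /= order_probs.sum()

# Obedient robot: expected value of doing whatever is ordered.
value_obedient = float(order_probs @ utilities)

# Inferring robot: models the human as noisy and, with a correct model of
# the preferences, always picks the highest-utility action.
value_inferring = float(utilities.max())

print(f"obedient robot:  {value_obedient:.3f}")
print(f"inferring robot: {value_inferring:.3f}")
# With an imperfect human (finite beta), the inferring robot does at least
# as well; the gap is the cost of literal obedience.
```
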
Adam J. Aviv, Ravi Kuber, 2018
In this study, we examine the ways in which user attitudes towards privacy and security relating to mobile devices and the data stored thereon may impact the strength of unlock authentication, focusing on Android's graphical unlock patterns. We conducted an online study with Amazon Mechanical Turk ($N=750$) using self-reported unlock authentication choices, as well as Likert scale agreement/disagreement responses to a set of seven privacy/security prompts. We then analyzed the responses in multiple dimensions, including a straight average of the Likert responses as well as Principal Component Analysis to expose latent factors. We found that responses to two of the seven questions proved relevant and significant. These two questions considered attitudes towards general concern for data stored on mobile devices, and attitudes towards concerns about unauthorized access by known actors. Unfortunately, larger conclusions cannot be drawn on the efficacy of the broader set of questions for exposing connections between unlock authentication strength and privacy/security attitudes (Pearson rank correlation $r=-0.08$, $p<0.1$). However, both of our factor solutions exposed differences in responses across demographic groups, including age, gender, and residence type. The findings of this study suggest that there is likely a link between perceptions of privacy/security on mobile devices and the perceived threats therein, but more research is needed, particularly on developing better survey and measurement techniques for privacy/security attitudes that relate to mobile devices specifically.
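
For readers unfamiliar with the factor-analysis step, the sketch below shows one way latent factors might be exposed from Likert-scale responses with Principal Component Analysis and then correlated with an authentication-strength measure. The synthetic data and the scikit-learn/scipy pipeline are assumptions, not the study's actual analysis.

```python
# Sketch: exposing latent factors in Likert-scale survey responses with PCA.
# Random data stands in for the study's 7 privacy/security prompts; the
# library choice and preprocessing are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(750, 7))   # 750 respondents, 7 prompts
strength = rng.normal(size=750)              # stand-in for unlock-pattern strength

# Standardize, then project onto principal components (candidate latent factors).
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(likert))

# Correlate each factor with unlock-pattern strength, analogous to the
# study's correlation of averaged and factored Likert responses.
for i in range(scores.shape[1]):
    r, p = pearsonr(scores[:, i], strength)
    print(f"factor {i}: r={r:+.3f}, p={p:.3f}")
```
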