Privacy dashboards and transparency tools help users review and manage the data collected about them online. Since 2016, Google has offered such a tool, My Activity, which allows users to review and delete their activity data from Google services. We conducted an online survey with $n = 153$ participants to understand whether Google's My Activity, as an example of a privacy transparency tool, increases or decreases end users' concerns about, and perceived benefits of, data collection. While most participants were aware of Google's data collection, the volume and detail of the collected data surprised them. Nevertheless, after exposure to My Activity, participants were significantly more likely both to be less concerned about data collection and to view it as more beneficial. Only 25% indicated that they would change any settings in the My Activity service or change any behaviors. This suggests that privacy transparency tools are quite beneficial for online services: they garner trust with their users and improve their perceptions without necessarily changing users' behaviors. At the same time, it remains unclear whether such transparency tools actually improve end-user privacy by sufficiently assisting or motivating users to review or change data collection settings.
Many celebrate the Internet's ability to connect individuals and facilitate collective action toward a common goal. While numerous systems have been designed to support particular aspects of collective action, few systems support participatory, end-to
The introduction of robots into our society will also introduce new concerns about personal privacy. To study these concerns, we must conduct human-subject experiments that involve measuring privacy-relevant constructs. This paper presents a taxo
Risk-limiting audits (RLAs) are expected to strengthen public confidence in the correctness of an election outcome. We hypothesize that this is not always the case, in part because, for large margins between the winner and the runner-up, the numbe
In this work, we present an approach for unsupervised domain adaptation (DA) under the constraint that the labeled source data are not directly available, and instead only access to a classifier trained on the source data is provided. Our solution, i
News recommendation and personalization is not a solved problem. People are growing concerned about their data being collected in excess in the name of personalization, and about its use for purposes other than the ones they would consider reasonable. Our