
Are Privacy Dashboards Good for End Users? Evaluating User Perceptions and Reactions to Google's My Activity (Extended Version)

Posted by: Florian M. Farke
Publication date: 2021
Research field: Informatics Engineering
Language: English
Author: Florian M. Farke





Privacy dashboards and transparency tools help users review and manage the data collected about them online. Since 2016, Google has offered such a tool, My Activity, which allows users to review and delete their activity data from Google services. We conducted an online survey with $n = 153$ participants to understand whether Google's My Activity, as an example of a privacy transparency tool, increases or decreases end users' concerns and perceived benefits regarding data collection. While most participants were aware of Google's data collection, the volume and detail of it surprised them. After exposure to My Activity, participants were significantly more likely to be less concerned about data collection and to view data collection as more beneficial. Only $25\%$ indicated that they would change any settings in the My Activity service or change any behaviors. This suggests that privacy transparency tools are quite beneficial for online services: they garner trust with users and improve perceptions without necessarily changing users' behaviors. At the same time, it remains unclear whether such transparency tools actually improve end-user privacy by sufficiently assisting or motivating users to review or change data collection settings.


Read also

Many celebrate the Internet's ability to connect individuals and facilitate collective action toward a common goal. While numerous systems have been designed to support particular aspects of collective action, few systems support participatory, end-to-end collective action in which a crowd or community identifies opportunities, formulates goals, brainstorms ideas, develops plans, mobilizes, and takes action. To explore the possibilities and barriers in supporting such interactions, we have developed WeDo, a system aimed at promoting simple forms of participatory, end-to-end collective action. Pilot deployments of WeDo illustrate that sociotechnical systems can support automated transitions through different phases of end-to-end collective action, but that challenges, such as the elicitation of leadership and the accommodation of existing group norms, remain.
The introduction of robots into our society will also introduce new concerns about personal privacy. In order to study these concerns, we must do human-subject experiments that involve measuring privacy-relevant constructs. This paper presents a taxonomy of privacy constructs based on a review of the privacy literature. Future work in operationalizing privacy constructs for HRI studies is also discussed.
Risk-limiting audits (RLAs) are expected to strengthen the public confidence in the correctness of an election outcome. We hypothesize that this is not always the case, in part because for large margins between the winner and the runner-up, the number of ballots to be drawn can be so small that voters lose confidence. We conduct a user study with 105 participants resident in the US. Our findings confirm the hypothesis, showing that our study participants felt less confident when they were told the number of ballots audited for RLAs. We elaborate on our findings and propose recommendations for future use of RLAs.
In this work, we present an approach for unsupervised domain adaptation (DA) under the constraint that the labeled source data are not directly available, and instead only access to a classifier trained on the source data is provided. Our solution iteratively labels only high-confidence sub-regions of the target data distribution, based on the belief of the classifier, and then iteratively learns new classifiers from the expanding high-confidence dataset. The goal is to apply the proposed approach to DA for the task of sleep apnea detection and achieve personalization based on the needs of the patient. In a series of experiments with both open and closed sleep monitoring datasets, the proposed approach is applied to data from different sensors, for DA between the different datasets. In all experiments, the proposed approach outperforms the classifier trained in the source domain, with an improvement of the kappa coefficient ranging from 0.012 to 0.242. Additionally, our solution is applied to digit classification DA between three well-established digit datasets, to investigate the generalizability of the approach and to allow for comparison with related work. Even without direct access to the source data, it achieves good results and outperforms several well-established unsupervised DA methods.
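
The iterative scheme described in this abstract, selecting high-confidence target samples with the current classifier and retraining on the growing pseudo-labeled set, can be sketched in a few lines. The following Python sketch assumes scikit-learn-style classifiers; the function name, confidence threshold, and stopping rule are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.base import clone

def iterative_pseudo_label_da(source_clf, new_clf, target_X,
                              confidence=0.9, rounds=5):
    """Source-free adaptation: only a classifier trained on the (unavailable)
    source data and unlabeled target samples are used."""
    clf = source_clf          # start from the source-trained classifier
    n_labeled = 0             # size of the current high-confidence subset

    for _ in range(rounds):
        # Keep only target samples the current classifier is confident about.
        proba = clf.predict_proba(target_X)
        conf_idx = np.where(proba.max(axis=1) >= confidence)[0]
        if conf_idx.size <= n_labeled:
            break             # the high-confidence region stopped expanding
        n_labeled = conf_idx.size

        # Pseudo-label that subset and learn a fresh classifier from it.
        pseudo_y = clf.classes_[proba[conf_idx].argmax(axis=1)]
        clf = clone(new_clf)
        clf.fit(target_X[conf_idx], pseudo_y)

    return clf

Each round grows the trusted subset, so the target-domain classifier is refined without ever touching the source data, mirroring the personalization setting described in the abstract.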
News recommendation and personalization is not a solved problem. People are growing concerned about their data being collected in excess in the name of personalization and being used for purposes other than those they would consider reasonable. Our experience in building personalization products for publishers while safeguarding user privacy led us to investigate the user perspective on privacy and personalization further. We conducted a survey to explore people's experience with personalization and privacy and the viewpoints of different age groups. In this paper, we share our major findings with publishers and the community, which can inform the algorithmic design and implementation of the next generation of news recommender systems, which must put the human at its core and strike a balance between personalization experiences and privacy to reap the benefits of both.