
Cybercasing 2.0: You Get What You Pay For

Added by Michael Tschantz
Publication date: 2018
Language: English





Under U.S. law, marketing databases exist under almost no legal restrictions concerning accuracy, access, or confidentiality. We explore the possible (mis)use of these databases in a criminal context by conducting two experiments. First, we show how this data can be used for cybercasing: resolving the physical addresses of individuals who are likely to be on vacation. Second, we evaluate the utility of a bride-to-be mailing list augmented with data obtained by searching both Facebook and a bridal registry aggregator. We conclude that marketing data is not necessarily harmless and can represent a fruitful target for criminal misuse.




Read More

Mobile applications (hereafter, apps) collect a plethora of information about user behavior and the device through third-party analytics libraries. However, the collection and usage of such data have raised several privacy concerns, mainly because the end-user - i.e., the actual owner of the data - is out of the loop in this collection process. Also, the existing privacy-enhanced solutions that emerged in recent years follow an all-or-nothing approach, leaving the user only the option to accept or completely deny access to privacy-related data. This work has the two-fold objective of assessing the privacy implications of the usage of analytics libraries in mobile apps and proposing a data anonymization methodology that enables a trade-off between the utility and privacy of the collected data and gives the user complete control over the sharing process. To achieve that, we present an empirical privacy assessment of the analytics libraries contained in the 4500 most-used Android apps of the Google Play Store between November 2020 and January 2021. Then, we propose an empowered anonymization methodology, based on MobHide, that gives the end-user complete control over the collection and anonymization process. Finally, we empirically demonstrate the applicability and effectiveness of such an anonymization methodology thanks to HideDroid, a fully-fledged anonymization app for the Android ecosystem.
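The utility-privacy trade-off described above can be illustrated with a minimal, hypothetical sketch (not the paper's MobHide/HideDroid implementation): each analytics field gets a user-chosen level, where 0 shares the value as-is, 1 generalizes it to a coarser value, and 2 suppresses it entirely. The field names and generalizers are illustrative assumptions.

```python
def generalize_event(event, levels):
    """Coarsen each analytics field per a user-chosen level:
    0 = share as-is, 1 = generalize, 2 = suppress.
    (Toy illustration; field names and generalizers are made up.)"""
    out = {}
    for field, value in event.items():
        level = levels.get(field, 0)
        if level == 0:
            out[field] = value
        elif level == 1 and field == "lat":
            out[field] = round(value, 1)        # ~11 km grid, not an exact point
        elif level == 1 and field == "timestamp":
            out[field] = value - value % 3600   # hour resolution only
        elif level == 1:
            out[field] = value                  # no generalizer defined: pass through
        else:
            out[field] = None                   # suppressed
    return out

event = {"lat": 45.0703, "timestamp": 1609462861, "app_screen": "checkout"}
levels = {"lat": 1, "timestamp": 1, "app_screen": 2}
print(generalize_event(event, levels))
# → {'lat': 45.1, 'timestamp': 1609462800, 'app_screen': None}
```

Raising a field's level reduces what analytics libraries learn while keeping the event stream usable, which is the kind of per-field control the work argues the end-user should have.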
Prior work on personalized recommendations has focused on exploiting explicit signals from user-specific queries, clicks, likes, and ratings. This paper investigates tapping into a different source of implicit signals of interests and tastes: online chats between users. The paper develops an expressive model and effective methods for personalizing search-based entity recommendations. User models derived from chats augment different methods for re-ranking entity answers for medium-grained queries. The paper presents specific techniques to enhance the user models by capturing domain-specific vocabularies and by entity-based expansion. Experiments are based on a collection of online chats from a controlled user study covering three domains: books, travel, and food. We evaluate different configurations and compare chat-based user models against concise user profiles from questionnaires. Overall, these two variants perform on par in terms of NDCG@20, but each has advantages in certain domains.
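For reference, NDCG@20 (the evaluation metric the abstract reports) scores a ranking by discounted cumulative gain over the top 20 results, normalized by the ideal ordering. A minimal sketch, with made-up graded relevance values:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: relevant items ranked higher count more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k=20):
    # Normalize the top-k DCG by the DCG of the ideal (sorted) ordering.
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded relevance of the top entities returned for one query.
ranking = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(ranking, k=20), 3))  # → 0.961
```

A score of 1.0 means the system returned the entities in the ideal relevance order; "on par in terms of NDCG@20" means the two user-model variants achieve similar such scores averaged over queries.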
We introduce Tanbih, a news aggregator with intelligent analysis tools to help readers understand what's behind a news story. Our system displays news grouped into events and generates media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, and stance with respect to various claims and topics of a news outlet. In addition, we automatically analyse each article to detect whether it is propagandistic and to determine its stance with respect to a number of controversial topics.
GW190412 is the first observation of a black hole binary with definitively unequal masses. GW190412's mass asymmetry, along with the measured positive effective inspiral spin, allowed for inference of a component black hole spin: the primary black hole in the system was found to have a dimensionless spin magnitude between 0.17 and 0.59 (90% credible range). We investigate how the choice of priors for the spin magnitudes and tilts of the component black holes affects the robustness of parameter estimates for GW190412, and report Bayes factors across a suite of prior assumptions. Depending on the waveform family used to describe the signal, we find either marginal to moderate (2:1-6:1) or strong (≳ 20:1) support for the primary black hole being spinning compared to cases where only the secondary is allowed to have spin. We show how these choices influence parameter estimates, and find the asymmetric masses and positive effective inspiral spin of GW190412 to be qualitatively, but not quantitatively, robust to prior assumptions. Our results highlight the importance of considering astrophysically motivated or population-based priors when interpreting observations, and of assessing their relative support from the data.
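The Bayes factors quoted above compare the evidence for one prior hypothesis against another; in practice these are computed from log-evidences for numerical stability. A minimal sketch (the log-evidence values below are purely illustrative, not from the analysis):

```python
import math

def bayes_factor(log_evidence_a, log_evidence_b):
    # Bayes factor B_AB = Z_A / Z_B, evaluated from log-evidences so the
    # (typically huge in magnitude) evidences never overflow or underflow.
    return math.exp(log_evidence_a - log_evidence_b)

# Hypothetical log-evidences: "primary spins" vs "only secondary spins".
log_z_primary = -1000.0
log_z_secondary = -1003.0
b = bayes_factor(log_z_primary, log_z_secondary)
print(round(b, 1))  # → 20.1
```

A log-evidence difference of 3 already yields a Bayes factor of about 20:1, the threshold the abstract labels "strong" support.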
Model extraction increasingly attracts research attention, as keeping commercial AI models private can retain a competitive advantage. In some scenarios, AI models are trained proprietarily, where neither pre-trained models nor sufficient in-distribution data is publicly available. Model extraction attacks against these models are typically more devastating. Therefore, in this paper, we empirically investigate the behaviors of model extraction under such scenarios. We find the effectiveness of existing techniques is significantly affected by the absence of pre-trained models. In addition, the impacts of the attacker's hyperparameters, e.g. model architecture and optimizer, as well as the utility of information retrieved from queries, are counterintuitive. We provide some insights to explain the possible causes of these phenomena. With these observations, we formulate model extraction attacks within an adaptive framework that captures these factors with deep reinforcement learning. Experiments show that the proposed framework can be used to improve existing techniques, and that model extraction is still possible in such strict scenarios. Our research can help system designers construct better defense strategies based on their scenarios.
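The core query-based extraction loop underlying such attacks can be sketched in miniature (a toy 1-D illustration, not the paper's reinforcement-learning framework): the attacker labels a budget of queries with the victim's outputs, then fits a substitute that mimics its decision boundary.

```python
def extract_model(victim, queries):
    # Label each query with the victim's output, then fit the simplest
    # possible substitute: a 1-D threshold classifier.
    labeled = [(x, victim(x)) for x in queries]
    # Place the boundary at the largest input the victim labeled 0.
    boundary = max((x for x, y in labeled if y == 0), default=min(queries))
    return lambda x: int(x > boundary)

victim = lambda x: int(x > 0.37)           # private decision boundary
queries = [i / 100 for i in range(101)]    # attacker's query budget
substitute = extract_model(victim, queries)
print(substitute(0.5), substitute(0.2))    # → 1 0
```

Real attacks replace the threshold learner with a neural network and must choose architecture, optimizer, and query strategy, which are exactly the hyperparameters the paper finds to behave counterintuitively without pre-trained models.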
