The contextual information (i.e., the time and location) in which a photo is taken can be easily tampered with or falsely claimed by forgers to achieve malicious purposes, e.g., creating fear among the general public. A rich body of work has focused on detecting photo tampering and manipulation by verifying the integrity of image content. Instead, we aim to detect photo misuse by verifying the capture time and location of photos. This paper is motivated by the law of nature that the sun's position varies with time and location, which can be used to determine whether the claimed contextual information is consistent with the sun position that the image content actually indicates. Prior approaches to inferring sun position from images mainly rely on vanishing points associated with at least two shadows, whereas we propose novel algorithms that infer the sun position from only one shadow in the image. In parallel, we compute the sun position by applying astronomical algorithms that take the claimed capture time and location as input. Only when the two estimated sun positions are consistent can the claimed contextual information be genuine. We have developed a prototype called IMAGEGUARD. The experimental results show that our method can successfully estimate sun position and detect time-location inconsistencies with high accuracy. With thresholds of 9.4 degrees for the sun position distance and 5 degrees for the altitude angle distance, our system correctly identifies 91.5% of photos with falsified contextual information.
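To make the consistency check concrete, the following is a minimal Python sketch of the metadata side of the comparison: it approximates the sun's altitude and azimuth from a claimed time and location using simplified textbook formulas (ignoring the equation of time and atmospheric refraction) and then applies the 9.4-degree and 5-degree thresholds mentioned above. The shadow-based estimate from the image and the authors' exact astronomical algorithm are not reproduced here; all function names and the simplifications are illustrative assumptions.

```python
import math

def sun_position(lat_deg, lon_deg, day_of_year, utc_hour):
    """Approximate solar altitude/azimuth (degrees) for a UTC time and location.

    Uses a simplified declination formula and ignores the equation of time;
    the paper relies on full astronomical algorithms, which are more precise.
    """
    lat = math.radians(lat_deg)
    # Approximate solar declination for the given day of year.
    decl = math.radians(-23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Local solar time and hour angle (15 degrees per hour from solar noon).
    solar_time = utc_hour + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))

    sin_alt = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(hour_angle)
    alt = math.asin(sin_alt)

    cos_az = (math.sin(decl) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:                      # afternoon: the sun lies west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

def angular_distance(alt1, az1, alt2, az2):
    """Great-circle angle (degrees) between two sun directions."""
    a1, a2 = math.radians(alt1), math.radians(alt2)
    d_az = math.radians(az1 - az2)
    cos_d = math.sin(a1) * math.sin(a2) + math.cos(a1) * math.cos(a2) * math.cos(d_az)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def is_consistent(shadow_est, claimed_est, pos_thresh=9.4, alt_thresh=5.0):
    """Accept the claimed time/location only if both distances fall below the thresholds."""
    alt_s, az_s = shadow_est
    alt_c, az_c = claimed_est
    return (angular_distance(alt_s, az_s, alt_c, az_c) <= pos_thresh
            and abs(alt_s - alt_c) <= alt_thresh)
```

Here `shadow_est` stands in for the altitude/azimuth recovered from the single shadow in the image, and `claimed_est` for the astronomical prediction from the claimed metadata.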
One of the main tasks of cybersecurity is recognizing malicious interactions with an arbitrary system. Currently, logging information from each interaction can be collected in almost unrestricted amounts, but identifying attacks requires considerable effort and time from security experts. We propose an approach for identifying fraudulent activity by modeling normal behavior in interactions with a system via machine learning methods, in particular LSTM neural networks. In order to enrich the modeling with system-specific knowledge, we propose to use an interactive visual interface that allows security experts to identify semantically meaningful clusters of interactions. These clusters incorporate domain knowledge and lead to more precise behavior modeling via informed machine learning. We evaluate the proposed approach on a dataset containing logs of interactions with the administrative interface of a login and security server. Our empirical results indicate that the informed modeling is capable of capturing normal behavior, which can then be used to detect abnormal behavior.
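As a rough illustration of the modeling step, the sketch below defines an LSTM next-action model over tokenized interaction logs and scores a sequence by its average cross-entropy, so that interactions poorly explained by the model of "normal" behavior receive high anomaly scores. The architecture, hyperparameters, and scoring rule are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class InteractionLSTM(nn.Module):
    """Next-action language model over tokenized interaction logs (a sketch)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)                    # (batch, seq_len, vocab)

def anomaly_score(model, tokens):
    """Average next-token cross-entropy; high values indicate behavior the
    model trained on 'normal' interactions did not expect."""
    with torch.no_grad():
        logits = model(tokens[:, :-1])
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            tokens[:, 1:].reshape(-1),
            reduction="mean",
        )
    return loss.item()

# Training sketch: minimize the same cross-entropy on sequences drawn only
# from the analyst-curated "normal" clusters, then flag test sequences whose
# anomaly_score exceeds a threshold chosen on validation data.
```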
Boot firmware, such as UEFI-compliant firmware, has been the target of numerous attacks that give the attacker control over the entire system while remaining undetected. The measured boot mechanism of a computer platform ensures its integrity by using cryptographic measurements to detect such attacks. This is typically performed by relying on a Trusted Platform Module (TPM). Recent work, however, shows that vendors do not respect the specifications that have been devised to ensure the integrity of the firmware loading process. As a result, attackers may bypass such measurement mechanisms and successfully load a modified firmware image while remaining unnoticed. In this paper, we introduce BootKeeper, a static analysis approach that verifies a set of key security properties of boot firmware images before deployment to ensure the integrity of the measured boot process. We evaluate BootKeeper against several attacks on common boot firmware implementations and demonstrate its applicability.
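For context, the measurement chain that BootKeeper aims to safeguard can be sketched as follows: each boot stage hashes the next component and folds that hash into a TPM PCR before passing control to it. This is only an illustration of measured boot in general, not of BootKeeper's static analysis itself; the helper names are hypothetical.

```python
import hashlib

def extend_pcr(pcr: bytes, component_image: bytes) -> bytes:
    """TPM-style PCR extend: new_PCR = SHA-256(old_PCR || SHA-256(component)).

    Because each stage measures the next component before running it,
    a tampered firmware image changes every subsequent PCR value.
    """
    measurement = hashlib.sha256(component_image).digest()
    return hashlib.sha256(pcr + measurement).digest()

def measured_boot(components):
    """Fold a boot chain (list of firmware/bootloader images) into one PCR value."""
    pcr = bytes(32)                      # PCRs start at all zeros
    for image in components:
        pcr = extend_pcr(pcr, image)
    return pcr

# Attestation compares the final PCR against a known-good ("golden") value.
# BootKeeper's goal is to check, statically and before deployment, that the
# firmware code actually performs these measurements as specified.
```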
The correct use of cryptography is central to ensuring data security in modern software systems. Hence, several academic and commercial static analysis tools have been developed for detecting and mitigating crypto-API misuse. While developers are optimistically adopting these crypto-API misuse detectors (or crypto-detectors) in their software development cycles, this momentum must be accompanied by a rigorous understanding of their effectiveness at finding crypto-API misuse in practice. This paper presents the MASC framework, which enables a systematic and data-driven evaluation of crypto-detectors using mutation testing. We ground MASC in a comprehensive view of the problem space by developing a data-driven taxonomy of existing crypto-API misuse, containing 105 misuse cases organized among nine semantic clusters. We develop 12 generalizable usage-based mutation operators and three mutation scopes that can expressively instantiate thousands of compilable variants of the misuse cases for thoroughly evaluating crypto-detectors. Using MASC, we evaluate nine major crypto-detectors and discover 19 unique, undocumented flaws that severely impact the ability of crypto-detectors to discover misuses in practice. We conclude with a discussion on the diverse perspectives that influence the design of crypto-detectors and future directions towards building security-focused crypto-detectors by design.
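The flavor of a usage-based mutation operator can be conveyed with a small sketch that keeps a weak-cipher misuse semantically intact while varying its textual form, yielding compilable Java call sites that a syntax-bound detector might miss. The operator and templates below are illustrative assumptions, not MASC's actual operators.

```python
# Illustrative "string transformation" mutation operator: the misuse (a weak
# cipher) is preserved, but its textual form is varied so that brittle,
# pattern-matching detectors may fail to flag it.

def mutate_algorithm_string(algorithm: str):
    """Yield Java expressions that all evaluate to the same insecure algorithm name."""
    yield f'"{algorithm}"'                                                # plain literal
    yield f'"{algorithm[:1]}" + "{algorithm[1:]}"'                        # concatenation
    yield f'new String("{algorithm}")'                                    # constructor wrapping
    yield f'"{algorithm.lower()}".toUpperCase()'                          # case round-trip
    yield f'new StringBuilder("{algorithm[::-1]}").reverse().toString()'  # reversal

def instantiate_mutants(algorithm="DES"):
    """Embed each mutated expression in a compilable call site."""
    return [f'Cipher cipher = Cipher.getInstance({expr});'
            for expr in mutate_algorithm_string(algorithm)]

for mutant in instantiate_mutants():
    print(mutant)
```

Each printed line is a variant of the same misuse case; MASC's mutation scopes additionally control where such variants are injected into a target application.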
In recent decades, criminals have increasingly used the web to research, assist and perpetrate criminal behaviour. One of the most important ways in which law enforcement can battle this growing trend is through accessing pertinent information about suspects in a timely manner. A significant hindrance to this is the difficulty of accessing any system a suspect uses that requires authentication via password. Password guessing techniques generally consider common user behaviour while generating their passwords, as well as the password policy in place. Such techniques can offer a modest success rate over a large, average population. However, they tend to fail when focusing on a single target, especially when the latter is an educated user taking precautions, as a savvy criminal would be expected to do. Open Source Intelligence is increasingly being leveraged by law enforcement to gain useful information about a suspect, but very little is currently being done to integrate this knowledge into password cracking in an automated way. The purpose of this research is to delve into the techniques that enable the gathering of the necessary context about a suspect and to find ways to leverage this information within password guessing techniques.
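One way such context could feed into guessing is sketched below: OSINT-derived tokens about a suspect are expanded with simple mangling rules and suffixes into an ordered candidate list, optionally filtered by the known password policy. The profile fields, rules, and suffixes are hypothetical examples for illustration, not a method proposed in this work.

```python
from itertools import product

# Simple "leet" substitutions commonly applied by users.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"})

def mangle(word):
    """Common per-word transformations seen in human-chosen passwords."""
    return {word, word.lower(), word.capitalize(), word.upper(), word.translate(LEET)}

def osint_candidates(profile, suffixes=("", "!", "123", "2017")):
    """Combine OSINT-derived tokens (names, dates, interests, ...) with
    simple mangling rules and suffixes into a sorted candidate list."""
    tokens = set()
    for value in profile.values():
        tokens.update(mangle(str(value)))
    candidates = {base + suffix for base, suffix in product(tokens, suffixes)}
    # A password-policy filter (here: minimum length of 8) can be applied.
    return sorted(c for c in candidates if len(c) >= 8)

# Hypothetical suspect profile assembled from open sources.
profile = {"pet": "rex", "team": "arsenal", "birth_year": 1984, "partner": "maria"}
print(osint_candidates(profile)[:10])
```

In practice, such OSINT-informed candidates would be prioritized ahead of, or blended with, generic dictionary and rule-based guesses.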
Recently, coordinated attack campaigns have become more widespread on the Internet. In May 2017, WannaCry infected more than 300,000 machines in 150 countries within a few days and had a large impact on critical infrastructure. Existing threat sharing platforms cannot easily adapt to emerging attack patterns. At the same time, enterprises have started to adopt machine-learning-based threat detection tools in their local networks. In this paper, we pose the question: What information can defenders share across multiple networks to help machine-learning-based threat detection adapt to new coordinated attacks? We propose three information sharing methods across two networks and show how the shared information can be used in a machine-learning network-traffic model to significantly improve its ability to detect evasive, self-propagating malware.
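As a rough sketch of the general pattern (not of the three specific sharing methods proposed in the paper), indicators shared by a peer network can be folded into the feature vector of a local traffic classifier. The feature set, field names, and choice of classifier below are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(flow, shared_indicators):
    """Local flow features plus features derived from another network's shared
    observations (here: whether the destination IP or port was flagged remotely)."""
    local = [flow["bytes"], flow["packets"], flow["duration"], flow["dst_port"]]
    shared = [
        1.0 if flow["dst_ip"] in shared_indicators["flagged_ips"] else 0.0,
        1.0 if flow["dst_port"] in shared_indicators["flagged_ports"] else 0.0,
    ]
    return np.array(local + shared, dtype=float)

def train_detector(flows, labels, shared_indicators):
    """Train a local detector on network A's flows, augmented with indicator
    features contributed by network B; labels mark malicious traffic."""
    X = np.stack([featurize(f, shared_indicators) for f in flows])
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, labels)
    return model
```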