Adversarial attacks on machine learning models have become a highly studied topic in both academia and industry. Together with traditional security threats, these attacks can compromise the confidentiality, integrity, and availability of an organization's assets that depend on machine learning models. While it is difficult to predict what new attacks might be developed over time, it is possible to evaluate the risks connected with using machine learning models and to design measures that help minimize those risks. In this paper, we outline a novel framework to guide the risk management process for organizations reliant on machine learning models. First, we define sets of evaluation factors (EFs) in the data domain, the model domain, and the security controls domain. We then develop a method that takes asset and task importance as input, sets the weights of each EF's contribution to confidentiality, integrity, and availability, and, based on the implementation scores of the EFs, determines the overall security state of the organization. From this information, it is possible to identify weak links in the implemented security measures and to find out which measures are missing entirely. We believe our framework can help address the security issues related to the use of machine learning models in organizations and guide them toward the security measures adequate to protect their assets.
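A minimal sketch of how such EF-based scoring could be realized is given below; the EF names, weights, and scores are hypothetical illustrations, not values prescribed by the framework.

```python
# Hypothetical sketch of the EF-based scoring idea: each evaluation factor (EF)
# has an implementation score in [0, 1] and weights describing its contribution
# to confidentiality (C), integrity (I), and availability (A).

# Example EFs across the data, model, and security-controls domains (illustrative only).
efs = {
    "data_access_control":    {"score": 0.8, "weights": {"C": 0.6, "I": 0.3, "A": 0.1}},
    "model_input_validation": {"score": 0.5, "weights": {"C": 0.1, "I": 0.7, "A": 0.2}},
    "inference_rate_limiting": {"score": 0.9, "weights": {"C": 0.2, "I": 0.1, "A": 0.7}},
}

# Asset/task importance scales the result (e.g., a business-critical model gets 0.9).
asset_importance = 0.9

def security_state(efs, importance):
    """Weighted average of EF scores per CIA dimension, scaled by importance."""
    state = {}
    for dim in ("C", "I", "A"):
        total_w = sum(ef["weights"][dim] for ef in efs.values())
        weighted = sum(ef["score"] * ef["weights"][dim] for ef in efs.values())
        state[dim] = importance * weighted / total_w
    return state

print(security_state(efs, asset_importance))
# A low score in one dimension points to the weak links mentioned above.
```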
Security is considered one of the top-ranked risks of Cloud Computing (CC) due to the outsourcing of sensitive data to a third party. In addition, the complexity of the cloud model results in a large number of heterogeneous security controls that must be managed consistently. Hence, no matter how strongly the cloud model is secured, organizations continue to suffer from a lack of trust in CC and remain uncertain about the consequences of its security risks. Traditional risk management frameworks do not consider the impact of CC security risks on the business objectives of an organization. In this paper, we propose a novel Cloud Security Risk Management Framework (CSRMF) that helps organizations adopting CC to identify, analyze, evaluate, and mitigate security risks in their cloud platforms. Unlike traditional risk management frameworks, CSRMF is driven by the business objectives of the organization. It allows any organization adopting CC to be aware of cloud security risks and to align low-level management decisions with high-level business objectives. In essence, it is designed to assess the impact of cloud-specific security risks on the business objectives of a given organization. Consequently, organizations are able to conduct a cost-value analysis of adopting CC technology and to gain an adequate level of confidence in cloud technology. Cloud Service Providers (CSPs), in turn, are able to improve productivity and profitability by managing cloud-related risks. The proposed framework has been validated and evaluated through a use-case scenario.
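The abstract does not detail CSRMF's internal computations; purely as an illustration of business-objective-driven risk evaluation, one could weight each cloud risk by the business objectives it threatens (all names and numbers below are hypothetical):

```python
# Hypothetical illustration of business-objective-driven risk evaluation:
# each cloud security risk is scored as likelihood x impact, where impact is
# derived from the business objectives the risk threatens.

business_objectives = {"customer_trust": 0.9, "service_uptime": 0.8, "cost_reduction": 0.4}

risks = [
    {"name": "tenant_data_leakage", "likelihood": 0.3,
     "affects": {"customer_trust": 1.0, "cost_reduction": 0.2}},
    {"name": "cloud_outage", "likelihood": 0.2,
     "affects": {"service_uptime": 1.0, "customer_trust": 0.5}},
]

def risk_score(risk):
    # Impact = sum of (objective importance x degree to which the risk affects it).
    impact = sum(business_objectives[o] * d for o, d in risk["affects"].items())
    return risk["likelihood"] * impact

# Rank risks so low-level mitigation decisions follow high-level objectives.
for r in sorted(risks, key=risk_score, reverse=True):
    print(f"{r['name']}: {risk_score(r):.2f}")
```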
Cyber-Physical Systems (CPS) are characterized by their ability to integrate the physical world with the information, or cyber, world. Their deployment in critical infrastructure has demonstrated a potential to transform the world. However, harnessing this potential is limited by their critical nature and the far-reaching effects of cyber attacks on humans, infrastructure, and the environment. Cyber concerns in CPS arise largely from the transmission of information from sensors to actuators over wireless communication media, which widens the attack surface. Traditionally, CPS security has been investigated from the perspective of preventing intruders from gaining access to the system using cryptography and other access control techniques, and most research has therefore focused on the detection of attacks in CPS. However, in a world of increasing adversaries, it is becoming more difficult to fully protect CPS against adversarial attacks; hence the need to focus on making CPS resilient. Resilient CPS are designed to withstand disruptions and remain functional despite the operation of adversaries. One of the dominant methodologies explored for building resilient CPS depends on machine learning (ML) algorithms. However, drawing on recent research in adversarial ML, we posit that ML algorithms for securing CPS must themselves be resilient. This paper therefore comprehensively surveys the interactions between resilient CPS that use ML and resilient ML as applied in CPS. The paper concludes with a number of research trends and promising future research directions. With this paper, readers can gain a thorough understanding of recent advances in ML-based security for CPS and in securing ML itself, of the corresponding countermeasures, and of the trends in this active research area.
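To make the adversarial-ML threat concrete, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy model; the model, data, and perturbation budget are illustrative assumptions, not taken from any particular CPS:

```python
import torch
import torch.nn as nn

# Minimal FGSM sketch: a small perturbation of an input (e.g., a sensor reading)
# can flip the prediction of an undefended model. Model and data are toy stand-ins.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # hypothetical sensor-reading vector
y = torch.tensor([1])                       # true class

loss = loss_fn(model(x), y)
loss.backward()                             # gradient of the loss w.r.t. the input

epsilon = 0.1                               # perturbation budget
x_adv = x + epsilon * x.grad.sign()         # FGSM step along the loss-gradient sign

print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
# Resilient-ML defenses such as adversarial training retrain the model on such x_adv.
```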
Recent incidents have shown that the vulnerability of IT systems in railway automation has possibly been underestimated. Fortunately, so far, almost exclusively denial-of-service attacks have been successful, but due to several trends, such as the use of commercial IT and communication systems or privatization, the threat potential could increase in the near future. However, no harmonized IT security risk assessment framework for railway automation exists to date. This paper defines an IT security risk assessment framework that aims to separate IT security and safety requirements, as well as their certification processes, as far as possible. It builds on the well-known safety and approval processes of IEC 62425 and integrates IT security requirements based on the ISA99/IEC 62443 standard series. While the detailed results relate to railway automation, the general concepts are also applicable to other safety-critical application areas.
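The standards themselves specify the assessment details; purely as an illustration of the kind of likelihood/impact risk matrix such frameworks build on, consider the following sketch (categories and thresholds are hypothetical, not drawn from IEC 62425 or IEC 62443):

```python
# Hypothetical likelihood/impact risk matrix of the kind used in standards-based
# risk assessment; the categories and thresholds are illustrative only.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "major": 2, "catastrophic": 3}

def risk_class(likelihood, impact):
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "intolerable: security measures required before approval"
    if score >= 3:
        return "tolerable: mitigate as far as reasonably practicable"
    return "acceptable: monitor"

# e.g., a denial-of-service attack on a signalling system
print(risk_class("possible", "catastrophic"))
```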
Although cyberattacks on machine learning (ML) production systems can be destructive, many industry practitioners are ill-equipped, lacking the tactical and strategic tools that would allow them to analyze, detect, protect against, and respond to cyberattacks targeting their ML-based systems. In this paper, we take a significant step toward securing ML production systems by integrating these systems and their vulnerabilities into cybersecurity risk assessment frameworks. Specifically, we performed a comprehensive threat analysis of ML production systems and developed an extension to the MulVAL attack graph generation and analysis framework that incorporates cyberattacks on ML production systems. Using the proposed extension, security practitioners can apply attack graph analysis methods in environments that include ML components, giving security experts a practical tool for evaluating the impact and quantifying the risk of a cyberattack targeting an ML production system.
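MulVAL itself encodes attack steps as Datalog interaction rules; the Python sketch below only illustrates the underlying attack-graph idea with a hand-built graph whose nodes are hypothetical, not the paper's actual rules:

```python
import networkx as nx

# Loose illustration of attack-graph analysis over an ML production system:
# edges connect attacker capabilities to ML-specific attack steps and impacts.
g = nx.DiGraph()
g.add_edges_from([
    ("internet_access", "compromise_data_pipeline"),
    ("compromise_data_pipeline", "poison_training_data"),
    ("poison_training_data", "degrade_model_integrity"),
    ("internet_access", "query_prediction_api"),
    ("query_prediction_api", "craft_evasion_input"),
    ("craft_evasion_input", "degrade_model_integrity"),
])

# Enumerate attack paths from an attacker capability to an ML impact node;
# path counts and lengths can then feed into impact and risk quantification.
for path in nx.all_simple_paths(g, "internet_access", "degrade_model_integrity"):
    print(" -> ".join(path))
```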
We propose machine-learning-based algorithms to solve hedging problems in incomplete markets. The sources of incompleteness covered include illiquidity, untradable risk factors, discrete hedging dates, and transaction costs. The strategies produced by the proposed algorithms are compared to classical stochastic control techniques on several payoffs using a variance criterion. One of the proposed algorithms is flexible enough to be used with several existing risk criteria. We furthermore propose a new moment-based risk criterion.
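A minimal sketch of hedging under a variance criterion is shown below, using a generic deep-hedging setup on simulated Black-Scholes paths; the network, payoff, and market parameters are illustrative assumptions, not the paper's exact algorithms:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy deep-hedging sketch: learn a strategy delta(t, S_t) that minimizes the
# variance of the terminal hedging error for a call payoff, with rebalancing
# restricted to discrete hedging dates (one source of market incompleteness).
n_paths, n_steps, dt = 1024, 30, 1.0 / 30
s0, sigma, strike = 1.0, 0.2, 1.0

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    # Simulate discrete Black-Scholes paths (zero interest rate for simplicity).
    z = torch.randn(n_paths, n_steps)
    log_s = torch.cumsum(-0.5 * sigma**2 * dt + sigma * dt**0.5 * z, dim=1)
    s = s0 * torch.cat([torch.ones(n_paths, 1), torch.exp(log_s)], dim=1)

    # Accumulate hedging gains across the discrete rebalancing dates.
    pnl = torch.zeros(n_paths)
    for t in range(n_steps):
        features = torch.stack([torch.full((n_paths,), t * dt), s[:, t]], dim=1)
        delta = net(features).squeeze(1)
        pnl = pnl + delta * (s[:, t + 1] - s[:, t])

    error = torch.relu(s[:, -1] - strike) - pnl   # payoff minus hedge
    loss = error.var()                            # the variance criterion
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"hedging-error variance: {loss.item():.5f}")
```

Swapping `error.var()` for another risk functional of the terminal error, such as a higher-moment penalty, is how the flexibility with respect to risk criteria mentioned above could be exercised.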