Recent advances in artificial intelligence (AI) have led to an explosion of multimedia applications (e.g., computer vision (CV) and natural language processing (NLP)) across commercial, industrial, and intelligence domains. In particular, the use of AI applications in a national security environment is often problematic because the opaque nature of these systems prevents a human from understanding how their results came about. A reliance on black boxes to generate predictions and inform decisions is potentially disastrous. This paper explores how applying standards during each stage of the development of an AI system deployed and used in a national security environment would help enable trust. Specifically, we focus on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to subject machine outputs to the same rigorous standards as analysis performed by humans.
Increasingly, smart computing devices, with powerful sensors and internet connectivity, are being embedded into all new forms of infrastructure, from hospitals to roads to factories. These devices are part of the Internet of Things (IoT), and the economic value of their widespread deployment is estimated in the trillions of dollars, with billions of devices deployed. Consider the example of smart meters for electricity utilities. Because of clear economic benefits, including a reduction in the cost of reading meters, more precise information about outages and diagnostics, and increased benefits from predicting and balancing electric loads, such meters are already being rolled out across North America. With residential solar collection, smart meters allow individuals to sell power back to the grid, providing economic incentives for conservation. Similarly, smart water meters support water conservation during a drought. Such infrastructure upgrades are infrequent (with smart meters expected to be in service for 20-30 years), but the benefits from the upgrade justify the significant cost. A long-term benefit of such upgrades is that unforeseen savings might be realized in the future when new analytic techniques are applied to the data that is collected. The same benefits accrue to any infrastructure that embeds increased sensing and actuation capabilities via IoT devices, including roads and traffic control, energy and water management in buildings, and public health monitoring.
The rise of Artificial Intelligence (AI) will bring with it an ever-increasing willingness to cede decision-making to machines. But rather than simply giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems. There is a vital need for research in AI and Cooperation that seeks to understand how systems of AIs, and systems of AIs with people, can engender cooperative behavior. Trust in AI is also key: trust that is intrinsic and trust that can only be earned over time. Here we use the term AI in its broadest sense, as employed by the recent 20-Year Community Roadmap for AI Research (Gil and Selman, 2019), including but certainly not limited to recent advances in deep learning. With success, cooperation between humans and AIs can build society just as human-human cooperation has. Whether driven by an intrinsic willingness to be helpful or by self-interest, cooperation has made human societies strong and helped the human species find success. We cooperate in the small -- as family units, with neighbors, with co-workers, with strangers -- and in the large, as a global community that seeks cooperative outcomes around questions of commerce, climate change, and disarmament. Cooperation has also evolved in nature, in cells and among animals. While many cases of cooperation between humans and AIs will be asymmetric, with the human ultimately in control, AI systems are growing so complex that, even today, it is impossible for humans functioning simply as passive observers to fully comprehend their reasoning, recommendations, and actions.
In this paper we discuss how systems with Artificial Intelligence (AI) can undergo safety assessment. This is relevant if AI is used in safety-related applications. Taking a deeper look into AI models, we show that many models of artificial intelligence, in particular machine learning, are statistical models. Safety assessment would then have to concentrate on the model that is used in the AI, in addition to the normal assessment procedure. Part of the budget of dangerous random failures for the relevant safety integrity level needs to be allocated to the probabilistic faulty behavior of the AI system. We demonstrate our thoughts with a simple example and propose a research challenge that may be decisive for the use of AI in safety-related systems.
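To make the budgeting idea concrete, the following is a minimal sketch assuming IEC 61508-style targets for high-demand operation; the numbers are illustrative assumptions, not the example from the paper.

    # Illustrative allocation of a dangerous-failure budget to an AI component.
    # Assumption: SIL 2, high-demand mode, target PFH below 1e-6 dangerous
    # failures per hour (IEC 61508 band); all other figures are made up.

    sil2_pfh_limit = 1e-6   # total budget for dangerous random failures [1/h]
    hardware_pfh   = 7e-7   # share already consumed by sensors, logic, actuators [1/h]

    ai_pfh_budget = sil2_pfh_limit - hardware_pfh   # remainder available to the AI model [1/h]
    print(f"AI failure budget: {ai_pfh_budget:.1e} dangerous failures per hour")

    # If the AI model is invoked, say, 3600 times per hour, the tolerated
    # probability of a dangerous wrong output per invocation would be:
    invocations_per_hour = 3600
    p_dangerous_per_call = ai_pfh_budget / invocations_per_hour
    print(f"Tolerated dangerous-error probability per call: {p_dangerous_per_call:.1e}")

Read this way, the statistical error rate of the learned model becomes one more contributor to the dangerous-failure budget, alongside conventional random hardware failures.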
The increased adoption of Artificial Intelligence (AI) presents an opportunity to solve many socio-economic and environmental challenges; however, this cannot happen without securing AI-enabled technologies. In recent years, most AI models have proven vulnerable to advanced and sophisticated attack techniques. This challenge has motivated concerted research efforts into adversarial AI, with the aim of developing robust machine and deep learning models that are resilient to different types of adversarial scenarios. In this paper, we present a holistic cyber security review of adversarial attacks against AI applications, covering aspects such as adversarial knowledge and capabilities, as well as existing methods for generating adversarial examples and existing cyber defence models. We explain mathematical AI models, especially new variants of reinforcement and federated learning, to demonstrate how attack vectors exploit vulnerabilities of AI models. We also propose a systematic framework for demonstrating attack techniques against AI applications and review several cyber defences that would protect AI applications against those attacks. We further highlight the importance of understanding adversaries' goals and capabilities, especially in light of recent attacks against industry applications, for developing adaptive defences that secure AI applications. Finally, we describe the main challenges and future research directions in the domain of security and privacy of AI technologies.
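As one standard illustration of how adversarial examples are generated (the fast gradient sign method, FGSM, of Goodfellow et al.; a well-known baseline, not necessarily one of the specific methods this paper reviews), an input is perturbed along the sign of the loss gradient. The sketch below assumes a differentiable PyTorch classifier named model and an illustrative perturbation size epsilon.

    # Minimal FGSM sketch: craft an adversarial example by nudging the input
    # in the direction that increases the classification loss.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return an adversarial version of input batch x with true labels y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step along the sign of the gradient, then clamp to a valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even this simple, white-box perturbation is often enough to flip the predictions of an undefended image classifier, which is why such attacks feature prominently in adversarial-AI reviews.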
The Internet of Things (IoT) and edge computing applications aim to support a variety of societal needs, including the global pandemic situation that the entire world is currently experiencing and responses to natural disasters. The need for real-time interactive applications such as immersive video conferencing, augmented/virtual reality, and autonomous vehicles, in education, healthcare, disaster recovery, and other domains, has never been higher. At the same time, there have been recent technological breakthroughs in highly relevant fields such as artificial intelligence (AI)/machine learning (ML), advanced communication systems (5G and beyond), privacy-preserving computation, and hardware accelerators. 5G mobile communication networks increase communication capacity, reduce transmission latency and error, and save energy -- capabilities that are essential for new applications. The envisioned future 6G technology will integrate many more technologies, including for example visible light communication, to support groundbreaking applications such as holographic communications and high-precision manufacturing. Many of these applications require computation and analytics close to application end-points: that is, at the edge of the network rather than in a centralized cloud. AI techniques applied at the edge have tremendous potential both to power new applications and to enable more efficient operation of edge infrastructure. However, it is critical to understand where to deploy AI systems within these complex ecosystems, given the advanced applications they serve and the specific real-time requirements those applications place on AI systems.
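As a rough illustration of the placement question, the sketch below compares an application's end-to-end latency budget against estimated network and inference latencies for each candidate tier. The tier names and all latency figures are hypothetical assumptions for illustration, not measurements or a method from the paper.

    # Hypothetical placement check: does a deployment tier meet an application's
    # end-to-end latency budget? All latency figures below are illustrative.

    def meets_budget(network_ms: float, inference_ms: float, budget_ms: float) -> bool:
        """True if round-trip network latency plus inference time fits the budget."""
        return network_ms + inference_ms <= budget_ms

    # Example: an AR application with an assumed 20 ms interaction budget.
    budget_ms = 20.0
    tiers = {
        "on-device": {"network_ms": 0.0,  "inference_ms": 25.0},  # constrained local accelerator
        "edge":      {"network_ms": 5.0,  "inference_ms": 8.0},   # nearby edge server
        "cloud":     {"network_ms": 40.0, "inference_ms": 4.0},   # distant data center
    }

    for name, t in tiers.items():
        ok = meets_budget(t["network_ms"], t["inference_ms"], budget_ms)
        print(f"{name:10s}: {'fits' if ok else 'misses'} the {budget_ms:.0f} ms budget")

Under these assumed numbers only the edge tier satisfies the budget, which captures why real-time requirements, not raw compute capacity alone, drive where AI systems should be deployed.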