The purpose of this paper is to examine the opportunities and barriers of Integrated Human-Machine Intelligence (IHMI) in civil engineering. Integrating artificial intelligence's high efficiency and repeatability with humans' adaptability across contexts can advance timely and reliable decision-making in civil engineering projects and emergencies. Successful cases in other domains, such as biomedical science, healthcare, and transportation, show the potential of IHMI for data-driven, knowledge-based decision-making in numerous civil engineering applications. However, whether industry and academia are ready to embrace the era of IHMI and maximize its benefits remains questionable due to several knowledge gaps. This paper therefore calls for future studies exploring the value, methods, and challenges of applying IHMI in civil engineering. Our systematic review of the literature and motivating cases has identified four knowledge gaps in achieving effective IHMI in civil engineering. First, it is unknown what types of tasks in the civil engineering domain can be assisted by AI and to what extent. Second, the interface between humans and AI in civil engineering tasks needs a more precise and formal definition. Third, the barriers that impede collecting detailed behavioral data from humans and contextual environments deserve systematic classification and prototyping. Lastly, it is unknown what expected and unexpected impacts IHMI will have on the architecture, engineering, and construction (AEC) industry and entrepreneurship. Analyzing these knowledge gaps yields a list of research questions. This paper lays the foundation for identifying relevant studies and forming a research roadmap that addresses the four identified knowledge gaps.
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations that must be considered and investigated when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve as common ground for researchers from the many disciplines involved in XAI. It highlights where interdisciplinary potential lies in the evaluation and development of explainability approaches.
This article reviews the "Once learning" mechanism proposed 23 years ago and the subsequent successes of one-shot learning in image classification and You Only Look Once (YOLO) in object detection. Analyzing the current development of Artificial Intelligence (AI), the article proposes that AI should be clearly divided into the following categories: Artificial Human Intelligence (AHI), Artificial Machine Intelligence (AMI), and Artificial Biological Intelligence (ABI), which will also be the main directions of theory and application development for AI. To delineate these branches of AI, several classification standards and methods are discussed: 1) human-oriented, machine-oriented, and biologically oriented AI R&D; 2) whether information input is processed by dimensionality increase or dimensionality reduction; 3) the use of one/few samples versus large samples for knowledge learning.
Recent work has demonstrated the promise of combining local explanations with active learning for understanding and supervising black-box models. Here we show that, under specific conditions, these algorithms may misrepresent the quality of the model being learned. The reason is that the machine illustrates its beliefs by predicting and explaining the labels of the query instances: if the machine is unaware of its own mistakes, it may end up choosing queries on which it performs artificially well. This biases the narrative presented by the machine to the user. We address this narrative bias by introducing explanatory guided learning, a novel interactive learning strategy in which: i) the supervisor is in charge of choosing the query instances, while ii) the machine uses global explanations to illustrate its overall behavior and to guide the supervisor toward choosing challenging, informative instances. This strategy retains the key advantages of explanatory interaction while avoiding narrative bias and compares favorably to active learning in terms of sample complexity. An initial empirical evaluation with a clustering-based prototype highlights the promise of our approach.
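Purely as an illustration of the interaction loop this abstract describes (not the authors' implementation), the sketch below shows a minimal explanatory guided learning round in Python: the machine fits a model, presents a global explanation of its overall behavior (here a decision-tree rule listing as a stand-in), and the supervisor, not the machine, chooses the next query instance. All function names and the random "supervisor" are hypothetical placeholders.

```python
# Minimal sketch of an explanatory-guided-learning-style loop (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.datasets import make_classification

def global_explanation(model, feature_names):
    """Human-readable summary of the model's overall behavior (stand-in explanation)."""
    return export_text(model, feature_names=feature_names)

def explanatory_guided_learning(X_pool, y_oracle, feature_names, rounds=3, seed=0):
    rng = np.random.default_rng(seed)
    labeled_idx = list(rng.choice(len(X_pool), size=5, replace=False))  # small seed set
    model = DecisionTreeClassifier(max_depth=3, random_state=seed)

    for t in range(rounds):
        model.fit(X_pool[labeled_idx], y_oracle[labeled_idx])

        # ii) the machine illustrates its overall behavior with a global explanation
        print(f"--- round {t}: global explanation ---")
        print(global_explanation(model, feature_names))

        # i) the supervisor chooses the next query instance, guided by the explanation.
        # Here a random pick stands in for the human; a real supervisor would select an
        # unlabeled instance the explanation suggests is handled poorly.
        unlabeled = [i for i in range(len(X_pool)) if i not in labeled_idx]
        labeled_idx.append(int(rng.choice(unlabeled)))

    return model

if __name__ == "__main__":
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    explanatory_guided_learning(X, y, feature_names=[f"f{i}" for i in range(4)])
```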
With the development of modern computing technology and the social sciences, both theoretical research on and practical applications of social computing have continuously expanded. In particular, with the boom of artificial intelligence (AI), social computing has been significantly influenced by AI. However, conventional AI technologies have drawbacks when dealing with more complicated and dynamic problems. This deficiency can be remedied by hybrid human-artificial intelligence (H-AI), which integrates human intelligence and AI into one unity, forming a new, enhanced intelligence. In dealing with social problems, H-AI shows advantages that AI alone cannot match. This paper first introduces the concept of H-AI. Because AI is the intelligence of the transitional stage toward H-AI, the latest research progress of AI in social computing is reviewed. Second, it summarizes the typical challenges AI faces in social computing and explains how H-AI can be introduced to address them. Finally, the paper proposes a holistic framework of social computing combined with H-AI, which consists of four layers: an object layer, a base layer, an analysis layer, and an application layer. This framework shows that H-AI has significant advantages over AI in solving social problems.
Civil Asset Forfeiture (CAF) is a longstanding and controversial legal process viewed on the one hand as a powerful tool for combating drug crimes and on the other as a violation of the rights of US citizens. To date, the data used to support both sides of the controversy have come from government sources that record events at the time of occurrence. Court dockets, however, record litigation events initiated after the forfeiture and can thus provide a new perspective on the CAF legal process. This paper presents new evidence, based on quantitative analysis of data derived from these court cases, that supports existing claims about the growth of the practice and bias in its application.