This paper proposes a new general approach based on Bayesian networks for modelling human behaviour. The approach represents human behaviour with probabilistic cause-effect relations built not only on previous work, but also on conditional probabilities coming either from expert knowledge or deduced from observations. The approach has been used in the co-simulation of building physics and human behaviour in order to assess the CO2 concentration in an office.
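To make the idea concrete, the following is a minimal Python sketch of such a cause-effect network. The variables (occupancy, window state, CO2 level) and all probabilities are hypothetical placeholders, not the network or values used in the paper; the conditional tables merely stand in for the expert-knowledge and observation-derived probabilities the abstract mentions.

    def p_occupied(o):
        """Prior probability that the office is occupied (hypothetical)."""
        return 0.7 if o else 0.3

    def p_window(w, o):
        """P(window open | occupancy) -- stand-in for expert knowledge."""
        p_open = 0.6 if o else 0.1
        return p_open if w else 1 - p_open

    def p_co2_high(c, o, w):
        """P(CO2 above threshold | occupancy, window) -- stand-in for observations."""
        p_high = {(True, False): 0.8, (True, True): 0.2,
                  (False, False): 0.1, (False, True): 0.05}[(o, w)]
        return p_high if c else 1 - p_high

    # Query P(CO2 high | occupied) by enumerating the hidden window variable,
    # and the marginal P(CO2 high) by enumerating both parent variables.
    p_given_occ = sum(p_window(w, True) * p_co2_high(True, True, w)
                      for w in (True, False))
    p_marginal = sum(p_occupied(o) * p_window(w, o) * p_co2_high(True, o, w)
                     for o in (True, False) for w in (True, False))

    print(f"P(CO2 high | occupied) = {p_given_occ:.3f}")   # roughly 0.44
    print(f"P(CO2 high)            = {p_marginal:.3f}")    # roughly 0.34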
This paper analyzes two modelling approaches for occupant behaviour in buildings, comparing a purely statistical approach with an approach based on multi-agent social simulation. The case study concerns door openings in an office.
Crowd models can be used to simulate people movement in the built environment. Their outputs have been used to evaluate pedestrian safety and comfort, inform crowd management, and perform forensic investigations. Microscopic crowd models represent each person individually, providing information on their location over time and their interactions with the physical space and with other people. Pandemics such as COVID-19 have raised several questions concerning safe building usage, given the risk of disease transmission among building occupants. Here we show how crowd modelling can be used to assess occupant exposure in confined spaces. Policies concerning building usage and social distancing during a pandemic vary greatly, and they are mostly based on macroscopic analyses of the spread of disease rather than on safety assessments performed at the building level. The proposed model allows the investigation of occupant exposure in buildings based on the analysis of microscopic people movement. Risk assessment is performed by retrofitting crowd models with a universal model for exposure assessment that can account for different types of disease transmission. This work allows policy makers to make informed decisions concerning building usage during a pandemic.
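As an illustration of the kind of microscopic analysis described above, here is a minimal Python sketch that accumulates pairwise occupant exposure from simulated trajectories. The random-walk trajectories, the inverse-square proximity kernel, and the 2 m cutoff are illustrative assumptions, not the universal exposure model the abstract refers to.

    import numpy as np

    # traj[t, i] = (x, y) position of occupant i at time step t,
    # here a toy random walk standing in for crowd-model output.
    rng = np.random.default_rng(0)
    T, N, dt = 600, 5, 1.0                    # 600 one-second steps, 5 occupants
    traj = (np.cumsum(rng.normal(0, 0.1, (T, N, 2)), axis=0)
            + rng.uniform(0, 10, (1, N, 2)))

    exposure = np.zeros(N)
    for t in range(T):
        # Pairwise distances between all occupants at this time step.
        d = np.linalg.norm(traj[t, :, None] - traj[t, None, :], axis=-1)
        # Hypothetical proximity kernel: inverse-square dose within 2 m,
        # zero beyond the cutoff and on the self-distance diagonal.
        dose = np.where((d > 0) & (d < 2.0), 1.0 / d**2, 0.0)
        exposure += dose.sum(axis=1) * dt     # accumulate dose over time

    print("cumulative exposure per occupant:", np.round(exposure, 1))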
Reasoning in a temporal knowledge graph (TKG) is a critical task for information retrieval and semantic search. It is particularly challenging when the TKG is updated frequently. The model has to adapt to changes in the TKG for efficient training and inference while preserving its performance on historical knowledge. Recent work approaches TKG completion (TKGC) by augmenting the encoder-decoder framework with a time-aware encoding function. However, naively fine-tuning the model at every time step using these methods does not address the problems of 1) catastrophic forgetting, 2) the model's inability to identify changes of facts (e.g., a change of political affiliation or the end of a marriage), and 3) the lack of training efficiency. To address these challenges, we present the Time-aware Incremental Embedding (TIE) framework, which combines TKG representation learning, experience replay, and temporal regularization. We introduce a set of metrics that characterizes the intransigence of the model and propose a constraint that associates deleted facts with negative labels. Experimental results on the Wikidata12k and YAGO11k datasets demonstrate that the proposed TIE framework reduces training time by about a factor of ten and improves on the proposed metrics compared with vanilla full-batch training, without a significant loss in performance on any traditional measure. Extensive ablation studies reveal performance trade-offs among the different evaluation metrics, which is essential for decision-making around real-world TKG applications.
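A rough Python (PyTorch) sketch of the three ingredients named above: fine-tuning on newly added facts, experience replay over sampled historical facts, and L2 temporal regularization toward the previous step's embeddings, with deleted facts labelled as negatives. The toy data, the TransE-style scoring function, and all weights are illustrative assumptions, not the authors' implementation.

    import torch

    n_ent, n_rel, dim = 100, 10, 32
    emb = torch.nn.Embedding(n_ent, dim)      # entity embeddings
    rel = torch.nn.Embedding(n_rel, dim)      # relation embeddings
    opt = torch.optim.Adam(list(emb.parameters()) + list(rel.parameters()), lr=1e-3)

    def score(h, r, t):                       # TransE-style plausibility score
        return -(emb(h) + rel(r) - emb(t)).norm(dim=-1)

    def bce(facts, labels):                   # facts: (batch, 3) of (h, r, t) ids
        h, r, t = facts.unbind(-1)
        return torch.nn.functional.binary_cross_entropy_with_logits(
            score(h, r, t), labels)

    prev_emb = emb.weight.detach().clone()    # snapshot from the previous time step

    new_facts     = torch.randint(0, 10, (64, 3))   # facts added at this step (toy)
    replay_facts  = torch.randint(0, 10, (32, 3))   # sampled history (experience replay)
    deleted_facts = torch.randint(0, 10, (16, 3))   # facts that ceased to hold

    loss = (bce(new_facts, torch.ones(64))
            + bce(replay_facts, torch.ones(32))
            + bce(deleted_facts, torch.zeros(16))           # deleted facts as negatives
            + 0.01 * (emb.weight - prev_emb).pow(2).sum())  # temporal regularization
    opt.zero_grad(); loss.backward(); opt.step()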
With the rapid development of online education, knowledge tracing (KT) has become a fundamental problem that traces students' knowledge status and predicts their performance on new questions. Questions are often numerous in online education systems and are always associated with far fewer skills. However, the previous literature fails to combine question information with high-order question-skill correlations, mostly because of data sparsity and multi-skill problems. From the model perspective, previous models can hardly capture the long-term dependencies in student exercise history, and cannot model the student-question and student-skill interactions in a consistent way. In this paper, we propose a Graph-based Interaction model for Knowledge Tracing (GIKT) to tackle the above problems. More specifically, GIKT utilizes a graph convolutional network (GCN) to substantially incorporate question-skill correlations via embedding propagation. In addition, considering that relevant questions are usually scattered throughout the exercise history, and that questions and skills are just different instantiations of knowledge, GIKT generalizes the degree of a student's mastery of a question to the interactions between the student's current state, the student's related historical exercises, the target question, and related skills. Experiments on three datasets demonstrate that GIKT achieves new state-of-the-art performance, with at least a 1% absolute AUC improvement.
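To illustrate the embedding-propagation step mentioned above, here is a minimal NumPy sketch of one GCN layer over a toy question-skill bipartite graph, computing H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). The graph, sizes, and weights are placeholders; GIKT's actual architecture, which stacks such layers over the question-skill relations, is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    n_q, n_s, dim = 6, 3, 8                    # 6 questions, 3 skills, embedding size 8
    n = n_q + n_s

    # Bipartite adjacency: edge (q, s) means question q requires skill s.
    A = np.zeros((n, n))
    edges = [(0, 0), (1, 0), (2, 1), (3, 1), (3, 2), (4, 2), (5, 2)]
    for q, s in edges:
        A[q, n_q + s] = A[n_q + s, q] = 1.0

    A_hat = A + np.eye(n)                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2

    H = rng.normal(size=(n, dim))              # initial question/skill embeddings
    W = rng.normal(size=(dim, dim))            # layer weights
    H_next = np.maximum(A_norm @ H @ W, 0.0)   # propagate neighbours, apply ReLU

    print("propagated question embeddings:", H_next[:n_q].shape)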
There is a significant lack of unified approaches to building generally intelligent machines. The majority of current artificial intelligence research operates within a very narrow field of focus, frequently without considering the importance of the big picture. In this document, we seek to describe and unify the principles that guide our development of general artificial intelligence. These principles revolve around the idea that intelligence is a tool for searching for general solutions to problems. We define intelligence as the ability to acquire skills that narrow this search, diversify it, and help steer it towards more promising areas. We also provide suggestions for studying, measuring, and testing the various skills and abilities that a human-level intelligent machine needs to acquire. The document aims to be implementation agnostic and to provide an analytic, systematic, and scalable way to generate hypotheses that we believe are needed to meet the necessary conditions in the search for general artificial intelligence. We believe that such a framework is an important stepping stone for bringing together definitions, highlighting open problems, connecting researchers willing to collaborate, and unifying what is arguably the most significant search of this century.