Labelling user data is a central part of the design and evaluation of pervasive systems that aim to support the user through situation-aware reasoning. It is essential both in designing and training the system to recognise and reason about the situation, either through the definition of a suitable situation model in knowledge-driven applications, or through the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems. Labelling is also vital for validating and quantifying the performance of applications. In particular, comparative evaluations require the production of benchmark datasets based on high-quality and consistent annotations. With pervasive systems relying increasingly on large datasets for designing and testing models of users' activities, the process of data labelling is becoming a major concern for the community. In this work we present a qualitative and quantitative analysis of the challenges associated with the annotation of user data and possible strategies for addressing them. The analysis was based on data gathered during the 1st International Workshop on Annotation of useR Data for UbiquitOUs Systems (ARDUOUS) and consisted of brainstorming data as well as annotation and questionnaire data collected during the talks, poster session, live annotation session, and discussion session.
This volume contains the proceedings of the first workshop on Advances in Systems of Systems (AISOS13), held in Rome, Italy, on March 16. System-of-Systems describes the large-scale integration of many independent, self-contained systems to satisfy global needs or multi-system requests. Examples include the smart grid, intelligent buildings, smart cities, and transport systems. There is a need for new modelling formalisms, analysis methods, and tools to help make trade-off decisions during design and evolution, and to avoid sub-optimal designs and the rework they cause during integration and in service. The workshop focuses on the modelling and analysis of Systems of Systems. AISOS13 aims to gather people from different communities in order to encourage the exchange of methods and views.
This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs) whose distractors embody misconceptions. With a large and ever-increasing number of such questions, it becomes overwhelming for teachers to know which questions are the best ones to use for their students. We thus seek to answer the following question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios where manual personalization is infeasible? Success in using MCQ data at scale helps build more intelligent, personalized learning platforms that ultimately improve the quality of education en masse. To this end, we introduce a new, large-scale, real-world dataset and formulate four data mining tasks on MCQs that mimic real learning scenarios and target various aspects of the above question in a competition setting at NeurIPS 2020. We report on our NeurIPS competition, in which nearly 400 teams made approximately 4,000 submissions, with encouragingly diverse and effective approaches to each of our tasks.
We explore the commonalities between methods for assuring the security of computer systems (cybersecurity) and the mechanisms that have evolved through natural selection to protect vertebrates against pathogens, and how insights derived from studying the evolution of natural defenses can inform the design of more effective cybersecurity systems. More generally, security challenges are crucial for the maintenance of a wide range of complex adaptive systems, including financial systems, and here, too, lessons learned from the study of the evolution of natural defenses can provide guidance for the protection of such systems.
Annotation is the labeling of data by human effort. Annotation is critical to modern machine learning, and Bloomberg has accumulated years of experience with annotation at scale. This report captures a wealth of wisdom for applied annotation projects, collected from more than 30 experienced annotation project managers in Bloomberg's Global Data department.
These are the proceedings of the 3rd ML4D workshop, which was held in Vancouver, Canada on December 13, 2019, as part of the Neural Information Processing Systems conference.