There is a historically unprecedented demographic shift towards seniors, which will drive significant housing development over the coming decade. This is an enormous opportunity for real-estate operators to innovate and address the demand in this growing market. However, investments in this area are fraught with risk. Seniors often have more health issues, and Covid-19 has exposed just how vulnerable they are -- especially those living in close proximity. Conventionally, most services for seniors are high-touch, requiring close physical contact with trained caregivers. Not only are trained caregivers in short supply, but the pandemic has made it evident that conventional high-touch approaches to senior care are both high-cost and high-risk. There are not enough caregivers to meet the needs of this emerging demographic, and even fewer who want to undertake the additional training and risk of working in a senior facility, especially given the current pandemic. In this article, we rethink the design of senior living facilities to mitigate these risks and costs using automation. With AI-enabled pervasive automation, we claim there is an opportunity, if not an urgency, to go from high-touch to almost no touch while dramatically reducing risk and cost. Although our vision goes beyond the current reality, we cite measurements from Caspar AI-enabled senior properties that show the potential benefit of this approach.
In February 2020, the European Commission (EC) published a white paper entitled "On Artificial Intelligence - A European approach to excellence and trust." This paper outlines the EC's policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. The Montreal AI Ethics Institute (MAIEI) reviewed this paper and published a response addressing the EC's plans to build an ecosystem of excellence and an ecosystem of trust, as well as the safety and liability implications of AI, the internet of things (IoT), and robotics. MAIEI provides 15 recommendations in relation to the sections outlined above, including: 1) focus efforts on the research and innovation community, member states, and the private sector; 2) create alignment between trading partners' policies and EU policies; 3) analyze the gaps in the ecosystem between theoretical frameworks and approaches to building trustworthy AI; 4) focus on coordination and policy alignment; 5) focus on mechanisms that promote private and secure sharing of data; 6) create a network of AI research excellence centres to strengthen the research and innovation community; 7) promote knowledge transfer and develop AI expertise through Digital Innovation Hubs; 8) add nuance to the discussion regarding the opacity of AI systems; 9) create a process for individuals to appeal an AI system's decision or output; 10) implement new rules and strengthen existing regulations; 11) ban the use of facial recognition technology; 12) hold all AI systems to similar standards and compulsory requirements; 13) ensure biometric identification systems fulfill the purpose for which they are implemented; 14) implement a voluntary labelling system for systems that are not considered high-risk; 15) appoint individuals to the oversight process who understand AI systems well and are able to communicate potential risks.
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it is difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a black market for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
Different agencies within the government operate under different sets of regulations, which makes creating AI-enabled solutions in government difficult. Regulatory restrictions inhibit sharing of data across agencies, which can be a significant impediment to training AI models. We discuss the challenges that exist in environments where data cannot be freely shared and assess technologies which can be used to work around these challenges. We present results on building AI models using the concept of federated AI, which allows creation of models without moving the training data around.
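The federated approach described above can be illustrated with federated averaging, in which each agency trains on its own data and only model weights are exchanged. Below is a minimal sketch under assumed conditions (a toy linear-regression task, two hypothetical clients, and illustrative function names not taken from the paper):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares loss, using only that client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """One round of federated averaging: each client trains locally,
    and the server averages the returned weights, weighted by each
    client's sample count. Raw data never leaves a client."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Toy demo: two "agencies" hold disjoint data drawn from the same
# underlying linear model y = 2*x0 + 3*x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, 3.0])
clients = [(X, X @ true_w)
           for X in (rng.normal(size=(50, 2)) for _ in range(2))]

w = np.zeros(2)
for _ in range(20):          # 20 communication rounds
    w = federated_average(w, clients)
print(np.round(w, 2))        # recovers weights close to true_w
```

In a real deployment the averaging step would run on a coordinating server and clients would communicate over a secured channel; this sketch only shows the core idea that training data stays in place while model parameters move.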
In the age of Artificial Intelligence and automation, machines have taken over many key managerial tasks. Replacing managers with AI systems may have a negative impact on workers' outcomes. It is unclear whether workers receive the same benefits from their relationships with AI systems, raising the question: to what degree does the relationship between AI systems and workers impact worker outcomes? We draw on IT identity to understand the influence of identification with AI systems on job performance. From this theoretical perspective, we propose a research model and conduct a survey of 97 MTurk workers to test the model. The findings reveal that work role identity and organizational identity are key determinants of identification with AI systems. Furthermore, the findings show that identification with AI systems does increase job performance.
Artificial intelligence shows promise for solving many practical societal problems in areas such as healthcare and transportation. However, the current mechanisms for AI model diffusion, such as GitHub code repositories, academic project webpages, and commercial AI marketplaces, have some limitations; for example, a lack of monetization methods, model traceability, and model auditability. In this work, we sketch guidelines for a new AI diffusion method based on a decentralized online marketplace. We consider the technical, economic, and regulatory aspects of such a marketplace, including a discussion of solutions for problems in these areas. Finally, we include a comparative analysis of several current AI marketplaces that are already available or in development. We find that most of these marketplaces are centralized commercial marketplaces with relatively few models.