
The Social Contract for AI

Added by Abhishek Gupta
Publication date: 2020
Language: English





Like any technology, AI systems come with inherent risks and potential benefits. They can disrupt established norms and methods of work, and they produce broader societal impacts and externalities. One may think of the adoption of a technology as a form of social contract, one that may evolve or fluctuate in time, scale, and impact. For AI, meeting the expectations of this social contract is critical: recklessly driving the adoption and implementation of unsafe, irresponsible, or unethical AI systems may trigger serious backlash against the industry and academia involved, a backlash that could take decades to resolve, if it does not seriously harm society outright. For the purposes of this paper, we consider that a social contract arises when there is sufficient consensus within society to adopt and implement a new technology. As such, enabling a social contract to arise for the adoption and implementation of AI requires developing: 1) a socially accepted purpose, through 2) a safe and responsible method, with 3) a socially aware level of risk involved, for 4) a socially beneficial outcome.



Related research

In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms, which require copious computation and troves of data for training and prediction. While profitable in the short term, these practices are unsustainable and socially extractive from both a data-use and an energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate. Compute-efficient machine learning is the use of compressed network architectures that show only marginal decreases in accuracy. Federated learning augments the first pillar's impact through techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar, data sovereignty, to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchases with their values.
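
The data sovereignty pillar names differential privacy as one of its techniques. As a minimal, illustrative sketch (not code from the paper), the following Python snippet shows the classic Laplace mechanism: a bounded statistic is released with noise calibrated to its sensitivity and a privacy budget epsilon. The function name laplace_mechanism and the bounded-mean example are assumptions chosen purely for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity / epsilon (the standard
    calibration for the Laplace mechanism)."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release the mean of a bounded per-user value.
# Each user contributes one value clipped to [0, 1], so changing one
# user's record moves the mean by at most 1 / n (the sensitivity).
rng = np.random.default_rng(0)
values = np.clip(rng.random(1000), 0.0, 1.0)
n = len(values)
private_mean = laplace_mechanism(values.mean(), sensitivity=1.0 / n, epsilon=0.5)
print(f"true mean: {values.mean():.4f}  private mean: {private_mean:.4f}")
```
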
In the age of big data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. AI systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential for people's fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and without the critical input of the societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps, and the deep divide between abstract value language and concrete design requirements, is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
In February 2020, the European Commission (EC) published a white paper entitled "On Artificial Intelligence - A European approach to excellence and trust," outlining the EC's policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. The Montreal AI Ethics Institute (MAIEI) reviewed this paper and published a response addressing the EC's plans to build an "ecosystem of excellence" and an "ecosystem of trust," as well as the safety and liability implications of AI, the internet of things (IoT), and robotics. MAIEI provides 15 recommendations in relation to the sections outlined above:

1) focus efforts on the research and innovation community, member states, and the private sector;
2) create alignment between trading partners' policies and EU policies;
3) analyze the gaps in the ecosystem between theoretical frameworks and approaches to building trustworthy AI;
4) focus on coordination and policy alignment;
5) focus on mechanisms that promote private and secure sharing of data;
6) create a network of AI research excellence centres to strengthen the research and innovation community;
7) promote knowledge transfer and develop AI expertise through Digital Innovation Hubs;
8) add nuance to the discussion regarding the opacity of AI systems;
9) create a process for individuals to appeal an AI system's decision or output;
10) implement new rules and strengthen existing regulations;
11) ban the use of facial recognition technology;
12) hold all AI systems to similar standards and compulsory requirements;
13) ensure biometric identification systems fulfill the purpose for which they are implemented;
14) implement a voluntary labelling system for systems that are not considered high-risk;
15) appoint individuals to the oversight process who understand AI systems well and are able to communicate potential risks.
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested, at an unrivalled pace, has left internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly large amounts of time online. It has never been more important that we keep a sharp eye on the development of this field and how it is shaping our society and our interactions with each other. With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and to allow you and your organization to make informed decisions. This pulse-check on the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report, spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.
The different sets of regulations that exist for different agencies within the government make the task of creating AI-enabled solutions in government difficult. Regulatory restrictions inhibit the sharing of data across different agencies, which can be a significant impediment to training AI models. We discuss the challenges that exist in environments where data cannot be freely shared and assess technologies that can be used to work around these challenges. We present results on building AI models using the concept of federated AI, which allows the creation of models without moving the training data around.
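
To make the federated approach concrete, here is a minimal sketch (our illustration, not the authors' implementation) of federated averaging over a simulated set of agencies: each agency trains a linear model locally on data that never leaves it, and a central aggregator combines only the resulting weights, weighted by data size. All names (local_update, federated_average) and the synthetic data are illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One agency's local training: gradient descent on a linear model
    with mean-squared-error loss. Raw data (X, y) never leaves here."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean((Xw - y)^2)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Server-side aggregation (FedAvg): average local models,
    weighting each agency by the amount of data it holds."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Simulate three agencies whose datasets cannot be pooled directly.
rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])
agencies = []
for n in (100, 250, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    agencies.append((X, y))

# Communication rounds: only model weights travel, never the data.
global_w = np.zeros(2)
for _ in range(10):
    local_models = [local_update(global_w, X, y) for X, y in agencies]
    global_w = federated_average(local_models, [len(y) for _, y in agencies])

print("recovered weights:", global_w)  # approaches [2.0, -1.0]
```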


