There is a growing need for data-driven research on how the public perceives the ethical, moral, and legal issues raised by autonomous AI systems. The current debate on the responsibility gap posed by these systems is one such example. This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other by informing scholarly discussion with data gathered from the public. We discuss its implications for bridging the gap between optimistic and pessimistic views of AI system deployment.
Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted electronic personalities. Numerous scholars have argued both for and against its feasibility.
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the most relevant developments in the field.
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the most relevant developments in the field.
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the most relevant developments in the field.
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested, at an unrivalled pace, has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress.