
Montreal AI Ethics Institute's (MAIEI) Submission to the World Intellectual Property Organization (WIPO) Conversation on Intellectual Property (IP) and Artificial Intelligence (AI), Second Session

Added by Abhishek Gupta
Publication date: 2020
Language: English





This document posits that, at best, a tenuous case can be made for granting AI systems exclusive IP rights over their inventions. Furthermore, IP protections for AI are unlikely to confer the benefit of ensuring regulatory compliance. Rather, IP protections for AI inventors present a host of negative externalities and obscure the fact that the genuine inventor, deserving of IP, is the human agent. The document concludes by recommending strategies for WIPO to bring IP law into the 21st century, enabling it to productively account for AI inventions.

Theme: IP Protection for AI-Generated and AI-Assisted Works

Based on insights from the Montreal AI Ethics Institute (MAIEI) staff and supplemented by workshop contributions from the AI Ethics community convened by MAIEI on July 5, 2020.




Related Research

The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. To ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it is difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a black market for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
In February 2020, the European Commission (EC) published a white paper entitled "On Artificial Intelligence - A European approach to excellence and trust." This paper outlines the EC's policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. The Montreal AI Ethics Institute (MAIEI) reviewed this paper and published a response addressing the EC's plans to build an "ecosystem of excellence" and an "ecosystem of trust," as well as the safety and liability implications of AI, the internet of things (IoT), and robotics. MAIEI provides 15 recommendations in relation to the sections outlined above, including: 1) focus efforts on the research and innovation community, member states, and the private sector; 2) create alignment between trading partners' policies and EU policies; 3) analyze the gaps in the ecosystem between theoretical frameworks and approaches to building trustworthy AI; 4) focus on coordination and policy alignment; 5) focus on mechanisms that promote private and secure sharing of data; 6) create a network of AI research excellence centres to strengthen the research and innovation community; 7) promote knowledge transfer and develop AI expertise through Digital Innovation Hubs; 8) add nuance to the discussion regarding the opacity of AI systems; 9) create a process for individuals to appeal an AI system's decision or output; 10) implement new rules and strengthen existing regulations; 11) ban the use of facial recognition technology; 12) hold all AI systems to similar standards and compulsory requirements; 13) ensure biometric identification systems fulfill the purpose for which they are implemented; 14) implement a voluntary labelling system for systems that are not considered high-risk; 15) appoint individuals to the oversight process who understand AI systems well and are able to communicate potential risks.
In the age of artificial intelligence and automation, machines have taken over many key managerial tasks. Replacing managers with AI systems may have a negative impact on worker outcomes, and it is unclear whether workers receive the same benefits from their relationships with AI systems, raising the question: to what degree does the relationship between AI systems and workers impact worker outcomes? We draw on IT identity to understand the influence of identification with AI systems on job performance. From this theoretical perspective, we propose a research model and conduct a survey of 97 MTurk workers to test the model, as sketched below. The findings reveal that work role identity and organizational identity are key determinants of identification with AI systems. Furthermore, the findings show that identification with AI systems does increase job performance.
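As a rough illustration of how such a two-path research model could be examined, here is a minimal sketch using ordinary least squares. The file name and all column names are invented for illustration; the abstract does not specify the authors' actual analysis method or measures.

```python
# Hypothetical sketch: testing the two hypothesized paths with OLS regressions.
# "mturk_survey.csv" and every column name below are assumptions, not the
# paper's actual data or variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mturk_survey.csv")  # 97 responses, one row per worker

# Path 1: work role identity and organizational identity -> identification with AI
m1 = smf.ols("ai_identification ~ work_role_identity + org_identity", data=df).fit()

# Path 2: identification with AI systems -> job performance
m2 = smf.ols("job_performance ~ ai_identification", data=df).fit()

print(m1.summary())
print(m2.summary())
```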
This paper presents a high-level circuit obfuscation technique to prevent the theft of intellectual property (IP) in integrated circuits. In particular, the technique protects a class of circuits that rely on constant multiplications, such as filters and neural networks, where the constants themselves are the IP to be protected. By making use of decoy constants and a key-based scheme, a reverse-engineering adversary at an untrusted foundry is rendered incapable of discerning true constants from decoy constants. The time-multiplexed constant multiplication (TMCM) block of such circuits, which realizes the multiplication of an input variable by one constant at a time, is considered as the case study for obfuscation. Furthermore, two TMCM design architectures are taken into account: an implementation using a multiplier, and a multiplierless shift-adds implementation. Optimization methods are also applied to reduce the hardware complexity of these architectures. The well-known satisfiability (SAT) and automatic test pattern generation (ATPG) attacks are used to assess the vulnerability of the obfuscated designs. The proposed technique incurs small overheads in area, power, and delay, comparable to the hardware complexity of prominent logic locking methods; its advantage lies in the insight that constants -- instead of arbitrary circuit nodes -- become key-protected.
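To make the decoy-constant idea concrete, here is a minimal behavioural sketch in software. All constant and key values are illustrative assumptions; the paper's actual designs implement this selection at the hardware level in a TMCM block.

```python
# Hypothetical sketch: key-gated selection between true and decoy constants
# in a time-multiplexed constant multiplication (TMCM) block. All values are
# illustrative; real designs realize this selection in hardware.

# Each time slot stores a (constant_a, constant_b) pair; only the secret key
# reveals which element is the true constant and which is the decoy.
CONSTANT_PAIRS = [(3, 5), (19, 25), (-7, -3), (13, 11)]
SECRET_KEY = [0, 1, 0, 1]  # key bit per slot: 0 -> constant_a, 1 -> constant_b

def tmcm(x, slot, key):
    """Multiply input x by the constant scheduled for this time slot."""
    a, b = CONSTANT_PAIRS[slot]
    return x * (b if key[slot] else a)

# With the correct key, the block computes the intended products...
assert [tmcm(2, t, SECRET_KEY) for t in range(4)] == [6, 50, -14, 22]

# ...while a wrong key silently substitutes decoy constants, so an adversary
# inspecting the netlist alone cannot tell true constants from decoys.
wrong_key = [1, 0, 1, 0]
print([tmcm(2, t, wrong_key) for t in range(4)])  # [10, 38, -6, 26]
```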
Despite their tremendous success, deep neural networks are exposed to serious IP infringement risks. Given a target deep model, if an attacker has full knowledge of it, the model can easily be stolen by fine-tuning. Even if only its output is accessible, a surrogate model can be trained through student-teacher learning by generating many input-output training pairs. Deep model IP protection is therefore important and necessary, yet it remains seriously under-researched. In this work, we propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks. Specifically, a special task-agnostic barrier is added after the target model, which embeds a unified and invisible watermark into its outputs. When an attacker trains a surrogate model using the input-output pairs of the barrier-wrapped target model, the hidden watermark will be learned and can be extracted afterwards. To support watermarks ranging from binary bits to high-resolution images, a deep invisible watermarking mechanism is designed. By jointly training the target model and the watermark embedding, the extra barrier can even be absorbed into the target model. Through extensive experiments, we demonstrate the robustness of the proposed framework, which can resist attacks with different network structures and objective functions.
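As a minimal numerical sketch of the barrier idea, the toy scheme below embeds a faint additive watermark into every model output and later detects it by correlation. This is an assumption-laden simplification: the paper's mechanism is a learned deep invisible watermark, not a fixed additive pattern, and the stand-in "target model" here is invented.

```python
# Hypothetical sketch: a "barrier" that stamps a faint, unified watermark into
# every model output, plus a correlation test to detect it later. This is a toy
# additive scheme; the paper learns the embedding/extraction with deep networks.
import numpy as np

rng = np.random.default_rng(0)
WATERMARK = rng.standard_normal((64, 64))  # hidden owner-specific pattern
ALPHA = 0.005                              # embedding strength (near-invisible)

def target_model(x):
    # Stand-in for a low-level vision network (e.g., a denoiser).
    return np.clip(0.9 * x + 0.05, 0.0, 1.0)

def barrier(y):
    # Embed the watermark into the model's output before releasing it.
    return y + ALPHA * WATERMARK

def detect(y_suspect, y_clean):
    # Correlate the residual against the known watermark to claim ownership.
    residual = y_suspect - y_clean
    return float(np.corrcoef(residual.ravel(), WATERMARK.ravel())[0, 1])

x = rng.uniform(0.0, 1.0, (64, 64))
marked = barrier(target_model(x))
unmarked = target_model(x) + ALPHA * rng.standard_normal((64, 64))

print(detect(marked, target_model(x)))    # ~1.0 -> watermark detected
print(detect(unmarked, target_model(x)))  # ~0.0 -> no watermark
```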
