
Social and Governance Implications of Improved Data Efficiency

Published by: Aaron Tucker
Publication date: 2020
Research field: Informatics Engineering
Language: English





Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socioeconomic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency -- as more actors gain access to any level of capability -- the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts.


Read also

Autonomous Vehicles (AVs) raise important social and ethical concerns, especially about accountability, dignity, and justice. We focus on the specific concerns arising from how AV technology will affect the lives and livelihoods of professional and semi-professional drivers. Whereas previous studies of such concerns have focused on the opinions of experts, we seek to understand these ethical and societal challenges from the perspectives of the drivers themselves. To this end, we adopted a qualitative research methodology based on semi-structured interviews. This is an established social science methodology that helps understand the core concerns of stakeholders in depth by avoiding the biases of superficial methods such as surveys. We find that whereas drivers agree with the experts that AVs will significantly impact transportation systems, they are apprehensive about the prospects for their livelihoods and dismiss the suggestions that driving jobs are unsatisfying and their profession does not merit protection. By showing how drivers differ from the experts, our study has ramifications beyond AVs to AI and other advanced technologies. Our findings suggest that qualitative research applied to the relevant, especially disempowered, stakeholders is essential to ensuring that new technologies are introduced ethically.
Abigail Z. Jacobs (2021)
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process -- where social, cultural, and political values are implicitly encoded in sociotechnical systems -- is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: governance about which systems are fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful to elicit and characterize the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI -- responsible or otherwise -- revealing paths to more effective interventions.
With the recent advances of the Internet of Things, the increasing accessibility of ubiquitous computing resources and mobile devices, the prevalence of rich media contents, and the ensuing social, economic, and cultural changes, computing technology and applications have evolved quickly over the past decade. They now go beyond personal computing, facilitating collaboration and social interactions in general, causing a quick proliferation of social relationships among IoT entities. The increasing number of these relationships and their heterogeneous social features have led to computing and communication bottlenecks that prevent the IoT network from taking advantage of these relationships to improve the offered services and customize the delivered content, a problem known as relationship explosion. On the other hand, the quick advances in artificial intelligence applications in social computing have led to the emergence of a promising research field known as Artificial Social Intelligence (ASI) that has the potential to tackle the social relationship explosion problem. This paper discusses the role of IoT in social relationship detection and management, the problem of social relationship explosion in IoT, and reviews the proposed solutions using ASI, including social-oriented machine-learning and deep-learning techniques.
In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate. Compute-efficient machine learning is the use of compressed network architectures that show only marginal decreases in accuracy. Federated learning augments the first pillar's impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchases with their values.
Like any technology, AI systems come with inherent risks and potential benefits. They come with potential disruption of established norms and methods of work, societal impacts, and externalities. One may think of the adoption of technology as a form of social contract, which may evolve or fluctuate in time, scale, and impact. It is important to keep in mind that for AI, meeting the expectations of this social contract is critical, because recklessly driving the adoption and implementation of unsafe, irresponsible, or unethical AI systems may trigger serious backlash against the industry and academia involved, which could take decades to resolve, if not seriously harm society. For the purpose of this paper, we consider that a social contract arises when there is sufficient consensus within society to adopt and implement this new technology. As such, to enable a social contract to arise for the adoption and implementation of AI, developing: 1) A socially accepted purpose, through 2) A safe and responsible method, with 3) A socially aware level of risk involved, for 4) A socially beneficial outcome, is key.
