Standardisation is an important component in the maturation of any field of technology. It contributes to the formation of a recognisable identity and enables interactions with a wider community. This article reviews past and current standardisation initiatives in the field of Open Source Hardware (OSH). While early initiatives focused on aspects such as licensing, intellectual property and documentation formats, recent efforts extend to ways for users to exercise their rights under open licences and to keep OSH projects discoverable and accessible online. We specifically introduce two standards that are currently being released, DIN SPEC 3105 and the Open Know-How Manifest Specification, and call for early users and contributors. Finally, we reflect on challenges around standardisation in the community and on relevant areas for future development such as an open tool chain, modularity and hardware-specific interface standards.
Computational research and data analytics increasingly rely on complex ecosystems of open source software (OSS) libraries -- curated collections of reusable code that programmers import to perform a specific task. Software documentation for these libraries is crucial in helping programmers and analysts know what libraries are available and how to use them. Yet documentation for open source software libraries is widely considered low-quality. This article is a collaboration between CSCW researchers and contributors to data analytics OSS libraries, based on ethnographic fieldwork and qualitative interviews. We examine the formats, practices, and challenges of documentation in these largely volunteer-based projects. Many different kinds and formats of documentation exist around such libraries, playing a variety of educational, promotional, and organizational roles. The work behind documentation is similarly multifaceted, including writing, reviewing, maintaining, and organizing it. Different aspects of documentation work require contributors to have different sets of skills and to overcome various social and technical barriers. Finally, most of our interviewees do not report high levels of intrinsic enjoyment in doing documentation work (compared to writing code). Their motivation is affected by personal and project-specific factors, such as the perceived level of credit for documentation work versus more technical tasks like adding new features or fixing bugs. In studying documentation work for data analytics OSS libraries, we gain a new window into the changing practices of data-intensive research, and we help practitioners better understand how to support this often invisible and infrastructural work in their projects.
In this research article, we explore the use of a design process for adapting existing cyber risk assessment standards to allow the calculation of economic impact from IoT cyber risk. The paper presents a new model that includes a design process with new risk assessment vectors specific to IoT cyber risk. To design these vectors, the study applied a range of methodologies, including literature review, empirical study and comparative study, followed by theoretical analysis and grounded theory. An epistemological framework emerges from applying the constructivist grounded theory methodology to draw on knowledge from existing cyber risk frameworks, models and methodologies. This framework identifies the current gaps in cyber risk standards and policies, and defines the design principles of future cyber risk impact assessment. The core contribution of the article is therefore a new model for impact assessment of IoT cyber risk.
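The article describes its model conceptually rather than as code. As a minimal, hypothetical sketch of the kind of calculation such an impact model targets, the Python below combines per-vector compromise likelihoods with estimated economic losses into an expected annual impact; all vector names and figures are invented for illustration and are not taken from the paper.

# Hypothetical sketch: expected economic impact aggregated over
# illustrative IoT risk vectors; likelihoods and losses are invented,
# not the paper's model.
iot_risk_vectors = {
    # vector name: (annual likelihood of compromise, estimated loss in USD)
    "unpatched_firmware":  (0.30, 120_000),
    "default_credentials": (0.20,  80_000),
    "insecure_api":        (0.10, 250_000),
}

def expected_annual_impact(vectors):
    # Annual loss expectancy: sum of likelihood x loss over all vectors.
    return sum(p * loss for p, loss in vectors.values())

print(f"Expected annual impact: ${expected_annual_impact(iot_risk_vectors):,.0f}")
# prints: Expected annual impact: $77,000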
This report is a high-level summary analysis of the 2017 GitHub Open Source Survey dataset, presenting frequency counts, proportions, and frequency or proportion bar plots for every question asked in the survey.
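As a hedged illustration of the summaries the report describes, the Python sketch below computes frequency counts and proportions for one survey question and renders a proportion bar plot with pandas and matplotlib; the file name and column name are assumptions, not the dataset's actual identifiers.

import pandas as pd
import matplotlib.pyplot as plt

# Load the survey responses; path and column name are hypothetical.
df = pd.read_csv("github_survey_2017.csv")

counts = df["OSS_CONTRIBUTOR"].value_counts()   # frequency counts
proportions = counts / counts.sum()             # proportions

proportions.plot(kind="bar")                    # proportion bar plot
plt.ylabel("Proportion of respondents")
plt.xlabel("Response")
plt.tight_layout()
plt.show()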
Research institutions are bound to contribute to greenhouse gas (GHG) emission reduction efforts for several reasons. First, part of the scientific community's research deals with climate change issues. Second, scientists contribute to students' education: they must be consistent and act as role models. Third, the literature on the carbon footprint of researchers points to the high level of some individual footprints. In a quest for consistency and role models, scientists, teams of scientists and universities have started to quantify their carbon footprints and to debate reduction options. Measuring the carbon footprint of research activities, however, requires tools designed to tackle its specific features. In this paper, we present an open-source web application, GES 1point5, developed by an interdisciplinary team of scientists from several research labs in France. GES 1point5 is specifically designed to estimate the carbon footprint of research activities in France. It operates at the scale of research labs, i.e. laboratoires, which are the social structures around which research is organized in France and the smallest decision-making entities in the French research system. The application allows French research labs to compute their own carbon footprint along a standardized, open protocol. The data collected in a rapidly growing network of labs will be used as part of the Labos 1point5 project to estimate France's research carbon footprint. At the time of submitting this manuscript, 89 research labs had engaged with GES 1point5 to estimate their greenhouse gas emissions. We expect that international adoption of GES 1point5 (adapted to fit domestic specificities) could contribute to establishing a global understanding of the drivers of the research carbon footprint worldwide and of the levers to decrease it.
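GES 1point5's own protocol is not reproduced here, but the generic accounting step behind any such tool is activity data multiplied by an emission factor, summed over emission sources. The Python sketch below illustrates that step with invented categories and factors; it is not GES 1point5's code, nor its official emission factors.

# Hedged sketch of activity-based carbon accounting; all categories,
# factors, and activity levels are illustrative.
emission_factors = {         # kg CO2e per unit of activity (illustrative)
    "plane_km":        0.25,
    "train_km":        0.01,
    "electricity_kwh": 0.06,
}

activity_data = {            # one lab's annual activity (illustrative)
    "plane_km":        120_000,
    "train_km":         80_000,
    "electricity_kwh": 300_000,
}

# Footprint = sum over sources of (activity x emission factor).
footprint_kg = sum(activity_data[k] * emission_factors[k] for k in activity_data)
print(f"Annual footprint: {footprint_kg / 1000:.1f} t CO2e")
# prints: Annual footprint: 48.8 t CO2e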
There has been rapidly growing interest in the use of algorithms in hiring, especially as a means to address or mitigate bias. Yet, to date, little is known about how these methods are used in practice. How are algorithmic assessments built, validated, and examined for bias? In this work, we document and analyze the claims and practices of companies offering algorithms for employment assessment. In particular, we identify vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), document what they have disclosed about their development and validation procedures, and evaluate their practices, focusing particularly on efforts to detect and mitigate bias. Our analysis considers both technical and legal perspectives. Technically, we consider the various choices vendors make regarding data collection and prediction targets, and explore the risks and trade-offs that these choices pose. We also discuss how algorithmic de-biasing techniques interface with, and create challenges for, antidiscrimination law.
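As one concrete example of the kind of bias check discussed (not a method the article prescribes), the Python sketch below computes the adverse impact ratio behind the EEOC "four-fifths" rule, under which a protected group's selection rate below 80% of the reference group's rate is conventionally flagged; all applicant counts are hypothetical.

def selection_rate(selected, applicants):
    # Fraction of applicants who were selected.
    return selected / applicants

# Hypothetical screening outcomes for two applicant groups.
rate_protected = selection_rate(selected=30, applicants=100)  # 0.30
rate_reference = selection_rate(selected=50, applicants=100)  # 0.50

ratio = rate_protected / rate_reference   # adverse impact ratio
print(f"Adverse impact ratio: {ratio:.2f}")
# prints 0.60: below the 0.80 threshold, so this screen would be
# flagged for adverse impact under the four-fifths rule.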