Web accessibility for disabled people has posed a challenge to civilized societies that claim to uphold the principles of equal opportunity and non-discrimination. Concrete measures have been taken to narrow the digital divide between non-disabled and disabled users of Internet technology. These efforts have resulted in the enactment of legislation, mass awareness of the discriminatory nature of the accessibility issue, and the development of commensurate technological tools for building and testing Web accessibility. The World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI) has framed a comprehensive document comprising a set of guidelines to make Web sites accessible to users with disabilities. This paper discusses the issues and aspects surrounding Web accessibility. The details and scope are kept limited in keeping with the aim of the paper, which is to create awareness and to provide a basis for in-depth investigation.
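The WAI guidelines referenced above (the Web Content Accessibility Guidelines) are partly machine-checkable. As a purely illustrative sketch, not a tool described in the paper, the following Python snippet flags two common, easily detectable violations, images without text alternatives and form controls without associated labels, assuming the page markup is available as an HTML string:

```python
from bs4 import BeautifulSoup


def basic_accessibility_checks(html):
    """Flag a few easily detectable WCAG issues in an HTML document.

    Illustrative only; real accessibility testing also requires manual review.
    """
    soup = BeautifulSoup(html, "html.parser")
    issues = []

    # WCAG 1.1.1: non-text content needs a text alternative.
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            issues.append(f"Image without alt text: {img.get('src', '<no src>')}")

    # WCAG 1.3.1 / 3.3.2: form controls should have a programmatically
    # associated label (via <label for=...> or an aria-label attribute).
    labelled_ids = {lbl.get("for") for lbl in soup.find_all("label") if lbl.get("for")}
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "button"):
            continue
        if field.get("id") not in labelled_ids and not field.get("aria-label"):
            issues.append(f"Form control without label: {field.get('name', '<unnamed>')}")

    return issues


if __name__ == "__main__":
    sample = '<img src="logo.png"><input type="text" name="email">'
    for issue in basic_accessibility_checks(sample):
        print(issue)
```

Automated checks of this kind cover only a fraction of the WAI guidelines; criteria such as meaningful link text or logical reading order still need human judgement.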
We consider real-time timely tracking of the infection status (e.g., COVID-19) of individuals in a population. In this work, a health care provider wants to detect infected people, as well as people who have recovered from the disease, as quickly as possible. In order to measure the timeliness of the tracking process, we use the long-term average difference between the actual infection status of the people and their real-time estimate by the health care provider based on the most recent test results. We first find an analytical expression for this average difference for given test rates and given infection and recovery rates of the people. Next, we propose an alternating minimization based algorithm to minimize this average difference. We observe that if the total test rate is limited, instead of testing all members of the population equally, only a portion of the population is tested, based on their infection and recovery rates. We also observe that increasing the total test rate helps track the infection status better. In addition, an increased population size increases the diversity of people with different infection and recovery rates, which may be exploited to spend the testing capacity more efficiently, thereby improving the system performance. Finally, depending on the health care provider's preferences, the test rate allocation can be altered to detect either the infected people or the recovered people more quickly.
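As a rough illustration of the tracking metric described above (not the paper's analytical expression), the sketch below simulates a single person whose true status alternates between healthy and infected at assumed exponential rates, is tested at a Poisson rate, and whose estimate at the provider is the most recent test result; the long-term average error is then estimated empirically. All parameter names and values are illustrative assumptions.

```python
import random


def average_tracking_error(infection_rate, recovery_rate, test_rate,
                           horizon=200_000, dt=0.01, seed=0):
    """Estimate the long-term average |true status - estimate| for one person.

    The true status alternates healthy (0) <-> infected (1) with the given
    rates; tests arrive as a Poisson process with rate `test_rate`, and the
    provider's estimate is the most recent test result. Discretized with
    step `dt`; purely illustrative of the metric, not the paper's derivation.
    """
    rng = random.Random(seed)
    status, estimate = 0, 0
    error_area = 0.0
    for _ in range(horizon):
        # Transitions of the true infection status.
        if status == 0 and rng.random() < infection_rate * dt:
            status = 1
        elif status == 1 and rng.random() < recovery_rate * dt:
            status = 0
        # A test reveals the true status and updates the provider's estimate.
        if rng.random() < test_rate * dt:
            estimate = status
        error_area += abs(status - estimate) * dt
    return error_area / (horizon * dt)


if __name__ == "__main__":
    # Higher test rates should track the infection status more closely.
    for s in (0.1, 0.5, 2.0):
        print(f"test rate {s}: avg error {average_tracking_error(0.2, 0.5, s):.3f}")
```

With a constrained total test rate across a whole population, such per-person error terms would be traded off against one another, which is where the allocation question addressed by the alternating minimization arises.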
Pedestrian accessibility is an important factor in urban transport and land use policy and is critical for creating healthy, sustainable cities. Developing and evaluating indicators that measure inequalities in pedestrian accessibility can help planners and policymakers benchmark and monitor the progress of city planning interventions. However, measuring and assessing indicators of urban design and transport features at high resolution worldwide, in a way that enables city comparisons, is challenging due to the limited availability of official, high-quality, and comparable spatial data, as well as of spatial analysis tools offering customizable frameworks for indicator construction and analysis. To address these challenges, this study develops an open source software framework to construct pedestrian accessibility indicators for cities using open and consistent data. It presents a generalized method to consistently measure pedestrian accessibility at high resolution and at a spatially aggregated scale, allowing for both within- and between-city analyses. The open source and open data methods developed in this study can be extended to other cities worldwide to support local planning and policymaking. The software is made publicly available for reuse in an open repository.
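To make the notion of a high-resolution accessibility indicator concrete, here is a minimal, self-contained sketch (not the paper's software, which builds on open data such as OpenStreetMap) that computes, for each origin point on a toy pedestrian network, the network distance to the nearest amenity and a binary "access within 500 m" indicator that could then be spatially aggregated; the graph, node names, and threshold are illustrative assumptions.

```python
import networkx as nx


def access_indicators(graph, origins, amenities, threshold_m=500.0):
    """For each origin node, return (distance to nearest amenity, within-threshold flag).

    `graph` is an undirected pedestrian network whose edges carry a 'length'
    attribute in metres; a toy stand-in for an OpenStreetMap walking network.
    """
    results = {}
    for origin in origins:
        # Shortest network distance from this origin to every reachable node.
        dists = nx.single_source_dijkstra_path_length(graph, origin, weight="length")
        nearest = min((dists[a] for a in amenities if a in dists), default=float("inf"))
        results[origin] = (nearest, nearest <= threshold_m)
    return results


if __name__ == "__main__":
    # Toy network: nodes are street intersections, edge lengths in metres.
    G = nx.Graph()
    G.add_weighted_edges_from(
        [("a", "b", 200), ("b", "c", 250), ("c", "d", 400), ("b", "d", 600)],
        weight="length",
    )
    for origin, (dist, ok) in access_indicators(G, ["a", "d"], ["c"]).items():
        print(f"{origin}: nearest amenity {dist:.0f} m, within 500 m: {ok}")
```

At city scale, one would run such queries from a dense grid of origin points (or from every residential address) and aggregate the binary indicator to neighbourhood statistics; a multi-source search from the amenities is the usual efficiency refinement.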
Ethics in the emerging world of data science are often discussed through cautionary tales about the dire consequences of missteps taken by high-profile companies or organizations. We take a different approach by foregrounding the ways that ethics are implicated in the day-to-day work of data science, focusing on instances in which data scientists recognize, grapple with, and conscientiously respond to ethical challenges. This paper presents a case study of ethical dilemmas that arose in a data science for social good (DSSG) project focused on improving navigation for people with limited mobility. We describe how this particular DSSG team responded to those dilemmas, and how those responses gave rise to still more dilemmas. While the details of the case discussed here are unique, the ethical dilemmas they illuminate can commonly be found across many DSSG projects. These include: the risk of exacerbating disparities; the thorniness of algorithmic accountability; the evolving opportunities for mischief presented by new technologies; the subjective and value-laden interpretations at the heart of any data-intensive project; the potential for data to amplify or mute particular voices; the possibility of privacy violations; and the folly of technological solutionism. Based on our tracing of the team's responses to these dilemmas, we distill lessons for an ethical data science practice that can be applied more generally across DSSG projects. Specifically, this case experience highlights the importance of: 1) setting the scene early on for ethical thinking; 2) recognizing ethical decision-making as an emergent phenomenon intertwined with the quotidian work of data science for social good; and 3) approaching ethical thinking as a thoughtful and intentional balancing of priorities rather than a binary differentiation between right and wrong.
In the last few years, the Web has progressively acquired the status of an infrastructure for social computation that allows researchers to coordinate the cognitive abilities of human agents in on-line communities so as to steer the collective user activity towards predefined goals. This general trend is also triggering the adoption of web-games as a very interesting laboratory for running experiments in the social sciences and wherever the contribution of human beings is crucially required for research purposes. Nowadays, while the number of on-line users has been growing steadily, there is still a need for systematization in the approach to the Web as a laboratory. In this paper we present Experimental Tribe (XTribe in short), a novel general-purpose web-based platform for web-gaming and social computation. Ready to use and already operational, XTribe aims at drastically reducing the effort required to develop and run web experiments. XTribe has been designed to speed up the implementation of those general aspects of web experiments that are independent of the specific experiment content. For example, XTribe takes care of user management by handling registration and profiles and, in the case of multi-player games, it provides the necessary user grouping functionalities. XTribe also provides communication facilities to easily achieve both bidirectional and asynchronous communication. From a practical point of view, researchers are left with only the task of designing and implementing the game interface and logic of their experiment, over which they maintain full control. Moreover, XTribe acts as a repository of different scientific experiments, thus realizing a sort of showcase that stimulates users' curiosity, enhances their participation, and helps researchers in recruiting volunteers. A sketch of this division of labour follows below.
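To illustrate the division of labour described above, where the platform handles registration, grouping, and message passing while researchers supply only the game logic, here is a hypothetical Python sketch; all class and method names are illustrative assumptions and do not reflect XTribe's actual (web-based) API.

```python
import queue


class Platform:
    """Stand-in for platform services: player inboxes and asynchronous message relay.

    Hypothetical; XTribe's real interface is web-based and not reproduced here.
    """

    def __init__(self, players):
        self.inboxes = {p: queue.Queue() for p in players}

    def send(self, recipient, message):
        # Asynchronous delivery: the message waits until the recipient polls.
        self.inboxes[recipient].put(message)

    def receive(self, player):
        inbox = self.inboxes[player]
        return None if inbox.empty() else inbox.get()


class GuessingGame:
    """The only part a researcher would write: the experiment's game logic."""

    def __init__(self, platform, players):
        self.platform = platform
        self.players = players

    def play_round(self, guesses):
        # Toy rule: players are notified whether all their guesses agree.
        agreed = len(set(guesses.values())) == 1
        for p in self.players:
            self.platform.send(p, "match!" if agreed else "no match")


if __name__ == "__main__":
    players = ["alice", "bob"]
    platform = Platform(players)
    game = GuessingGame(platform, players)
    game.play_round({"alice": "tree", "bob": "tree"})
    print(platform.receive("alice"))  # -> "match!"
```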
Systems incorporating biometric technologies have become ubiquitous in personal, commercial, and governmental identity management applications. Both cooperative (e.g. access control) and non-cooperative (e.g. surveillance and forensics) systems have benefited from biometrics. Such systems rely on the uniqueness of certain biological or behavioural characteristics of human beings, which enables individuals to be reliably recognised using automated algorithms. Recently, however, there has been a wave of public and academic concern regarding the existence of systemic bias in automated decision systems (including biometrics). Most prominently, face recognition algorithms have often been labelled as racist or biased by the media, non-governmental organisations, and researchers alike. The main contributions of this article are: (1) an overview of the topic of algorithmic bias in the context of biometrics, (2) a comprehensive survey of the existing literature on biometric bias estimation and mitigation, (3) a discussion of the pertinent technical and social matters, and (4) an outline of the remaining challenges and future work items, both from technological and social points of view.