Numerous accessibility features have been developed and included in consumer operating systems to provide people with a variety of disabilities additional ways to access computing devices. Unfortunately, many users, especially older adults who are more likely to experience ability changes, are not aware of these features or do not know which combination to use. In this paper, we first quantify this problem via a survey with 100 participants, demonstrating that very few people are aware of built-in accessibility features on their phones. These observations led us to investigate accessibility recommendation as a way to increase awareness and adoption. We developed four prototype recommenders that span different accessibility categories, which we used to collect insights from 20 older adults. Our work demonstrates the need to increase awareness of existing accessibility features on mobile devices, and shows that automated recommendation could help people find beneficial accessibility features.
Social media platforms support the sharing of written text, video, and audio. All of these formats may be inaccessible to people who are deaf or hard of hearing (DHH), particularly those who primarily communicate via sign language, whom we call Deaf signers. We study how Deaf signers engage with social media platforms, focusing on how they share content and the barriers they face. We employ a mixed-methods approach involving seven in-depth interviews and a survey of a larger population (n = 60). We find that Deaf signers share most often in written English, despite their desire to share in sign language. We further identify key areas of difficulty in consuming content (e.g., lack of captions for spoken content in videos) and producing content (e.g., captioning signed videos, signing into a phone camera) on social media platforms. Our results both provide novel insights into social media use by Deaf signers and reinforce prior findings on DHH communication more generally, while revealing potential ways to make social media platforms more accessible to Deaf signers.
Accessibility research has grown substantially in the past few decades, yet there has been no literature review of the field. To understand current and historical trends, we created and analyzed a dataset of accessibility papers appearing at CHI and ASSETS since ASSETS' founding in 1994. We qualitatively coded areas of focus and methodological decisions for the past 10 years (2010-2019, N=506 papers), and analyzed paper counts and keywords over the full 26 years (N=836 papers). Our findings highlight areas that have received disproportionate attention and those that are underserved; for example, over 43% of papers in the past 10 years are on accessibility for blind and low vision people. We also capture common study characteristics, such as the roles of disabled and nondisabled participants as well as sample sizes (e.g., a median of 13 for participant groups with disabilities and older adults). We close by critically reflecting on gaps in the literature and offering guidance for future work in the field.
As cities continue to grow globally, air pollution is increasing at an alarming rate, causing a significant negative impact on public health. One way to mitigate this negative impact is to regulate the producers of such pollution through policy implementation and enforcement. CleanAirNowKC (CAN-KC) is an environmental justice organization based in Kansas City (KC), Kansas. As part of its organizational objectives, CAN-KC has to date deployed nine PurpleAir air quality sensors in different locations about which the community has expressed concern. In this paper, we present an interactive map that helps community members monitor air quality efficiently. The system also allows for reporting and tracking industrial emissions and toxic releases, which will further help identify major contributors to pollution. These resources can serve an important role as evidence to support advocacy for community-driven, just policies that improve air quality regulation in Kansas City.
Over the past decade, extended reality (XR) applications have increasingly been used as assistive technology for people with low vision (LV). Here we present a systematic literature review of 216 publications from 109 different venues assessing the potential of XR technology to serve not just as a visual accessibility aid but also as a tool to study perception and behavior in people with low vision and blind people whose vision was restored with a visual neuroprosthesis. These technologies may be used to visually enhance a person's environment for completing daily activities, train LV participants with residual vision, or simulate either a specific visual impairment or the artificial vision generated by a prosthetic implant. We also highlight the need for adequate empirical evaluation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids.
Super-resolution (SR) is a coveted image processing technique for mobile apps ranging from basic camera apps to mobile health. Existing SR algorithms rely on deep learning models with significant memory requirements, so they have yet to be deployed on mobile devices and instead operate in the cloud to achieve feasible inference time. This shortcoming prevents existing SR methods from being used in applications that require near real-time latency. In this work, we demonstrate state-of-the-art latency and accuracy for on-device super-resolution using a novel hybrid architecture called SplitSR and a novel lightweight residual block called SplitSRBlock. The SplitSRBlock supports channel-splitting, allowing the residual blocks to retain spatial information while reducing the computation in the channel dimension. SplitSR has a hybrid design consisting of standard convolutional blocks and lightweight residual blocks, allowing people to tune SplitSR for their computational budget. We evaluate our system on a low-end ARM CPU, demonstrating both higher accuracy and up to 5 times faster inference than previous approaches. We then deploy our model onto a smartphone in an app called ZoomSR to demonstrate the first-ever instance of on-device, deep learning-based SR. We conducted a user study with 15 participants to assess the perceived quality of images post-processed by SplitSR. Relative to bilinear interpolation, the existing standard for on-device SR, participants showed a statistically significant preference when looking at both images (Z=-9.270, p<0.01) and text (Z=-6.486, p<0.01).
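To make the channel-splitting idea concrete, the sketch below shows one way a SplitSR-style residual block could be written in PyTorch. The class name, the split ratio alpha, and the layer choices are illustrative assumptions rather than the authors' released implementation; the sketch only demonstrates the general technique of convolving a fraction of the channels while passing the remainder through untouched.

# Minimal sketch of a channel-splitting residual block in PyTorch.
# SplitSRBlock here is a hypothetical reconstruction, not the authors' code;
# alpha controls what fraction of channels receives convolutions.
import torch
import torch.nn as nn

class SplitSRBlock(nn.Module):
    def __init__(self, channels: int, alpha: float = 0.25):
        super().__init__()
        # Only the first `active` channels are convolved; the rest are copied.
        self.active = max(1, int(channels * alpha))
        self.conv = nn.Sequential(
            nn.Conv2d(self.active, self.active, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(self.active, self.active, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension, process one part, keep the other.
        a, b = x[:, :self.active], x[:, self.active:]
        a = a + self.conv(a)  # residual connection on the active split only
        return torch.cat([a, b], dim=1)

# Usage: a 64-channel feature map keeps its shape after the block.
# x = torch.randn(1, 64, 32, 32)
# y = SplitSRBlock(64)(x)  # y.shape == (1, 64, 32, 32)

Because only alpha of the channels pass through the 3x3 convolutions, the per-block compute shrinks roughly in proportion to alpha while spatial resolution is preserved, which is the trade-off the abstract describes for tuning SplitSR to a computational budget.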