
Mobile Sound Recognition for the Deaf and Hard of Hearing

Added by Leonardo Fanzeres
Publication date: 2018
Language: English





Human perception of surrounding events is strongly dependent on audio cues. Thus, acoustic insulation can seriously impact situational awareness. We present an exploratory study in the domain of assistive computing, eliciting requirements and presenting solutions to problems found in the development of an environmental sound recognition system, which aims to assist deaf and hard-of-hearing people in the perception of sounds. To take advantage of smartphones' computational ubiquity, we propose a system that executes all processing on the device itself, from audio feature extraction to recognition and visual presentation of results. Our application also presents the confidence level of the classification to the user. A test of the system conducted with deaf users provided important and inspiring feedback from participants.
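To make the pipeline the abstract outlines more concrete, the following is a minimal sketch of a sound recognition flow that reports a class label together with a confidence value. It is written in Python with librosa and scikit-learn purely for illustration; the feature set, classifier, function names, and placeholder dataset variables are assumptions, not the paper's actual on-device implementation.

```python
# Sketch of an environmental sound recognition pipeline with a confidence
# estimate: feature extraction -> classification -> (label, confidence).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path: str) -> np.ndarray:
    """Load an audio clip and summarize it as mean MFCC coefficients."""
    signal, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

def train(train_paths, train_labels) -> RandomForestClassifier:
    """train_paths/train_labels are hypothetical placeholders for a labeled
    set of environmental sounds (doorbell, alarm, dog bark, ...)."""
    X = np.stack([extract_features(p) for p in train_paths])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, train_labels)
    return clf

def recognize(clf: RandomForestClassifier, path: str):
    """Return the predicted sound class and the classifier's confidence."""
    x = extract_features(path).reshape(1, -1)
    probs = clf.predict_proba(x)[0]
    best = int(np.argmax(probs))
    return clf.classes_[best], float(probs[best])  # e.g. ("doorbell", 0.87)
```

In an actual mobile deployment, the same stages would run on the device itself, as the abstract describes, rather than in a desktop Python environment.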



Related Research

The dynamics of gravitating astrophysical systems such as black holes and neutron stars are fascinatingly complex, offer some of nature's most spectacular phenomena, and capture the public's imagination in ways that few subjects can. Here, we describe AstroDance, a multimedia project to engage deaf and hard-of-hearing (DHH) students in astronomy and gravitational physics. AstroDance incorporates multiple means of representation of scientific concepts and was performed primarily for secondary and post-secondary audiences at roughly 20 venues in the northeastern US prior to the historic first detection of gravitational waves. As part of the AstroDance project, we surveyed roughly 1000 audience members, split about evenly between hearing and DHH attendees. While both groups reported statistically equivalent high rates of enjoyment of the performance, the DHH group reported a statistically significant increase in how much they learned about science compared to the hearing audience. Our findings suggest that multi-sensory approaches benefit both hearing and deaf audiences and enable accessible participation for broader groups.
Social media platforms support the sharing of written text, video, and audio. All of these formats may be inaccessible to people who are deaf or hard of hearing (DHH), particularly those who primarily communicate via sign language, people who we call Deaf signers. We study how Deaf signers engage with social platforms, focusing on how they share content and the barriers they face. We employ a mixed-methods approach involving seven in-depth interviews and a survey of a larger population (n = 60). We find that Deaf signers share the most in written English, despite their desire to share in sign language. We further identify key areas of difficulty in consuming content (e.g., lack of captions for spoken content in videos) and producing content (e.g., captioning signed videos, signing into a phone camera) on social media platforms. Our results both provide novel insights into social media use by Deaf signers and reinforce prior findings on DHH communication more generally, while revealing potential ways to make social media platforms more accessible to Deaf signers.
The number of visually impaired or blind (VIB) people in the world is estimated at several hundred million. Based on a series of interviews with VIB people and developers of assistive technology, this paper provides a survey of machine-learning-based mobile applications and identifies the most relevant ones. We discuss the functionality of these apps, how they align with the needs and requirements of VIB users, and how they can be improved with techniques such as federated learning and model compression. As a result of this study, we identify promising future directions of research in mobile perception, micro-navigation, and content summarization.
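As an illustration of one improvement technique named in that survey, model compression, the following is a minimal sketch of post-training quantization using the TensorFlow Lite converter. The model directory and output file name are hypothetical placeholders; the surveyed applications' actual models and tooling are not described in the abstract.

```python
# Shrink a trained TensorFlow SavedModel for on-device use via
# post-training quantization (one common form of model compression).
import tensorflow as tf

# "saved_vib_model/" is an assumed path to a trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_vib_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

# The compact .tflite artifact can be bundled with a mobile app for
# inference with a smaller memory and compute footprint.
with open("vib_model.tflite", "wb") as f:
    f.write(tflite_model)
```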
Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences by using MAR devices to provide universal accessibility to digital content. Over the past 20 years, a number of MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: 1) MAR applications; 2) MAR visualisation techniques adaptive to user mobility and contexts; 3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and 4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss the important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.
SWift (SignWriting improved fast transcriber) is an advanced editor for SignWriting (SW). At present, SW is a promising alternative for providing documents in an easy-to-grasp written form of (any) Sign Language, the gestural way of communication widely adopted by the deaf community. SWift was developed for SW users, either deaf or not, to support collaboration and the exchange of ideas. The application allows composing and saving desired signs using elementary components, called glyphs. The procedure that was devised guides and simplifies the editing process. SWift aims at breaking the electronic barriers that keep the deaf community away from ICT in general, and from e-learning in particular. The editor can be contained in a pluggable module; therefore, it can be integrated everywhere the use of SW is an advisable alternative to written verbal language, which often hinders information grasping by deaf users.
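To illustrate the idea of composing a sign from elementary glyphs, here is a minimal Python sketch of one possible representation. The class and field names and the glyph identifiers are illustrative assumptions, not SWift's actual data model.

```python
# A sign represented as a named collection of positioned glyphs.
from dataclasses import dataclass, field

@dataclass
class Glyph:
    symbol_id: str   # identifier of an elementary SignWriting component (hypothetical)
    x: int           # horizontal position within the sign box
    y: int           # vertical position within the sign box

@dataclass
class Sign:
    name: str
    glyphs: list[Glyph] = field(default_factory=list)

    def add_glyph(self, glyph: Glyph) -> None:
        """Compose the sign incrementally, one glyph at a time."""
        self.glyphs.append(glyph)

# Example: building a sign from two placeholder glyphs before saving it.
hello = Sign(name="hello")
hello.add_glyph(Glyph(symbol_id="handshape-01", x=40, y=30))  # hypothetical glyph
hello.add_glyph(Glyph(symbol_id="movement-07", x=55, y=20))   # hypothetical glyph
```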
