Cochlear implant users struggle to understand speech in reverberant environments. To restore speech perception, signal components dominated by reverberant reflections can be removed from the cochlear implant stimulus. These artifacts can be identified and removed by applying a matrix of gain values, a technique referred to as time-frequency masking. Gain values are determined by an oracle algorithm that uses knowledge of the undistorted signal to minimize retention of the signal components dominated by reverberant reflections. In practice, gain values are estimated from the distorted signal, with the oracle algorithm providing the estimation objective. Different oracle techniques exist for determining gain values, and each technique must be parameterized to set the amount of signal retention. This work assesses which oracle masking strategies and parameterizations lead to the best improvements in speech intelligibility for cochlear implant users in reverberant conditions, using online speech intelligibility testing of normal-hearing individuals listening to vocoded speech.
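The oracle gain computation described above can be illustrated with a minimal sketch. The ideal ratio mask shown here is one common oracle choice (the abstract does not name a specific one); the exponent `beta` is the kind of parameterization the study compares, and all variable names are illustrative:

```python
import numpy as np

def ideal_ratio_mask(direct_power, reverb_power, beta=0.5):
    # Oracle gain per time-frequency bin: fraction of total power
    # contributed by the direct path, compressed by exponent beta.
    # Requires knowledge of the undistorted (direct) signal.
    return (direct_power / (direct_power + reverb_power + 1e-12)) ** beta

def apply_mask(reverberant_stft, mask):
    # Element-wise gain: suppress bins dominated by reverberation.
    return reverberant_stft * mask

# Toy spectrogram powers: 64 frequency bins x 100 frames.
rng = np.random.default_rng(0)
direct = rng.random((64, 100))   # |direct-path component|^2
reverb = rng.random((64, 100))   # |reverberant component|^2
mask = ideal_ratio_mask(direct, reverb)
```

Larger `beta` retains less of the mixture, which is how such a parameter trades off reverberation suppression against speech distortion.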
Cochlear implant (CI) users have considerable difficulty in understanding speech in reverberant listening environments. Time-frequency (T-F) masking is a common technique that aims to improve speech intelligibility by multiplying reverberant speech by a matrix of gain values to suppress T-F bins dominated by reverberation. Recently proposed mask estimation algorithms leverage machine learning approaches to distinguish between target speech and reverberant reflections. However, the spectro-temporal structure of speech is highly variable and dependent on the underlying phoneme. One way to potentially overcome this variability is to leverage explicit knowledge of phonemic information during mask estimation. This study proposes a phoneme-based mask estimation algorithm, where separate mask estimation models are trained for each phoneme. Sentence recognition tests were conducted with normal-hearing listeners to determine whether a phoneme-based mask estimation algorithm is beneficial in the ideal scenario where perfect knowledge of the phoneme is available. The results showed that the phoneme-based masks improved the intelligibility of vocoded speech when compared to conventional phoneme-independent masks. The results suggest that a phoneme-based speech enhancement strategy may potentially benefit CI users in reverberant listening environments.
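The per-phoneme dispatch at the heart of this approach can be sketched as follows. The stand-in "models" here are trivial placeholders, not the trained estimators from the study; the structure simply shows how a frame-level phoneme label selects which estimator produces each column of the mask:

```python
import numpy as np

def phoneme_based_mask(frames, phoneme_labels, models):
    # frames: (n_freq, n_frames) magnitude spectrogram.
    # phoneme_labels: one label per frame (oracle phoneme knowledge).
    # models: maps each phoneme label to its own mask estimator.
    mask = np.empty_like(frames)
    for t, ph in enumerate(phoneme_labels):
        mask[:, t] = models[ph](frames[:, t])
    return mask

# Hypothetical stand-in estimators; real ones would be trained per phoneme.
models = {
    "aa": lambda f: np.ones_like(f),        # pass vowel frames through
    "s":  lambda f: 0.5 * np.ones_like(f),  # attenuate fricative frames
}
frames = np.random.default_rng(1).random((4, 3))
mask = phoneme_based_mask(frames, ["aa", "s", "aa"], models)
```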
Over the past year, remote speech intelligibility testing has become a popular and necessary alternative to traditional in-person experiments due to the need for physical distancing during the COVID-19 pandemic. A remote framework was developed for conducting speech intelligibility tests with normal-hearing listeners. In this study, subjects used their personal computers to complete sentence recognition tasks in anechoic and reverberant listening environments. The results obtained using this remote framework were compared with previously collected in-lab results, and showed higher levels of speech intelligibility among remote study participants than among subjects who completed the test in the laboratory.
Left ventricular assist devices (LVADs) are surgically implanted mechanical pumps that improve survival rates for individuals with advanced heart failure. While life-saving, LVAD therapy is also associated with high morbidity, which can be partially attributed to the difficulties in identifying an LVAD complication before an adverse event occurs. Methods that are currently used to monitor for complications in LVAD-supported individuals require frequent clinical assessments at specialized LVAD centers. Remote analysis of digitally recorded precordial sounds has the potential to provide an inexpensive point-of-care diagnostic tool to assess both device function and the degree of cardiac support in LVAD recipients, facilitating real-time, remote monitoring for early detection of complications. To our knowledge, prior studies of precordial sounds in LVAD-supported individuals have analyzed LVAD noise rather than intrinsic heart sounds, due to a focus on detecting pump complications, and perhaps the obscuring of heart sounds by LVAD noise. In this letter, we describe an adaptive filtering method to remove sounds generated by the LVAD, making it possible to automatically isolate and analyze underlying heart sounds. We present preliminary results describing acoustic signatures of heart sounds extracted from in vivo data obtained from LVAD-supported individuals. These findings are significant as they provide proof-of-concept evidence for further exploration of heart sound analysis in LVAD-supported individuals to identify cardiac abnormalities and changes in LVAD support.
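The abstract does not specify which adaptive filtering algorithm is used; a normalized LMS noise canceller is one standard choice and is sketched below under that assumption. A pump-correlated reference signal (e.g., derived from the known LVAD operating frequency) is adaptively filtered to predict the LVAD component of the precordial recording, and the residual approximates the underlying heart sounds:

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=8, mu=0.1):
    # Normalized LMS: adapt an FIR filter so the reference predicts
    # the pump-correlated component in the primary recording; the
    # residual (error signal) approximates the heart sounds.
    w = np.zeros(n_taps)
    residual = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # most recent samples first
        e = primary[n] - w @ x                 # cancellation error
        w += mu * e * x / (x @ x + 1e-12)      # normalized gradient step
        residual[n] = e
    return residual

# Synthetic demo: heart sounds buried under a strong periodic pump tone.
rng = np.random.default_rng(2)
n = 4000
pump = np.sin(2 * np.pi * 0.05 * np.arange(n))   # LVAD noise surrogate
heart = 0.1 * rng.standard_normal(n)             # heart-sound surrogate
primary = heart + pump
cleaned = nlms_cancel(primary, pump)
```

After convergence, the residual power approaches that of the heart-sound component alone, which is the proof-of-concept behavior the letter exploits.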
Substantial research has been devoted to the development of algorithms that automate buried threat detection (BTD) with ground penetrating radar (GPR) data, resulting in a large number of proposed algorithms. One popular algorithm for GPR-based BTD, originally applied by Torrione et al., 2012, is the Histogram of Oriented Gradients (HOG) feature. In a recent large-scale comparison among five veteran institutions, a modified version of HOG, referred to here as gprHOG, performed poorly compared to other modern algorithms. In this paper, we provide experimental evidence demonstrating that the modifications to HOG that comprise gprHOG result in a substantially better-performing algorithm. The results here, in conjunction with the large-scale algorithm comparison, suggest that HOG is not competitive with modern GPR-based BTD algorithms. Given HOG's popularity, these results raise some questions about many existing studies, and suggest that gprHOG (and especially HOG) should be employed with caution in future studies.
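For readers unfamiliar with the HOG feature itself, a minimal single-cell version is sketched below: gradient orientations are histogrammed with magnitude weighting. This omits the block normalization and cell grids of full HOG (and all gprHOG modifications), and is purely illustrative:

```python
import numpy as np

def gradient_orientation_histogram(patch, n_bins=9):
    # Minimal HOG-style descriptor for one cell: histogram of unsigned
    # gradient orientations, weighted by gradient magnitude, L2-normalized.
    gy, gx = np.gradient(patch.astype(float))    # gradients along rows, cols
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned angle in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-12)

# A patch with a purely horizontal intensity ramp: all gradient energy
# should land in the 0-radian orientation bin.
ramp = np.tile(np.arange(16.0), (16, 1))
hist = gradient_orientation_histogram(ramp)
```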
In this work we consider the application of convolutional neural networks (CNNs) for pixel-wise labeling (a.k.a., semantic segmentation) of remote sensing imagery (e.g., aerial color or hyperspectral imagery). Remote sensing imagery is usually stored in the form of very large images, referred to as tiles, which are too large to be segmented directly using most CNNs and their associated hardware. As a result, during label inference, smaller sub-images, called patches, are processed individually and then stitched (concatenated) back together to create a tile-sized label map. This approach suffers from computational inefficiency and can result in discontinuities at output boundaries. We propose a simple alternative approach in which the input size of the CNN is dramatically increased only during label inference. This does not avoid stitching altogether, but substantially mitigates its limitations. We evaluate the performance of the proposed approach against a conventional stitching approach using two popular segmentation CNN models and two large-scale remote sensing imagery datasets. The results suggest that the proposed approach substantially reduces label inference time, while also yielding modest overall label accuracy increases. This approach contributed to our winning entry (overall performance) in the INRIA building labeling competition.
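The conventional stitching step the paper improves upon can be sketched as follows: per-patch label maps are reassembled in raster order into a tile-sized map, with the patch boundaries being where the discontinuities can appear. Names and the non-overlapping grid layout are illustrative assumptions:

```python
import numpy as np

def stitch_label_map(patches, tile_shape, patch_size):
    # Conventional inference: place per-patch label maps back into a
    # tile-sized map in raster (row-major) order. Seam artifacts can
    # occur along the patch boundaries reconstructed here.
    H, W = tile_shape
    tile = np.zeros((H, W), dtype=patches[0].dtype)
    idx = 0
    for r in range(0, H, patch_size):
        for c in range(0, W, patch_size):
            tile[r:r + patch_size, c:c + patch_size] = patches[idx]
            idx += 1
    return tile

# Four 2x2 label patches reassembled into one 4x4 tile.
patches = [np.full((2, 2), i) for i in range(4)]
tile = stitch_label_map(patches, (4, 4), 2)
```

Enlarging the CNN input at inference time, as the paper proposes, simply means each "patch" covers far more of the tile, so fewer seams of this kind exist.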
We consider the problem of automatically detecting small-scale solar photovoltaic arrays for behind-the-meter energy resource assessment in high resolution aerial imagery. Such algorithms offer a faster and more cost-effective solution to collecting information on distributed solar photovoltaic (PV) arrays, such as their location, capacity, and generated energy. The surface area of PV arrays, a characteristic which can be estimated from aerial imagery, provides an important proxy for array capacity and energy generation. In this work, we employ a state-of-the-art convolutional neural network architecture, called SegNet (Badrinarayanan et al., 2015), to semantically segment (or map) PV arrays in aerial imagery. This builds on previous work focused on identifying the locations of PV arrays, as opposed to their specific shapes and sizes. We measure the ability of our SegNet implementation to estimate the surface area of PV arrays on a large, publicly available dataset that has been employed in several previous studies. The results indicate that the SegNet model yields substantial performance improvements with respect to estimating shape and size as compared to a recently proposed convolutional neural network PV detection algorithm.
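The link from a segmentation output to a surface-area estimate is straightforward and can be sketched as below; the ground sample distance value is a hypothetical parameter, not one taken from the paper:

```python
import numpy as np

def pv_surface_area(segmentation_mask, gsd_m=0.3):
    # Estimated array area in square meters: count of pixels labeled
    # PV times the squared ground sample distance (meters per pixel).
    # gsd_m = 0.3 is an illustrative assumption, not the paper's value.
    return float(segmentation_mask.sum()) * gsd_m ** 2

# A 10x10 mask with a 5x5 block of PV pixels.
mask = np.zeros((10, 10), dtype=int)
mask[:5, :5] = 1
area = pv_surface_area(mask)
```

Because area scales linearly with pixel count, segmentation errors along array boundaries translate directly into capacity-estimate errors, which is why pixel-accurate segmentation matters here.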
Brain-computer interfaces (BCIs) can provide an alternative means of communication for individuals with severe neuromuscular limitations. The P300-based BCI speller relies on eliciting and detecting transient event-related potentials (ERPs) in electroencephalography (EEG) data, in response to a user attending to rarely occurring target stimuli amongst a series of non-target stimuli. However, in most P300 speller implementations, the stimuli to be presented are randomly selected from a limited set of options, and stimulus selection and presentation are not optimized based on previous user data. In this work, we propose a data-driven method for stimulus selection based on the expected discrimination gain metric. The data-driven approach selects stimuli based on previously observed stimulus responses, with the aim of choosing a set of stimuli that will provide the most information about the user's intended target character. Our approach incorporates knowledge of physiological and system constraints imposed due to real-time BCI implementation. Simulations were performed to compare our stimulus selection approach to the row-column paradigm, the conventional stimulus selection method for P300 spellers. Results from the simulations demonstrated that our adaptive stimulus selection approach has the potential to significantly improve performance from the conventional method: up to 34% improvement in accuracy and 43% reduction in the mean number of stimulus presentations required to spell a character in a 72-character grid. In addition, our greedy approach to stimulus selection provides the flexibility to accommodate design constraints.
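The flavor of a greedy, posterior-driven stimulus choice can be sketched with a simplified proxy. The heuristic below picks the candidate flash group whose posterior mass is closest to one half, which for a binary target/non-target response is when the outcome is most informative; this is an illustrative stand-in, not the paper's expected discrimination gain computation, and ignores the physiological constraints the paper models:

```python
import numpy as np

def greedy_stimulus(posterior, candidate_stimuli):
    # posterior: current probability over grid characters.
    # candidate_stimuli: candidate sets of character indices to flash.
    # Pick the set whose total posterior mass is nearest 0.5, a proxy
    # for maximizing the information in a binary ERP response.
    masses = [posterior[list(s)].sum() for s in candidate_stimuli]
    return int(np.argmin([abs(m - 0.5) for m in masses]))

# Uniform posterior over an 8-character grid; the half-mass flash wins.
posterior = np.full(8, 1 / 8)
candidates = [(0,), (0, 1, 2, 3), (0, 1, 2, 3, 4, 5)]
best = greedy_stimulus(posterior, candidates)
```

As the posterior sharpens around a few characters after each response, the selected flash groups adapt accordingly, which is the behavior the row-column paradigm lacks.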
Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote sensing modality for buried target detection (e.g., landmines). In this context, raw FLGPR data is beamformed into images and then computerized algorithms are applied to automatically detect subsurface buried targets. Most existing algorithms are supervised, meaning they are trained to discriminate between labeled target and non-target imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, thus far it is unclear which are the most effective. The first goal of this work is to provide a comprehensive comparison of detection performance using existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing each feature is also considered. The second goal of this work is to investigate two modern feature learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that the new feature learning approaches outperform existing methods. Results also show that fusion between existing features and new features yields little additional performance improvements.
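The bag-of-visual-words encoding mentioned above can be sketched in a few lines: each local descriptor extracted from an image is assigned to its nearest codeword in a learned codebook, and the image is represented by the normalized histogram of assignments. The tiny codebook and descriptors here are toy values:

```python
import numpy as np

def bag_of_visual_words(descriptors, codebook):
    # descriptors: (n_desc, d) local features from one image.
    # codebook: (n_words, d) codewords, typically learned by k-means.
    # Assign each descriptor to its nearest codeword (squared Euclidean
    # distance) and return the normalized codeword-count histogram.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy 1-D descriptors with a two-word codebook.
codebook = np.array([[0.0], [10.0]])
descriptors = np.array([[1.0], [9.0], [11.0]])
hist = bag_of_visual_words(descriptors, codebook)
```

The Fisher vector extends this idea by encoding first- and second-order statistics of the descriptors relative to a Gaussian mixture codebook, rather than hard counts.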