Printed Texts Tracking and Following for a Finger-Wearable Electro-Braille System Through Opto-electrotactile Feedback

Published by: Mehdi Rahimi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

This paper presents our recent development of a portable and refreshable text reading and sensory substitution system for the blind or visually impaired (BVI), called Finger-eye. The system mainly consists of an opto-text processing unit and a compact electro-tactile display that delivers text-related electrical signals to the fingertip skin through a wearable, Braille-dot-patterned electrode array, thereby evoking electro-stimulation-based Braille touch sensations at the fingertip. To enable BVI users to read any text not written in Braille through this portable system, a Rapid Optical Character Recognition (R-OCR) method is first developed for real-time processing of text information captured by a fisheye imaging device mounted on the finger-wearable electro-tactile display. This allows real-time translation of printed text to electro-Braille that follows the natural movement of the user's fingertip, as if reading any Braille display or book. More importantly, an electro-tactile neuro-stimulation feedback mechanism is proposed and incorporated with the R-OCR method, yielding a new opto-electrotactile-feedback-based text-line tracking control approach that enables the user's fingertip to follow a text line while reading. Multiple experiments were designed and conducted to test the ability of blindfolded participants to read through and follow a text line using the opto-electrotactile-feedback method. The experiments show that, as a result of the opto-electrotactile feedback, users were able to keep their fingertip within 2 mm of the text while scanning a text line. This research is a significant step toward providing BVI users with a portable means of translating any printed text, whether digital or physical and on any surface, into Braille and following it while reading.
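
To make the tracking loop concrete, here is a minimal Python sketch of one decision step, assuming a hypothetical interface: tracking_step, TOLERANCE_MM, and the signal names are illustrative inventions, not the paper's actual API. The idea is that when the R-OCR stage reports the fingertip has drifted outside the 2 mm band around the text-line centerline, the display renders a directional guidance cue instead of Braille content.

    # One iteration of an opto-electrotactile tracking loop (illustrative
    # skeleton; names and signal conventions are hypothetical).

    TOLERANCE_MM = 2.0  # tracking band around the text line reported in the paper

    def tracking_step(offset_mm, recognized_text):
        """Decide what to render on the electro-tactile display.

        offset_mm: signed vertical distance from the fingertip to the detected
            text-line centerline (positive = line lies above the fingertip);
            None if no line was detected in the fisheye frame.
        recognized_text: characters recognized under the fingertip by R-OCR.
        """
        if offset_mm is None:
            return ("guidance", "search")        # no line detected: prompt scanning
        if abs(offset_mm) > TOLERANCE_MM:
            # Off the line: stimulate a directional cue to steer the fingertip
            # back toward the centerline instead of rendering Braille content.
            return ("guidance", "up" if offset_mm > 0 else "down")
        # On the line: translate the recognized text to electro-Braille patterns.
        return ("braille", recognized_text)

    # Example: the line center is 3 mm below the fingertip -> corrective cue.
    print(tracking_step(-3.0, "hello"))          # ('guidance', 'down')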

Read also

With the ubiquity of touchscreens, touch input has become a popular interaction modality. However, current touchscreen technology is limiting in its design, as it restricts touch interactions to specially instrumented touch surfaces. Surface contaminants like water can also hinder proper interaction. In this paper, we propose the use of magnetic field sensing to enable finger tracking on a surface with minimal instrumentation. Our system, MagSurface, turns everyday surfaces into a touch medium, thus allowing more flexibility in the types of touch surfaces. The evaluation of our system consists of quantifying its accuracy in locating an object on 2D flat surfaces. We test the system on three different surface materials to validate its usage scenarios. A qualitative user experience study was also conducted to gather feedback on the ease of use and comfort of the system. Localization errors as low as a few millimeters were achieved.
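
As an illustration of how magnetic sensing can localize an object on a surface, the Python sketch below fits a simplified point-dipole model (field magnitude decaying as $1/r^3$) to readings from four sensors at known positions, via nonlinear least squares. The sensor layout, field model, and synthetic readings are invented for the example; MagSurface's actual model and calibration may differ.

    # Illustrative 2D magnet localization from field-magnitude readings.
    import numpy as np
    from scipy.optimize import least_squares

    SENSORS = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])  # metres

    def model(params):
        # Simplified point-dipole magnitude |B| = k / r^3 at each sensor.
        x, y, k = params
        r = np.linalg.norm(SENSORS - np.array([x, y]), axis=1)
        return k / r**3

    def residuals(params, readings):
        return model(params) - readings

    true = np.array([0.12, 0.20, 1.0])   # ground-truth position and strength
    readings = model(true)                # noiseless synthetic measurements

    fit = least_squares(residuals, x0=[0.15, 0.15, 0.5], args=(readings,))
    print("estimated position (m):", fit.x[:2])   # recovers (0.12, 0.20)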
Nowadays it is very common to find headlines in the media stating that 3D printing is a technology called to change our lives in the near future. For many authors, we are living in the times of a third industrial revolution. However, we are currently at a stage of development where the use of 3D printing is advantageous over other manufacturing technologies only in rare scenarios. Fortunately, scientific research is one of them. Here we present the development of a set of opto-mechanical components that can be built easily using a 3D printer based on Fused Filament Fabrication (FFF) and parts that can be found in any hardware store. The components of the set presented here are highly customizable, low-cost, require a short fabrication time, and offer performance that compares favorably with low-end commercial alternatives.
Distal facial electromyography (EMG) can be used to detect smiles and frowns with reasonable accuracy. It capitalizes on volume conduction to detect relevant muscle activity even when the electrodes are not placed directly on the source muscle. The main advantage of this method is that it prevents occlusion and obstruction of facial expression production while still allowing EMG measurements. However, measuring EMG distally means that the exact source of the facial movement is unknown. We propose a novel method to estimate specific Facial Action Units (AUs) from distal facial EMG and Computer Vision (CV). This method is based on Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NNMF), and sorting of the resulting components to determine which is most likely to correspond to each CV-labeled action unit (AU). Performance on the detection of AU6 (Orbicularis Oculi) and AU12 (Zygomaticus Major) was estimated by calculating the agreement with human coders. Our proposed algorithm achieved an accuracy of 81% and a Cohen's kappa of 0.49 for AU6, and an accuracy of 82% and a Cohen's kappa of 0.53 for AU12. This demonstrates the potential of distal EMG to detect individual facial movements. Using this multimodal method, several AU synergies were identified. We quantified the co-occurrence and timing of AU6 and AU12 in posed and spontaneous smiles using the human-coded labels and, for comparison, the continuous CV labels. The co-occurrence analysis was also performed on the EMG-based labels to uncover the relationship between muscle synergies and the kinematics of visible facial movement.
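
A minimal Python sketch of this decompose-and-sort idea, using scikit-learn's FastICA and NMF: unmix the multichannel EMG into independent sources, factorize the rectified source envelopes, then assign each CV-labeled AU trace the component it correlates with most strongly. Component counts, preprocessing, and the synthetic demo data are illustrative assumptions; the paper's exact pipeline may differ.

    import numpy as np
    from sklearn.decomposition import FastICA, NMF

    def match_components_to_aus(emg, cv_au, n_components=4):
        # 1) Unmix volume-conducted EMG (samples x channels) into sources.
        sources = FastICA(n_components=n_components,
                          random_state=0).fit_transform(emg)
        # 2) NNMF needs non-negative input: use rectified source envelopes.
        envelopes = np.abs(sources)
        activations = NMF(n_components=n_components, init="nndsvda",
                          random_state=0, max_iter=500).fit_transform(envelopes)
        # 3) Sort: give each CV-labeled AU the best-correlated component.
        assignment = {}
        for j in range(cv_au.shape[1]):
            corrs = [abs(np.corrcoef(activations[:, i], cv_au[:, j])[0, 1])
                     for i in range(n_components)]
            assignment[j] = int(np.argmax(corrs))
        return assignment

    # Tiny synthetic demo: 8-channel EMG and 2 CV AU traces, 1000 samples.
    rng = np.random.default_rng(0)
    emg = rng.standard_normal((1000, 8))
    cv_au = rng.random((1000, 2))
    print(match_components_to_aus(emg, cv_au))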
We show how to apply the Leggett-Garg inequality to opto-electro-mechanical systems near their quantum ground state. We find that by using a dichotomic quantum non-demolition measurement (via, e.g., an additional circuit-QED measurement device) either on the cavity or on the nanomechanical system itself, the Leggett-Garg inequality is violated. We argue that only measurements on the mechanical system itself give a truly unambiguous violation of the Leggett-Garg inequality for the mechanical system. In this case, a violation of the Leggett-Garg inequality indicates that physics beyond macroscopic realism is occurring in the mechanical system. Finally, we discuss the difficulties in using unbounded non-dichotomic observables with the Leggett-Garg inequality.
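
For context, the quantity tested here is the standard three-time Leggett-Garg correlator for a dichotomic observable $Q(t) = \pm 1$: with two-time correlation functions $C_{ij} = \langle Q(t_i)\,Q(t_j) \rangle$, macroscopic realism together with non-invasive measurability implies

$$K_3 = C_{21} + C_{32} - C_{31} \le 1,$$

so any measured $K_3 > 1$ constitutes a violation. This is the textbook form of the inequality, not a result specific to the paper above.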
We outline the role of forward and inverse modelling approaches in the design of human-computer interaction systems. Causal, forward models tend to be easier to specify and simulate, but HCI requires solutions of the inverse problem. We infer finger 3D position $(x,y,z)$ and pose (pitch and yaw) on a mobile device using capacitive sensors which can sense the finger up to 5 cm above the screen. We use machine learning to develop data-driven models to infer position, pose, and sensor readings, based on training data from: 1. data generated by robots, 2. data from electrostatic simulators, and 3. human-generated data. Machine-learned emulation is used to accelerate the electrostatic simulation by a factor of millions. We combine a Conditional Variational Autoencoder with domain expertise/models and experimentally collected data. We compare forward and inverse model approaches to direct inference of finger pose. The combination gives the most accurate reported results for inferring 3D position and pose with a capacitive sensor on a mobile device.
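
To illustrate the emulation idea alone (not the paper's CVAE), the sketch below trains a small neural regressor on input-output pairs from a slow simulator and then queries the fast surrogate instead. slow_electrostatic_sim is a toy stand-in function and all parameter ranges are invented for illustration; they are not the paper's simulator or model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def slow_electrostatic_sim(pose):
        # Toy stand-in: maps (x, y, z, pitch, yaw) to 4 synthetic readings.
        x, y, z, pitch, yaw = pose
        return np.array([x / z, y / z, np.cos(pitch) / z**2, np.cos(yaw) / z**2])

    rng = np.random.default_rng(0)
    poses = rng.uniform([-1, -1, 0.5, -0.5, -0.5],
                        [1, 1, 5.0, 0.5, 0.5], (2000, 5))
    readings = np.array([slow_electrostatic_sim(p) for p in poses])

    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                            random_state=0)
    emulator.fit(poses, readings)          # train once on simulator data ...
    print(emulator.predict(poses[:3]))     # ... then query the fast surrogate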