
Improving Aviation Safety using Synthetic Vision System integrated with Eye-tracking Devices

 Added by Mingliang Xu
 Publication date: 2018
 Language: English





By collecting data on pilots' eyeball movements, it is possible to monitor a pilot's operations during flight in order to detect potential accidents. In this paper, we design a novel SVS that is integrated with an eye-tracking device and provides the following functions: 1) a novel method that learns from the pilot's eyeball movements and preloads or renders terrain data at various resolutions, improving the quality of the terrain display by identifying the pilot's regions of interest; 2) a warning mechanism that detects risky operations by analyzing the aviation information from the SVS together with the eyeball movements from the eye-tracking device, in order to prevent maloperation and human-factor accidents. The user study and experiments show that the proposed SVS-Eyetracking system works efficiently and is capable of avoiding potential risks caused by fatigue during flight simulation.
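The abstract does not include an implementation; the following is a minimal sketch of the gaze-driven level-of-detail idea, assuming hypothetical GazeSample and TerrainTile structures and a simple fixation-distance rule rather than the paper's actual learning method.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized screen coordinates in [0, 1]
    y: float

@dataclass
class TerrainTile:
    center_x: float  # tile center in the same normalized screen space
    center_y: float
    lod: int = 0     # 0 = coarsest resolution

def update_tile_lods(tiles, gaze_samples, max_lod=3, radius=0.15):
    """Raise the resolution of tiles near the pilot's recent fixations.

    Tiles whose centers fall within `radius` of the mean gaze point are
    promoted toward `max_lod`; all others decay toward the coarsest level,
    so detailed terrain is preloaded where the pilot is actually looking.
    """
    if not gaze_samples:
        return
    gx = sum(s.x for s in gaze_samples) / len(gaze_samples)
    gy = sum(s.y for s in gaze_samples) / len(gaze_samples)
    for tile in tiles:
        dist = math.hypot(tile.center_x - gx, tile.center_y - gy)
        if dist <= radius:
            tile.lod = min(max_lod, tile.lod + 1)
        else:
            tile.lod = max(0, tile.lod - 1)
```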



Related research

We study whether it is possible to infer if a news headline is true or false using only the movement of the human eyes when reading news headlines. Our study with 55 participants who are eye-tracked when reading 108 news headlines (72 true, 36 false) shows that false headlines receive statistically significantly less visual attention than true headlines. We further build an ensemble learner that predicts news headline factuality using only eye-tracking measurements. Our model yields a mean AUC of 0.688 and is better at detecting false than true headlines. Through a model analysis, we find that eye-tracking 25 users when reading 3-6 headlines is sufficient for our ensemble learner.
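As a rough illustration of the ensemble approach described above, the sketch below trains a soft-voting classifier on per-headline gaze measurements. The feature matrix, base models, and evaluation here are placeholder assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 6))       # placeholder: 108 headlines x 6 gaze metrics
y = np.array([1] * 72 + [0] * 36)   # 72 true, 36 false headlines, as in the study

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the base models
)

auc = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```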
We present an algorithmic and visual grouping of participants and eye-tracking metrics derived from recorded eye-tracking data. Our method utilizes two well-established visualization concepts. First, parallel coordinates are used to provide an overview of the used metrics, their interactions, and similarities, which helps select suitable metrics that describe characteristics of the eye-tracking data. Furthermore, parallel coordinates plots enable an analyst to test the effects of creating a combination of a subset of metrics resulting in a newly derived eye-tracking metric. Second, a similarity matrix visualization is used to visually represent the affine combination of metrics utilizing an algorithmic grouping of subjects that leads to distinct visual groups of similar behavior. To keep the diagrams of the matrix visualization simple and understandable, we visually encode our eye-tracking data into the cells of a similarity matrix of participants. The algorithmic grouping is performed with a clustering based on the affine combination of metrics, which is also the basis for the similarity value computation of the similarity matrix. To illustrate the usefulness of our visualization, we applied it to an eye-tracking data set involving the reading behavior of metro maps of up to 40 participants. Finally, we discuss limitations and scalability issues of the approach focusing on visual and perceptual issues.
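A minimal sketch of the metric-combination and participant-similarity idea follows, assuming a matrix of per-participant eye-tracking metrics and example affine weights; the actual weights, clustering configuration, and matrix rendering in the paper may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
metrics = rng.normal(size=(40, 4))        # placeholder: 40 participants x 4 gaze metrics
weights = np.array([0.4, 0.3, 0.2, 0.1])  # affine combination: weights sum to 1
assert np.isclose(weights.sum(), 1.0)

combined = metrics @ weights              # one derived metric value per participant

# Pairwise similarity of participants based on the combined metric,
# which would populate the cells of the similarity matrix visualization.
similarity = 1.0 / (1.0 + squareform(pdist(combined[:, None])))

# Group participants with similar combined behavior (basis for ordering the matrix).
labels = fcluster(linkage(combined[:, None], method="ward"), t=3, criterion="maxclust")
print(similarity.shape, np.bincount(labels))
```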
Situation awareness (SA) is critical to improving takeover performance during the transition period from automated driving to manual driving. Although many studies measured SA during or after the driving task, few studies have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree ensemble machine learning model, named LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand what factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, which further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, having a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a 0.719 correlation coefficient between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model provided important implications on how to monitor and predict SA in real time in automated driving using eye-tracking data.
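The abstract names LightGBM and SHAP explicitly; the sketch below shows that pipeline on placeholder data, with assumed feature dimensions and hyperparameters rather than the study's actual settings.

```python
import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1056, 8))    # placeholder: 33 participants x 32 videos, 8 gaze features
y = rng.uniform(0, 1, size=1056)  # placeholder standardized SA scores in [0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
mae = mean_absolute_error(y_test, pred)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")

# SHAP values explain which gaze features drive each SA prediction,
# and can be used for feature selection as described in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(shap_values.shape)  # (n_samples, n_features)
```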
Researchers and robotic development groups have recently started paying special attention to autonomous mobile robot navigation in indoor environments using vision sensors. A camera serves as the sensor that provides the data required for robot navigation and object detection. The aim of the project is to construct a mobile robot with an integrated vision system that uses a webcam to locate, track, and follow a moving object. To achieve this task, multiple image processing algorithms are implemented and processed in real time. A mini-laptop collects the necessary data and sends it to a PIC microcontroller, which turns the processed data into the commands that give the robot its proper orientation. A vision system can be utilized in object recognition for robot control applications. The results demonstrate that the proposed mobile robot can be successfully operated through a webcam that detects the object and distinguishes a tennis ball based on its color and shape.
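As one plausible realization of the color-and-shape detection step, the sketch below uses OpenCV with an assumed HSV range for a yellow-green tennis ball and a webcam at index 0; the project's actual thresholds, algorithms, and microcontroller protocol are not specified in the abstract.

```python
import cv2
import numpy as np

LOWER_HSV = np.array([25, 80, 80])    # assumed lower bound for tennis-ball yellow-green
UPPER_HSV = np.array([45, 255, 255])  # assumed upper bound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)          # color segmentation
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(largest)   # shape check: roughly circular blob
        if radius > 10:
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
            # The (x, y) offset from the frame center could be sent to the PIC for steering.
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```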
Technical progress in hardware and software enables us to record gaze data in everyday situations and over long time spans. Among a multitude of research opportunities, this technology enables visualization researchers to catch a glimpse behind performance measures and into the perceptual and cognitive processes of people using visualization techniques. The majority of eye tracking studies performed for visualization research is limited to the analysis of gaze distributions and aggregated statistics, thus only covering a small portion of insights that can be derived from gaze data. We argue that incorporating theories and methodology from psychology and cognitive science will benefit the design and evaluation of eye tracking experiments for visualization. This position paper outlines our experiences with eye tracking in visualization and states the benefits that an interdisciplinary research field on visualization psychology might bring for better understanding how people interpret visualizations.