Mackenzie G Glaholt; Grace Sim
In: Journal of Imaging Science and Technology, vol. 61, no. 1, pp. 230–235, 2017.
We investigated gaze-contingent fusion of infrared imagery during visual search. Eye movements were monitored while subjects searched for and identified human targets in images captured simultaneously in the short-wave (SWIR) and long-wave (LWIR) infrared bands. Based on the subject's gaze position, the search display was updated such that imagery from one sensor was continuously presented to the subject's central visual field (“center”) and the other sensor was presented to the subject's non-central visual field (“surround”). Analysis of performance data indicated that, compared to the other combinations, the scheme featuring SWIR imagery in the center region and LWIR imagery in the surround region constituted an optimal combination of the SWIR and LWIR information: it inherited the superior target detection performance of LWIR imagery and the superior target identification performance of SWIR imagery. This demonstrates a novel method for efficiently combining imagery from two infrared sources as an alternative to conventional image fusion.
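The gaze-contingent center/surround compositing scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, toy image sizes, and pixel values are all hypothetical:

```python
import math

def fuse_center_surround(swir, lwir, gaze_xy, radius):
    """Compose a display frame: SWIR pixels inside a circular window
    centred on the current gaze position, LWIR pixels everywhere else.
    swir and lwir are equal-sized 2-D lists of grayscale values."""
    gx, gy = gaze_xy
    frame = []
    for y, (row_s, row_l) in enumerate(zip(swir, lwir)):
        out_row = []
        for x, (s, l) in enumerate(zip(row_s, row_l)):
            inside = math.hypot(x - gx, y - gy) <= radius
            out_row.append(s if inside else l)
        frame.append(out_row)
    return frame

# Toy 4x4 frames: SWIR pixels are 1, LWIR pixels are 0;
# gaze at (1, 1) with a 1-pixel "center" radius.
swir = [[1] * 4 for _ in range(4)]
lwir = [[0] * 4 for _ in range(4)]
frame = fuse_center_surround(swir, lwir, (1, 1), 1.0)
```

A real display would recompute this composite on every gaze sample and would likely blend the two sources smoothly at the boundary rather than switching per pixel.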
Hayward J Godwin; Stuart Hyde; Dominic Taunton; James Calver; James I R Blake; Simon P Liversedge
The influence of expertise on maritime driving behaviour
In: Applied Cognitive Psychology, vol. 27, no. 4, pp. 483–492, 2013.
We compared expert and novice behaviour in a group of participants as they engaged in a simulated maritime driving task. We varied the difficulty of the driving task by controlling the severity of the sea state in which they were driving their craft. Increases in sea severity increased the size of the upcoming waves while also increasing the length of the waves. Expert participants drove their craft at a higher speed than novices and decreased their fixation durations as wave severity increased. Furthermore, the expert participants increased the horizontal spread of their fixation positions as wave severity increased to a greater degree than novices. Conversely, novice participants showed evidence of a greater vertical spread of fixations than experts. By connecting our findings with previous research investigating eye movement behaviour and road driving, we suggest that novice or inexperienced drivers show inflexibility in adaptation to changing driving conditions.
Hayward J Godwin; Simon P Liversedge; Julie A Kirkby; Michael Boardman; Katherine Cornes; Nick Donnelly
In: Visual Cognition, vol. 23, no. 4, pp. 415–431, 2015.
We examined the influence of experience upon information-sampling and decision-making behaviour in a group of military personnel as they conducted risk assessments of scenes photographed from patrol routes during the recent conflict in Afghanistan. Their risk assessment was based on an evaluation of Potential Risk Indicators (PRIs) during examination of each scene. We found that experienced and inexperienced participants were equally likely to fixate PRIs, demonstrating similarity in the selectivity of their information-sampling. However, the inexperienced participants made more revisits to PRIs, had longer response times, and were more likely to decide that the scenes contained a high level of risk. Together, these results suggest that experience primarily modulates decision-making behaviour. We discuss potential routes to train personnel to conduct risk assessments in a manner more similar to that of experienced participants.
Alexander Goettker; Kevin J MacKenzie; Scott T Murdison
In: Journal of the Society for Information Display, vol. 28, no. 6, pp. 509–519, 2020.
We used perceptual and oculomotor measures to understand the negative impacts of low (phantom array) and high (motion blur) duty cycles with a high-speed, AR-like head-mounted display prototype. We observed large intersubject variability for the detection of phantom array artifacts but a highly consistent and systematic effect on saccadic eye movement targeting during low duty cycle presentations. This adverse effect on saccade endpoints was also related to an increased error rate in a perceptual discrimination task, showing a direct effect of display duty cycle on perceptual quality. For high duty cycles, the probability of detecting motion blur increased during head movements, and this effect was elevated at lower refresh rates. We did not find an impact of the temporal display characteristics on compensatory eye movements during head motion (e.g., VOR). Together, our results allow us to quantify the tradeoff of different negative spatiotemporal impacts of user movements and make subsequent recommendations for optimized temporal HMD parameters.
Andrea Grant; Gregory J Metzger; Pierre François Van de Moortele; Gregor Adriany; Cheryl Olman; Lin Zhang; Joseph Koopermeiners; Yiğitcan Eryaman; Margaret Koeritzer; Meredith E Adams; Thomas R Henry; Kamil Uğurbil
In: Magnetic Resonance Imaging, vol. 73, pp. 163–176, 2020.
Purpose: To perform a pilot study to quantitatively assess cognitive, vestibular, and physiological function during and after exposure to a magnetic resonance imaging (MRI) system with a static field strength of 10.5 Tesla at multiple time scales. Methods: A total of 29 subjects were exposed to a 10.5 T MRI field and underwent vestibular, cognitive, and physiological testing before, during, and after exposure; for 26 subjects, testing and exposure were repeated within 2–4 weeks of the first visit. Subjects also reported sensory perceptions after each exposure. Comparisons were made between short and long term time points in the study with respect to the parameters measured in the study; short term comparison included pre-vs-isocenter and pre-vs-post (1–24 h), while long term compared pre-exposures 2–4 weeks apart. Results: Of the 79 comparisons, 73 parameters were unchanged or had small improvements after magnet exposure. The exceptions to this included lower scores on short term (i.e. same day) executive function testing, greater isocenter spontaneous eye movement during visit 1 (relative to pre-exposure), increased number of abnormalities on videonystagmography visit 2 versus visit 1 and a mix of small increases (short term visit 2) and decreases (short term visit 1) in blood pressure. In addition, more subjects reported metallic taste at 10.5 T in comparison to similar data obtained in previous studies at 7 T and 9.4 T. Conclusion: Initial results of 10.5 T static field exposure indicate that 1) cognitive performance is not compromised at isocenter, 2) subjects experience increased eye movement at isocenter, and 3) subjects experience small changes in vital signs but no field-induced increase in blood pressure. While small but significant differences were found in some comparisons, none were identified as compromising subject safety. 
A modified testing protocol informed by these results was devised with the goal of permitting increased enrollment while providing continued monitoring to evaluate field effects.
Elise Grison; Valérie Gyselinck; Jean Marie Burkhardt; Jan M Wiener
In: Psychological Research, vol. 81, no. 5, pp. 1020–1034, 2017.
Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate the psychological processes and mechanisms involved in such route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination phase, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once, and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending to the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to current theories of route planning.
Michael W von Grünau; Kamala Pilgrim; Rong Zhou
In: Vision Research, vol. 47, no. 18, pp. 2453–2464, 2007.
The visual flow field, produced by forward locomotion, contains useful information about many aspects of visually guided behavior. But locomotion itself also contributes to possible distortions by adding head bobbing motions. Here we examine whether vertical head bobbing affects velocity discrimination thresholds and how the system may compensate for the distortions. Vertical head and eye movements while fixating were recorded during standing, walking or running on a treadmill. Bobbing noise was found to be larger during locomotion. The same observers were equally good at discriminating velocity increases in large accelerating flow fields when standing, walking or running. Simulated head bobbing was compensated for when produced by pursuit eye movements, but not when it was part of the flow field. The results showed that these two contributions are additive and are dealt with independently before they are combined. Distortions produced by body/head oscillations may also be compensated for. Visual performance during running was at least as good as during walking, suggesting more efficient compensation mechanisms for running.
Katja I Häuser; Vera Demberg; Jutta Kray
In: Psychology and Aging, vol. 33, no. 8, pp. 1168–1180, 2018.
Even though older adults are known to have difficulty with language processing when a secondary task has to be performed simultaneously, few studies have addressed how older adults process language under dual-task demands when linguistic load is systematically varied. Here, we manipulated surprisal, an information-theoretic measure that quantifies the amount of new information conveyed by a word, to investigate how linguistic load affects younger and older adults during early and late stages of sentence processing under conditions when attention is split between two tasks. In high-surprisal sentences, target words were implausible and mismatched with semantic expectancies based on context, thereby causing integration difficulty. Participants performed semantic meaningfulness judgments on sentences that were presented in isolation (single task) or while performing a secondary tracking task (dual task). Cognitive load was measured by means of pupillometry. Mixed-effects models were fit to the data, showing the following: (a) During the dual task, younger but not older adults demonstrated early sensitivity to surprisal (higher levels of cognitive load, indexed by pupil size) as sentences were heard online; (b) Older adults showed no immediate reaction to surprisal, but a delayed response, where their meaningfulness judgments to high-surprisal words remained stable in accuracy, while secondary tracking performance declined. Findings are discussed in relation to age-related trade-offs in dual tasking and differences in the allocation of attentional resources during language processing. Collectively, our data show that higher linguistic load leads to task trade-offs in older adults and differently affects the time course of online language processing in aging.
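Surprisal, the load measure manipulated in this study, is defined in information theory as the negative log probability of a word given its context: S(w) = -log2 P(w | context). A minimal sketch of the computation; the probability table here is invented purely for illustration:

```python
import math

def surprisal(prob):
    """Surprisal in bits: the amount of new information carried by a
    word whose conditional probability given the context is `prob`."""
    return -math.log2(prob)

# Hypothetical next-word probabilities after "She spread the bread with ..."
p_next = {"butter": 0.5, "jam": 0.25, "socks": 0.03125}

low = surprisal(p_next["butter"])   # expected word  -> low surprisal (1 bit)
high = surprisal(p_next["socks"])   # implausible word -> high surprisal (5 bits)
```

In practice the conditional probabilities come from a language model rather than a hand-written table, but the mapping from probability to bits is exactly this.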
Ziad M Hafed; Katarina Stingl; Karl Ulrich Bartz-Schmidt; Florian Gekeler; Eberhart Zrenner
In: Vision Research, vol. 118, pp. 119–131, 2016.
Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ~1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life.
Kai Christoph Hamborg; M Bruns; F Ollermann; Kai Kaspar
In: Computers in Human Behavior, vol. 28, no. 2, pp. 576–582, 2012.
Previous findings suggested that banner ads affect perceptual behavior and memory performance only in browsing tasks, and have little or no impact in search tasks. This assumption is not supported by the present eye-tracking study, which investigates whether task-related selective attention is disrupted depending on the animation intensity of banner ads when users are in a search mode, as well as the impact of banner animation on perceptual and memory performance. We find that fixation frequency on banners increases with animation intensity. Moreover, a specific temporal course of fixation frequency on banners could be observed. However, the duration of fixations on a banner is independent of its animation intensity. Results also reveal that animation enhances the recall performance of banner content. The subject of the advertisement, the position of the banner, as well as writings and colors are recalled better when the banner is animated, in contrast to a non-animated banner, whereby the animation intensity has no impact on banner-related recall performance. Importantly, performance in the actual information search task is not affected by banner animation. Moreover, animation intensity does not affect subjects' attitude towards the banner ad.
David J Hancock; Diane M Ste-Marie
In: Psychology of Sport and Exercise, vol. 14, no. 1, pp. 66–71, 2013.
Background: Gaze behaviors are often studied in athletes, but infrequently for sport officials. There is a need to better understand gaze behavior in refereeing in order to improve training and education related to visual search patterns, which have been argued to be related to decision making (Abernethy & Russell, 1987a). Objective: To examine gaze behaviors, decision accuracy, and decision sensitivity (using signal detection analysis) of ice hockey referees of varying skill levels in a laboratory setting. Design: Using an experimental design, we conducted multiple t-tests. Method: Higher-level (N = 15) and lower-level ice hockey referees (N = 15) wore a head-mounted eye movement recorder and made penalty/no penalty decisions related to ice hockey video clips on a computer screen. We recorded gaze behaviors, decision accuracy, and decision sensitivity for each participant. Results: Results of the t-tests indicated no group differences in gaze behaviors; however, higher-level referees made significantly more accurate decisions (both accuracy and sensitivity) than lower-level referees. Conclusion: Higher-level ice hockey referees are superior to lower-level referees on decision making, but referees do not differ on gaze behaviors. Possibly, higher-level referees process relevant decision making information more effectively.
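The decision-sensitivity measure used in the Hancock and Ste-Marie study comes from signal detection theory: d' is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with hypothetical penalty-call counts (not data from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one referee judging penalty/no-penalty clips:
# 40 correct penalty calls, 10 missed penalties,
# 10 false alarms, 40 correct rejections.
dp = d_prime(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

With these counts the hit rate is 0.8 and the false-alarm rate 0.2, giving a d' of roughly 1.68; a referee responding at chance would score d' = 0. Applied work usually adds a correction (e.g., log-linear) for hit or false-alarm rates of exactly 0 or 1, which this sketch omits.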
Jessica Hanley; David E Warren; Natalie Glass; Daniel Tranel; Matthew Karam; Joseph Buckwalter
In: The Iowa Orthopaedic Journal, vol. 37, pp. 225–231, 2017.
BACKGROUND: Despite the importance of radiographic interpretation in orthopaedics, there is not a clear understanding of the specific visual strategies used while analyzing a plain film. Eyetracking technology allows for the objective study of eye movements while performing a dynamic task, such as reading X-rays. Our study looks to elucidate objective differences in image interpretation between novice and experienced orthopaedic trainees using this novel technology. METHODS: Novice and experienced orthopaedic trainees (N=23) were asked to interpret AP pelvis films, searching for unilateral acetabular fractures while eye movements were assessed for pattern of gaze, fixation on regions of interest, and time of fixation at regions of interest. Participants were asked to label radiographs as "fractured" or "not fractured." If "fractured", the participant was asked to determine the fracture pattern. A control condition employed Ekman faces, and participants judged gender and facial emotion. Data were analyzed for variation in eye movements between participants, accuracy of responses, and response time. RESULTS: Accuracy: There was no significant difference by level of training for accurately identifying fracture images (p=0.3255). There was a significant association between higher level of training and correctly identifying non-fractured images (p=0.0155); greater training was also associated with more success in identifying the correct Judet-Letournel classification (p=0.0029). Response Time: Greater training was associated with faster response times (p=0.0009 for fracture images and 0.0012 for non-fractured images). Fixation Duration: There was no correlation of average fixation duration with experience (p=0.9632). Regions of Interest (ROIs): More experience was associated with an average of two fewer fixated ROIs (p=0.0047). Number of Fixations: Increased experience was associated with fewer fixations overall (p=0.0007). 
CONCLUSIONS: Experience has a significant impact on both accuracy and efficiency in interpreting plain films. Greater training is associated with a shift toward a more efficient and thorough assessment of plain radiographs. Eyetracking is a useful descriptive tool in the setting of plain film interpretation. CLINICAL RELEVANCE: We propose further assessment of eye movements in larger populations of orthopaedic surgeons, including staff orthopaedists. Describing the differences between novice and expert interpretation may provide insight into ways to accelerate the learning process in young orthopaedists.
Agnes Hardardottir; Mohammed Al-Hamdani; Raymond Klein; Austin Hurst; Sherry H Stewart
In: Nicotine & Tobacco Research, vol. 22, no. 10, pp. 1788–1794, 2020.
INTRODUCTION: The social and health care costs of smoking are immense. To reduce these costs, several tobacco control policies have been introduced (eg, graphic health warnings [GHWs] on cigarette packs). Previous research has found plain packaging (a homogenized form of packaging), in comparison to branded packaging, effectively increases attention to GHWs using UK packaging prototypes. Past studies have also found that illness sensitivity (IS) protects against health-impairing behaviors. Building on this evidence, the goal of the current study was to assess the effect of packaging type (plain vs. branded), IS level, and their interaction on attention to GHWs on cigarette packages using proposed Canadian prototypes. AIMS AND METHODS: We assessed the dwell time and fixations on the GHW component of 40 cigarette pack stimuli (20 branded; 20 plain). Stimuli were presented in random order to 50 smokers (60.8% male; mean age = 33.1; 92.2% daily smokers) using the EyeLink 1000 system. Participants were divided into low IS (n = 25) and high IS (n = 25) groups based on scores on the Illness Sensitivity Index. RESULTS: Overall, plain packaging relative to branded packaging increased fixations (but not dwell time) on GHWs. Moreover, low IS (but not high IS) smokers showed more fixations to GHWs on plain versus branded packages. CONCLUSIONS: These findings demonstrate that plain packaging is a promising intervention for daily smokers, particularly those low in IS, and contribute evidence in support of impending implementation of plain packaging in Canada. IMPLICATIONS: Our findings have three important implications. First, our study provides controlled experimental evidence that plain packaging is a promising intervention for daily smokers. Second, the findings of this study contribute supportive evidence for the impending plain packaging policy in Canada, and can therefore aid in defense against anticipated challenges from the tobacco industry upon its implementation. 
Third, given its effects in increasing attention to GHWs, plain packaging is an intervention likely to provide smokers enhanced incentive for smoking cessation, particularly among those low in IS who may otherwise be less interested in seeking treatment for tobacco dependence.
Alistair J Harvey; Wendy Kneller; Alison C Campbell
In: Memory, vol. 21, no. 8, pp. 969–980, 2013.
This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.
Hannah Harvey; Stephen J Anderson; Robin Walker
In: Optometry and Vision Science, vol. 96, no. 8, pp. 609–616, 2019.
SIGNIFICANCE: Scrolling text can be an effective reading aid for those with central vision loss. Our results suggest that increased interword spacing with scrolling text may further improve the reading experience of this population. This conclusion may be of particular interest to low-vision aid developers and visual rehabilitation practitioners. PURPOSE: The dynamic, horizontally scrolling text format has been shown to improve reading performance in individuals with central visual loss. Here, we sought to determine whether reading performance with scrolling text can be further improved by modulating interword spacing to reduce the effects of visual crowding, a factor known to impact negatively on reading with peripheral vision. METHODS: The effects of interword spacing on reading performance (accuracy, memory recall, and speed) were assessed for eccentrically viewed single sentences of scrolling text. Separate experiments were used to determine whether performance measures were affected by any confound between interword spacing and text presentation rate in words per minute. Normally sighted participants were included, with a central vision loss implemented using a gaze-contingent scotoma of 8° diameter. In both experiments, participants read sentences that were presented with an interword spacing of one, two, or three characters. RESULTS: Reading accuracy and memory recall were significantly enhanced with triple-character interword spacing (both measures, P ≤.01). These basic findings were independent of the text presentation rate (in words per minute). CONCLUSIONS: We attribute the improvements in reading performance with increased interword spacing to a reduction in the deleterious effects of visual crowding. We conclude that increased interword spacing may enhance reading experience and ability when using horizontally scrolling text with a central vision loss.
Lisena Hasanaj; Sujata P Thawani; Nikki Webb; Julia D Drattell; Liliana Serrano; Rachel C Nolan; Jenelle Raynowska; Todd E Hudson; John-Ross Rizzo; Weiwei Dai; Bryan McComb; Judith D Goldberg; Janet C Rucker; Steven L Galetta; Laura J Balcer
In: Journal of Neuro-Ophthalmology, vol. 38, no. 1, pp. 24–29, 2018.
Objective: We determined the relation of rapid number naming time scores on the King-Devick (K-D) test to video-oculographic eye movement performance during pre-season baseline assessments in a collegiate ice hockey team cohort. Background: The K-D test is a reliable visual performance measure that is a sensitive sideline indicator of concussion when time scores worsen (lengthen) from pre-season baseline. Methods: Athletes from a collegiate ice hockey team received pre-season baseline testing as part of an ongoing study of rapid sideline/rinkside performance measures for concussion. These included the K-D test (spiral-bound cards and tablet computer versions). Participants also performed a laboratory-based version of the K-D test with simultaneous infrared-based video-oculographic recordings using an EyeLink 1000+. This allowed measurement of temporal and spatial characteristics of eye movements, including saccade velocity, duration and inter-saccadic intervals. Results: Among 13 male athletes, aged 18 to 23 years (mean 20.5+/-1.6 years), prolongation of the inter-saccadic interval (ISI, a combined measure of saccade latency and fixation duration) was the eye movement measure most associated with slower baseline K-D scores (mean 38.2+/-6.2 seconds
Sogand Hasanzadeh; Behzad Esmaeili; Michael D Dodd
In: Journal of Management in Engineering, vol. 33, no. 5, pp. 1–17, 2017.
Although several studies have highlighted the importance of attention in reducing the number of injuries in the construction industry, few have attempted to empirically measure the attention of construction workers. One technique that can be used to measure worker attention is eye tracking, which is widely accepted as the most direct and continuous measure of attention because where one looks is highly correlated with where one is focusing his or her attention. Thus, with the fundamental objective of measuring the impacts of safety knowledge (specifically, training, work experience, and injury exposure) on construction workers' attentional allocation, this study demonstrates the application of eye tracking to the realm of construction safety practices. To achieve this objective, a laboratory experiment was designed in which participants identified safety hazards presented in 35 construction site images ordered randomly, each of which showed multiple hazards varying in safety risk. During the experiment, the eye movements of 27 construction workers were recorded using a head-mounted EyeLink II system. The impact of worker safety knowledge in terms of training, work experience, and injury exposure (independent variables) on eye-tracking metrics (dependent variables) was then assessed by implementing numerous permutation simulations. The results show that tacit safety knowledge acquired from work experience and injury exposure can significantly improve construction workers' hazard detection and visual search strategies. 
The results also demonstrate that (1) there is minimal difference, with or without the Occupational Safety and Health Administration 10-h certificate, in workers' search strategies and attentional patterns while exposed to or seeing hazardous situations; (2) relative to less experienced workers (<5 years), more experienced workers (>10 years) need less processing time and deploy more frequent short fixations on hazardous areas to maintain situational awareness of the environment; and (3) injury exposure significantly impacts a worker's visual search strategy and attentional allocation. In sum, practical safety knowledge and judgment on a jobsite requires the interaction of both tacit and explicit knowledge gained through work experience, injury exposure, and interactive safety training. This study significantly contributes to the literature by demonstrating the potential application of eye-tracking technology in studying the attentional allocation of construction workers. Regarding practice, the results of the study show that eye tracking can be used to improve worker training and preparedness, which will yield safer working conditions, detect at-risk workers, and improve the effectiveness of safety-training programs.
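The permutation simulations this study used to compare groups can be sketched as follows: the group labels are repeatedly shuffled, and the observed difference in an eye-tracking metric is compared against the shuffled distribution. All data values below are hypothetical, not from the study:

```python
import random

def permutation_p(group_a, group_b, n_perm=10000, seed=1):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm  # fraction of shuffles at least as extreme

# Hypothetical dwell-time percentages on hazards: trained vs. untrained workers
trained = [62, 58, 65, 70, 61, 66]
untrained = [48, 52, 45, 50, 55, 47]
p = permutation_p(trained, untrained)
```

Because the two toy groups barely overlap, very few label shuffles produce a mean difference as large as the observed one, so the resulting p value is small.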
Sogand Hasanzadeh; Behzad Esmaeili; Michael D Dodd
In: Journal of Construction Engineering and Management, vol. 143, no. 10, pp. 1–16, 2017.
Eye-movement metrics have been shown to correlate with attention and, therefore, represent a means of identifying and analyzing an individual's cognitive processes. Human errors--such as failure to identify a hazard--are often attributed to a worker's lack of attention. Piecemeal attempts have been made to investigate the potential of harnessing eye movements as predictors of human error (e.g., failure to identify a hazard) in the construction industry, although more attempts have investigated human error via subjective measurements. To address this knowledge gap, the present study harnessed eye-tracking technology to evaluate the impacts of workers' hazard-identification skills on their attentional distributions and visual search strategies. To achieve this objective, an experiment was designed in which the eye movements of 31 construction workers were tracked while they searched for hazards in 35 randomly ordered construction scenario images. Workers were then divided into three groups on the basis of their hazard identification performance. Three fixation-related metrics--fixation count, dwell-time percentage, and run count--were analyzed during the eye-tracking experiment for each group (low, medium, and high hazard-identification skills) across various types of hazards. Then, multivariate ANOVA (MANOVA) was used to evaluate the impact of workers' hazard-identification skills on their visual attention. To further investigate the effect of hazard identification skills on the dependent variables (eye movement metrics), two distinct processes followed: separate ANOVAs on each of the dependent variables, and a discriminant function analysis. 
The analyses indicated that hazard identification skills significantly impact workers' visual search strategies: workers with higher hazard-identification skills had lower dwell-time percentages on ladder-related hazards; higher fixation counts on fall-to-lower-level hazards; and higher fixation counts and run counts on fall-protection systems, struck-by, housekeeping, and all hazardous areas combined. Among the eye-movement metrics studied, fixation count had the largest standardized coefficient in all canonical discriminant functions, which implies that this eye-movement metric uniquely discriminates workers with high hazard-identification skills from at-risk workers. Because discriminant function analysis is similar to regression, a discriminant function (a linear combination of eye-movement metrics) can be used to predict workers' hazard-identification capabilities. In conclusion, this study provides a proof of concept that certain eye-movement metrics are predictive indicators of human error due to attentional failure. These outcomes stemmed from a laboratory setting, and, foreseeably, safety managers in the future will be able to use these findings to identify at-risk construction workers, pinpoint required safety training, measure training effectiveness, and eventually improve future personal protective equipment to measure construction workers' situation awareness in real time.
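Discriminant function analysis, as used in this study, finds a linear combination of eye-movement metrics that best separates groups. A minimal two-class, two-feature Fisher discriminant sketch; all worker data are hypothetical, and the study itself used three groups and more metrics:

```python
def mean(rows):
    """Per-feature mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_direction(class_a, class_b):
    """Fisher discriminant direction w = Sw^-1 (mA - mB) for two features."""
    m_a, m_b = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix Sw (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, m_a), (class_b, m_b)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
    # Invert the 2x2 scatter matrix and apply it to the mean difference
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [m_a[0] - m_b[0], m_a[1] - m_b[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Hypothetical (fixation count, dwell-time %) per worker
high_skill = [[30, 20], [34, 22], [28, 19], [33, 23]]
low_skill = [[18, 30], [22, 33], [20, 29], [16, 31]]
w = fisher_direction(high_skill, low_skill)

def score(x):
    """Project a worker's metrics onto the discriminant axis."""
    return w[0] * x[0] + w[1] * x[1]
```

A new worker's metrics can then be projected with `score` and compared against a threshold between the two groups' projected means, which is the sense in which the abstract says the discriminant function can "predict" hazard-identification capability.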
Sogand Hasanzadeh; Bac Dao; Behzad Esmaeili; Michael D Dodd
In: Journal of Construction Engineering and Management, vol. 145, no. 9, pp. 1–14, 2019.
Workers' attentional failures or inattention toward detecting a hazard can lead to inappropriate decisions and unsafe behaviors. Previous research has shown that individual characteristics such as past injury exposure contribute greatly to skill-based errors (e.g., attention failure), perception-based errors (e.g., failure to identify and misperception), and subsequent accident involvement. However, little research has empirically examined how a worker's personality affects his or her attention and hazard identification. This study addresses this knowledge gap by exploring the impact of personality dimensions on the selective attention of workers exposed to fall hazards. To this end, construction workers were recruited to engage in a laboratory eye-tracking experiment that consisted of 115 potential and active fall scenarios in 35 construction images captured from actual projects within the United States. Construction workers' personalities were assessed through self-completion of the Big Five personality questionnaire, and their visual attention was monitored continuously using a wearable eye-tracking apparatus. The results show that workers' personality dimensions (specifically extraversion, conscientiousness, and openness to experience) significantly relate to and affect the attentional allocation and search strategies of workers exposed to fall hazards. A more detailed investigation of this connection showed that individuals who are introverted, more conscientious, or more open to experience are less prone to injury and return their attention more frequently to hazardous areas. This study is the first attempt to illustrate how examining relationships among personality, attention, and hazard identification can reveal opportunities for the early detection of at-risk workers who are more likely to be involved in accidents.
A better understanding of these connections provides valuable insight into both practice and theory regarding the transformation of current training and educational practices by providing appropriate intervention strategies for personalized safety guidelines and effective training materials to transform personality-driven at-risk workers into safer workers.
Claire Louise Heard; Tim Rakow; Tom Foulsham
In: Medical Decision Making, vol. 38, no. 6, pp. 646–657, 2018.
Background. Past research finds that treatment evaluations are more negative when risks are presented after benefits. This study investigates this order effect by manipulating the tabular orientation and order of risk–benefit information, and examining information search order and gaze duration via eye tracking. Design. 108 (Study 1) and 44 (Study 2) participants viewed information about treatment risks and benefits in either a horizontal (left-right) or vertical (above-below) orientation, with the benefits or risks presented first (on the left or at the top). For 4 scenarios, participants answered 6 treatment evaluation questions (1–7 scales) that were combined into overall evaluation scores. In addition, Study 2 collected eye-tracking data during the benefit–risk presentation. Results. Participants tended to read one set of information (i.e., all risks or all benefits) before transitioning to the other. Analysis of the order of fixations showed this tendency was stronger in the vertical (standardized mean rank difference further from 0
Marti Hearst; Emily Pedersen; Lekha Priya Patil; Elsie Lee; Paul Laskowski; Steven L Franconeri
An evaluation of semantically grouped word cloud designs Journal Article
In: IEEE Transactions on Visualization and Computer Graphics, pp. 1–1, 2019.
Word clouds continue to be a popular tool for summarizing textual information, despite their well-documented deficiencies for analytic tasks. Much of their popularity rests on their playful visual appeal. In this paper, we present the results of a series of controlled experiments that show that layouts in which words are arranged into semantically and visually distinct zones are more effective for understanding the underlying topics than standard word cloud layouts. White space separators and/or spatially grouped color coding led to significantly stronger understanding of the underlying topics compared to a standard Wordle layout, while simultaneously scoring higher on measures of aesthetic appeal. This work is an advance on prior research on semantic layouts for word clouds because that prior work has either not ensured that the different semantic groupings are visually or semantically distinct, or has not performed usability studies. An additional contribution of this work is the development of a dataset for a semantic category identification task that can be used for replication of these results or future evaluations of word cloud designs.
Matthew Heath; Erin M Shellington; Sam Titheridge; Dawn P Gill; Robert J Petrella
In: Journal of Alzheimer's Disease, vol. 56, no. 1, pp. 167–183, 2017.
Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk of, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., an eye movement mirror-symmetrical to a target). The antisaccade task is an ideal tool for the study of individuals with subtle executive deficits because of its hands- and language-free nature and because the task's neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention, and the magnitude of the decrease was consistent across groups. Thus, multiple-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC.
Kristin J Heaton; Alexis L Maule; Jun Maruta; Elisabeth M Kryskow; Jamshid Ghajar
In: Aviation Space and Environmental Medicine, vol. 85, no. 5, pp. 497–503, 2014.
Background: Fatigue due to sleep restriction places individuals at elevated risk for accidents, degraded health, and impaired physical and mental performance. Early detection of fatigue-related performance decrements is an important component of injury prevention and can help to ensure optimal performance and mission readiness. This study used a predictive visual tracking task and a computer-based measure of attention to characterize fatigue-related attention decrements in healthy Army personnel during acute sleep deprivation. Methods: Serving as subjects in this laboratory-based study were 87 male and female service members between the ages of 18 and 50 with no history of brain injury with loss of consciousness, substance abuse, or significant psychiatric or neurologic diagnoses. Subjects underwent 26 h of sleep deprivation, during which eye movement measures from a continuous circular visual tracking task and attention measures (reaction time, accuracy) from the Attention Network Test (ANT) were collected at baseline, 20 h awake, and between 24 to 26 h awake. Results: Increases in the variability of gaze positional errors (46-47%), as well as reaction time-based ANT measures (9-65%), were observed across 26 h of sleep deprivation. Accuracy of ANT responses declined across this same period (11%). Discussion: Performance measures of predictive visual tracking accurately reflect impaired attention due to acute sleep deprivation and provide a promising approach for assessing readiness in personnel serving in diverse occupational areas, including flight and ground support crews.
Claudia R Hebert; Li Z Sha; Roger W Remington; Yuhong V Jiang
Redundancy gain in visual search of simulated X-ray images Journal Article
In: Attention, Perception, and Psychophysics, vol. 82, no. 4, pp. 1669–1681, 2020.
Cancer diagnosis frequently relies on the interpretation of medical images such as chest X-rays and mammography. This process is error prone; misdiagnoses can reach a rate of 15% or higher. Of particular interest are false negatives—tumors that are present but missed. Previous research has identified several perceptual and attentional problems underlying inaccurate perception of these images. But how might these problems be reduced? The psychological literature has shown that presenting multiple, duplicate images can improve performance. Here we explored whether redundant image presentation can improve target detection in simulated X-ray images, by presenting four identical or similar images concurrently. Displays with redundant images, including duplicates of the same image, showed reduced false-negative rates, compared with displays with a single image. This effect held both when the target's prevalence rate was high and when it was low. Eye tracking showed that fixating on two or more images in the redundant condition speeded target detection and prolonged search, and that the latter effect was the key to reducing false negatives. The redundancy gain may result from both perceptual enhancement and an increase in the search quitting threshold.
Mary Hegarty; Matt S Canham; Sara I Fabrikant
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 36, no. 1, pp. 37–53, 2010.
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location.
Mary Hegarty; Harvey S Smallman; Andrew T Stull
In: Journal of Experimental Psychology: Applied, vol. 18, no. 1, pp. 1–17, 2012.
Interactive display systems give users flexibility to tailor their visual displays to different tasks and situations. However, in order for such flexibility to be beneficial, users need to understand how to tailor displays to different tasks (to possess “metarepresentational competence”). Recent research suggests that people may desire more complex and realistic displays than are most effective (Smallman & St. John, 2005). In Experiment 1, undergraduate students were tested on a comprehension task with geospatial displays (weather maps) that varied in the number of extraneous variables displayed. Their metacognitive judgments about the relative effectiveness of the displays were also solicited. Extraneous variables slowed response time and increased errors, but participants favored complex maps that looked more realistic about one third of the time. In Experiment 2, the eye fixations of undergraduate students were monitored as they performed the comprehension task. Complex maps that looked more realistic led to more eye fixations on both task-relevant and task-irrelevant regions of the displays. Experiment 3 compared performance of experienced meteorologists and undergraduate students on the comprehension and metacognitive tasks. Meteorologists were as likely as undergraduate students to prefer geographically complex (realistic) displays and more likely than undergraduates to opt for displays that added extraneous weather variables. However, meteorologists were also slower and less accurate with complex than with simple displays. This work highlights the importance of empirically testing principles of visual display design and suggests some limits to metarepresentational competence.
In: Journal of Medical Imaging, vol. 7, no. 2, pp. 1–22, 2020.
The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects' response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.
Benedetta Heimler; Francesco Pavani; Mieke Donk; Wieske van Zoest
In: Attention, Perception, and Psychophysics, vol. 76, no. 8, pp. 2398–2412, 2014.
Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.
Lukáš Hejtmánek; Ivana Oravcová; Jiří Motýl; Jiří Horáček; Iveta Fajnerová
In: International Journal of Human-Computer Studies, vol. 116, pp. 15–24, 2018.
There is a vibrant debate about the consequences of mobile devices for our cognitive capabilities. Use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual- and real-world experiments. Our goal was to investigate how the attention people pay to a GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment, and during the experiment we measured participants' eye movements. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spent with the GPS-like map, the less accurate spatial knowledge they manifested and the longer the paths they traveled without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants' subjective evaluation of their own navigation skills, with less confident navigators using the GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person's spatial learning, their use in general is modulated by one's perception of one's own navigation abilities.
Jens R Helmert; Sebastian Pannasch; Boris M Velichkovsky
In: Journal of Eye Movement Research, vol. 2, no. 1, pp. 1–8, 2008.
In gaze-controlled computer interfaces, dwell time is often used as the selection criterion. But this solution comes with several problems, especially in the temporal domain: eye-movement studies on scene perception have demonstrated that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch problem. Moreover, feedback of the user's actual eye position had not yet been systematically studied in the context of usability in gaze-based computer interaction. We present research on the usability of a simple eye-typing setup. Different dwell-time and eye-position feedback configurations were tested. Our results indicate that smoothing the raw eye position and introducing temporal delays in visual feedback enhance the system's functionality and usability. Best overall performance was obtained with a dwell time of 500 ms.
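The dwell-time selection scheme evaluated above can be sketched in a few lines: smooth the raw gaze samples, then trigger a selection once gaze has remained on a key for the dwell threshold. The sketch below is illustrative, not the authors' implementation; the 500 ms threshold follows the configuration the study found best, while the sample rate, smoothing window, and key geometry are assumptions.

```python
# Illustrative dwell-time selection for a gaze-controlled keyboard:
# smooth raw gaze with a moving average, then select a key once the
# smoothed gaze has dwelt on it for 500 ms (guarding against the
# Midas touch problem). All parameters here are assumed values.
SAMPLE_RATE_HZ = 60
DWELL_MS = 500
SMOOTH_WINDOW = 5  # samples averaged to reduce eye-tracker noise

def smooth(samples, window=SMOOTH_WINDOW):
    """Moving-average filter over (x, y) gaze samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append((sum(p[0] for p in chunk) / len(chunk),
                    sum(p[1] for p in chunk) / len(chunk)))
    return out

def hit(point, rect):
    """True if a gaze point falls inside a key's bounding rect."""
    x, y = point
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def dwell_select(samples, keys, dwell_ms=DWELL_MS, rate=SAMPLE_RATE_HZ):
    """Return the first key whose rect holds the smoothed gaze for
    dwell_ms milliseconds of consecutive samples, or None."""
    needed = int(dwell_ms * rate / 1000)  # consecutive samples required
    counts = {name: 0 for name in keys}
    for point in smooth(samples):
        for name, rect in keys.items():
            if hit(point, rect):
                counts[name] += 1
                if counts[name] >= needed:
                    return name
            else:
                counts[name] = 0  # gaze left the key: reset its dwell
    return None
```

For example, with a key "A" occupying the rect (0, 0, 100, 100), forty samples at (50, 50) exceed the 30-sample threshold and select "A", while ten samples do not.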
Olivier J Hénaff; Robbe L T Goris; Eero P Simoncelli
Perceptual straightening of natural videos Journal Article
In: Nature Neuroscience, vol. 22, pp. 984–991, 2019.
Many behaviors rely on predictions derived from recent visual input, but the temporal evolution of those inputs is generally complex and difficult to extrapolate. We propose that the visual system transforms these inputs to follow straighter temporal trajectories. To test this ‘temporal straightening' hypothesis, we develop a methodology for estimating the curvature of an internal trajectory from human perceptual judgments. We use this to test three distinct predictions: natural sequences that are highly curved in the space of pixel intensities should be substantially straighter perceptually; in contrast, artificial sequences that are straight in the intensity domain should be more curved perceptually; finally, naturalistic sequences that are straight in the intensity domain should be relatively less curved. Perceptual data validate all three predictions, as do population models of the early visual system, providing evidence that the visual system specifically straightens natural videos, offering a solution for tasks that rely on prediction.
Constanze Hesse; Tristan T Nakagawa; Heiner Deubel
Bimanual movement control is moderated by fixation strategies Journal Article
In: Experimental Brain Research, vol. 202, no. 4, pp. 837–850, 2010.
Our study examined the effects of performing a pointing movement with the left hand on the kinematics of a simultaneous grasping movement executed with the right hand. We were especially interested in the question of whether both movements can be controlled independently or whether interference effects occur. Since previous studies suggested that eye movements may play a crucial role in bimanual movement control, the effects of different fixation strategies were also studied. Human participants were either free to move their eyes (Experiment 1) or they had to fixate (Experiment 2) while doing the task. The results show that bimanual movement control differed fundamentally depending on the fixation condition: if free viewing was allowed, participants tended to perform the task sequentially, as reflected in grasping kinematics by a delayed grip opening and a poor adaptation of the grip to the object properties for the duration of the pointing movement. This behavior was accompanied by a serial fixation of the targets for the pointing and grasping movements. In contrast, when central fixation was required, both movements were performed fast and with no obvious interference effects. The results support the notion that bimanual movement control is moderated by fixation strategies. By default, participants seem to prefer a sequential behavior in which the eyes monitor what the hands are doing. However, when forced to fixate, they do surprisingly well in performing both movements in parallel.
Austin R Hicklin; Bradford T Ulery; Thomas A Busey; Maria Antonia Roberts; Jo Ann Buscaglia
In: Cognitive Research: Principles and Implications, vol. 4, no. 12, pp. 1–20, 2019.
Background: The comparison of fingerprints by expert latent print examiners generally involves repeating a process in which the examiner selects a small area of distinctive features in one print (a target group), and searches for it in the other print. In order to isolate this key element of fingerprint comparison, we use eye-tracking data to describe the behavior of latent fingerprint examiners on a narrowly defined “find the target” task. Participants were shown a fingerprint image with a target group indicated and asked to find the corresponding area of ridge detail in a second impression of the same finger and state when they found the target location. Target groups were presented on latent and plain exemplar fingerprint images, and as small areas cropped from the plain exemplars, to assess how image quality and the lack of surrounding visual context affected task performance and eye behavior. One hundred and seventeen participants completed a total of 675 trials. Results: The presence or absence of context notably affected the areas viewed and time spent in comparison; differences between latent and plain exemplar tasks were much less significant. In virtually all trials, examiners repeatedly looked back and forth between the images, suggesting constraints on the capacity of visual working memory. On most trials where context was provided, examiners looked immediately at the corresponding location: with context, median time to find the corresponding location was less than 0.3 s (second fixation); however, without context, median time was 1.9 s (five fixations). A few trials resulted in errors in which the examiner did not find the correct target location. Basic gaze measures of overt behaviors, such as speed, areas visited, and back-and-forth behavior, were used in conjunction with the known target area to infer the underlying cognitive state of the examiner. Conclusions: Visual context has a significant effect on the eye behavior of latent print examiners. 
Localization errors suggest how errors may occur in real comparisons: examiners sometimes compare an incorrect but similar target group and do not continue to search for a better candidate target group. The analytic methods and predictive models developed here can be used to describe the more complex behavior involved in actual fingerprint comparisons.
Corey D Holland; Oleg V Komogortsev
In: IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, pp. 2115–2126, 2013.
This paper presents an objective evaluation of the effects of eye tracking specification and stimulus presentation on the biometric viability of complex eye movement patterns. Six spatial accuracy tiers (0.5°, 1.0°, 1.5°, 2.0°, 2.5°, 3.0°), six temporal resolution tiers (1000, 500, 250, 120, 75, 30 Hz), and five stimulus types (simple, complex, cognitive, textual, random) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment capable of at least 0.5° spatial accuracy and 250 Hz temporal resolution for biometric purposes, whereas stimulus had little effect on the biometric viability of eye movements.
Seung Kweon Hong
In: Journal of the Ergonomics Society of Korea, vol. 34, no. 1, pp. 19–27, 2015.
Objective: The aim of this study is to investigate how well eye-movement times in visual target selection tasks with an eye input device follow the typical Fitts' Law model, and to compare vertical and horizontal eye-movement times. Background: Manual pointing typically provides an excellent fit to the Fitts' Law model. However, when an eye input device is used for visual target selection tasks, there has been some debate about whether eye-movement times can be described by Fitts' Law; more empirical studies are needed to resolve it, and this study is one such empirical contribution. In addition, many researchers have reported that the direction of movement in typical manual pointing affects movement times; the other question in this study is whether the direction of eye movement likewise affects eye-movement times. Method: Cursor movement times in visual target selection tasks were collected for both input devices. The layout of visual targets was set up in two configurations: cursor starting positions for vertical movements were at the top of the monitor with visual targets at the bottom, while cursor starting positions for horizontal movements were at the right of the monitor with visual targets at the left. Results: Although eye-movement time was described by Fitts' Law, the error rate was high and the correlation was relatively low (R2 = 0.80 for horizontal movements and R2 = 0.66 for vertical movements) compared to manual movement. Manual movement times did not differ significantly by movement direction, but eye-movement times did. Conclusion: Eye-movement times in the selection of visual targets with an eye-gaze input device can be described and predicted by Fitts' Law, and they differ significantly according to the direction of eye movement.
Application: The results of this study might help to understand eye movement times in visual target selection tasks by the eye input devices.
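The Fitts' Law model fitted in the study above predicts movement time from the distance D to the target and its width W. A small sketch using the common Shannon formulation MT = a + b·log2(D/W + 1); the intercept a and slope b here are illustrative placeholders (the abstract reports only the resulting R² values), and the R² helper shows how fit quality like the reported 0.80 and 0.66 would be computed.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """MT = a + b * ID. The coefficients a (seconds) and b (s/bit) are
    illustrative, not values estimated in the study."""
    return a + b * index_of_difficulty(distance, width)

def r_squared(observed, predicted):
    """Coefficient of determination used to judge model fit."""
    mean = sum(observed) / len(observed)
    ss_tot = sum((o - mean) ** 2 for o in observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# Doubling distance at a fixed width raises ID and hence predicted MT.
mt_near = predicted_movement_time(distance=200, width=50)
mt_far = predicted_movement_time(distance=400, width=50)
```

In a real fit, a and b would be estimated by linear regression of measured movement times on ID, separately for the horizontal and vertical layouts the study compares.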
David R Howell; Anna N Brilliant; Christina L Master; William P Meehan
In: Clinical Journal of Sport Medicine, vol. 30, no. 5, pp. 444–450, 2020.
OBJECTIVE: To determine the test-retest correlation of an objective eye-tracking device among uninjured youth athletes. DESIGN: Repeated-measures study. SETTING: Sports-medicine clinic. PARTICIPANTS: Healthy youth athletes (mean age = 14.6 ± 2.2 years; 39% women) completed a brief, automated, and objective eye-tracking assessment. INDEPENDENT VARIABLES: Participants completed the eye-tracking assessment at 2 different testing sessions. MAIN OUTCOME MEASURES: During the assessment, participants watched a 220-second video clip while it moved around a computer monitor in a clockwise direction as an eye tracker recorded eye movements. We obtained 13 eye movement outcome variables and assessed correlations between the assessments made at the 2 time points using Spearman's Rho (rs). RESULTS: Thirty-one participants completed the eye-tracking evaluation at 2 time points [median = 7 (interquartile range = 6-9) days between tests]. No significant differences in outcomes were found between the 2 testing times. Several eye movement variables demonstrated moderate to moderately high test-retest reliability. Combined eye conjugacy metric (BOX score
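The test-retest correlations above use Spearman's rho, which is a rank-based correlation: identical orderings of the two sessions give rho = 1. A minimal sketch, using the difference formula valid when there are no ties (the session values below are hypothetical, not data from the study):

```python
# Spearman's rank correlation, no-ties case, via
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).

def ranks(values):
    """Rank values 1..n (assumes no ties, so no averaging needed)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman's rho between two paired samples of equal length."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical session-1 vs session-2 scores for five athletes; the
# two sessions order the athletes identically, so rho = 1.0.
session1 = [10.0, 12.5, 9.1, 14.2, 11.0]
session2 = [10.4, 12.0, 9.5, 13.8, 11.6]
rho = spearman_rho(session1, session2)
```

Real eye-movement data would typically contain ties, for which a tie-corrected formulation (Pearson correlation on mid-ranks) is used instead.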
Yu feng Huang; Feng yang Kuo
In: Internet Research, vol. 21, no. 5, pp. 541–561, 2011.
Purpose – Because presentation formats, i.e. table v. graph, in shopping web sites may promote or inhibit deliberate consumer decision making, it is important to understand the effects of information presentation on deliberateness. This paper seeks to empirically test whether the table format enhances deliberate decision making, while the web map weakens the process. In addition, deliberateness can be influenced by the decision orientation, i.e. emotionally charged or accuracy oriented. Thus, the paper further examines the effect of presentations across these two decision orientations. Design/methodology/approach – Objective and detailed description of the decision process is used to examine the effects. A two (decision orientation: positive emotion v. accuracy) by two (presentation: map v. table) eye-tracking experiment is designed. Deliberateness is quantified with the information processing pattern summarized from eye movement data. Participants are required to make preferential choices from simple decision tasks. Findings – The results confirm that the table strengthens while the map weakens deliberateness. In addition, this effect is mostly evident across the two decision orientations. An explorative factor analysis further reveals that there are two major attention distribution functions (global v. local) underlying the decision process. Research limitations/implications – Only simple decision tasks are used in the present study and therefore complex tasks should be introduced to examine the effects in the future. Practical implications – For consumers, they should become aware that the table facilitates while the map diminishes deliberateness. For web businesses, they may try to strengthen the impulsivity in a web map filled with emotional stimuli. Originality/value – This research is one of the first attempts to investigate the joint effects of presentations and decision orientations on decision deliberateness in the internet domain. 
The eye movement data are also valuable because previous studies seldom provided such detailed description of the decision process.
Anke Huckauf; Mario H Urbina
In: ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1–14, 2011.
Controlling computers using eye movements can provide a fast and efficient alternative to the computer mouse. However, implementing object selection in gaze-controlled systems is still a challenge. Dwell times, or fixations on a certain object, typically used to trigger the selection of that object, have several disadvantages. We studied deviations from critical thresholds using an individual, task-specific adaptation method, which demonstrated enormous variability in optimal dwell times. We also developed an alternative approach that uses antisaccades for selection: highlighted objects are copied to one side of the object, and the object is selected by fixating the side opposite that copy, which requires inhibiting an automatic gaze shift toward the new object. Both techniques were compared in a selection task. Two experiments revealed superior performance, in terms of errors, for the individually adapted dwell times. Antisaccades provide an alternative to dwell-time selection, but they did not show an improvement over dwell time. We discuss potential improvements to the antisaccade implementation with which antisaccades might become a serious alternative to dwell times for object selection in gaze-controlled systems.
Lynn Huestegge; Eva Maria Skottke; Sina Anders; Jochen Müsseler; Günter Debus
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 13, no. 1, pp. 1–8, 2010.
Eye movements are a key behavior for visual information processing in traffic situations and for vehicle control. Previous research showed that effective ways of eye guidance are related to better hazard perception skills. Furthermore, hazard perception is reported to be faster for experienced drivers as compared to novice drivers. However, little is known about whether this difference can be attributed to the development of visual orientation or of hazard processing. In the present study, we compared eye movements of 20 inexperienced and 20 experienced drivers in a hazard perception task. We separately measured (a) the interval between the onset of a static hazard scene and the first fixation on a potential hazard, and (b) the interval between the first fixation on a potential hazard and the final response. While overall RT was faster for experienced compared to inexperienced drivers, the scanning patterns revealed that this difference was due to faster processing after the initial fixation on the hazard, whereas scene scanning times until the initial fixation on the hazard did not differ between groups.
Lynn Huestegge; Anne Böckler
In: Journal of Vision, vol. 16, no. 2, pp. 1–15, 2016.
Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards.
Samuel B Hutton; S Nolte
The effect of gaze cues on attention to print advertisements Journal Article
In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 887–892, 2011.
Print advertisements often employ images of humans whose gaze may be focussed on an object or region within the advertisement. Gaze cues are powerful factors in determining the focus of our attention, but there have been no systematic studies exploring the impact of gaze cues on attention to print advertisements. We tracked participants' eyes whilst they read an on-screen magazine containing advertisements in which the model either looked at the product being advertised or towards the viewer. When the model's gaze was directed at the product, participants spent longer looking at the product, the brand logo and the rest of the advertisement compared to when the model's gaze was directed towards the viewer. These results demonstrate that the focus of readers' attention can be readily manipulated by gaze cues provided by models in advertisements, and that these influences go beyond simply drawing attention to the cued area of the advertisement.
Elisa Infanti; Samuel D Schwarzkopf
Mapping sequences can bias population receptive field estimates Journal Article
In: NeuroImage, vol. 211, pp. 1–13, 2020.
Population receptive field (pRF) modelling is a common technique for estimating the stimulus-selectivity of populations of neurons using neuroimaging. Here, we aimed to address whether pRF properties estimated with this method depend on the spatio-temporal structure and the predictability of the mapping stimulus. We mapped the polar angle preference and tuning width of voxels in visual cortex (V1–V4) of healthy, adult volunteers. We compared sequences sweeping orderly through the visual field or jumping from location to location employing stimuli of different width (45° vs 6°) and cycles of variable duration (8 s vs 60 s). While we did not observe any systematic influence of stimulus predictability, the temporal structure of the sequences significantly affected tuning width estimates. Ordered designs with large wedges and short cycles produced systematically smaller estimates than random sequences. Interestingly, when we used small wedges and long cycles, we obtained larger tuning width estimates for ordered than random sequences. We suggest that ordered and random mapping protocols show different susceptibility to other design choices such as stimulus type and duration of the mapping cycle and can produce significantly different pRF results.
Leah A Irish; Allison C Veronda; Amanda E van Lamsweerde; Michael P Mead; Stephen A Wonderlich
In: International Journal of Behavioral Medicine, pp. 3–5, 2020.
Background: Although self-help strategies to improve sleep are widely accessible, little is known about the ways in which individuals interact with these resources and the extent to which people are successful at improving their own sleep based on sleep health recommendations. The present study developed a lab-based model of self-help behavior by observing the development of sleep health improvement plans (SHIPs) and examining factors that may influence SHIP development. Method: Sixty healthy, young adults were identified as poor sleepers during one week of actigraphy baseline and recruited to develop and implement a SHIP. Participants viewed a list of sleep health recommendations through an eye tracker and provided information on their current sleep health habits. Each participant implemented their SHIP for 1 week during which sleep was assessed with actigraphy. Results: Current sleep health habits, but not patterns of visual attention, predicted SHIP goal selection. Sleep duration increased significantly during the week of SHIP implementation. Conclusions: Findings indicate that the SHIP protocol is an effective strategy for observing self-help behavior and examining factors that influence goal selection. The increase in sleep duration suggests that individuals may be successful at extending their own sleep, though causal mechanisms have not yet been established. This study presents a lab-based protocol for studying self-help sleep improvement behavior and takes an initial step toward gaining knowledge required to improve sleep health recommendations.
Ondřej Javora; Tereza Hannemann; Kristina Volná; Filip Děchtěrenko; Tereza Tetourová; Tereza Stárková; Cyril Brom
In: Journal of Computer Assisted Learning, pp. 1–14, 2020.
The present study investigates the affective-motivational, attention, and learning effects of an unexplored emotional design manipulation: contextual animation (animation of contextual elements) in multimedia learning games (MLGs) for children. Participants (N = 134; Mage = 9.25; Grades 3 and 4) learned either from an experimental version of the MLG with a high amount of contextual animation or from an identical MLG with no contextual animation (control). Children strongly preferred (χ² = 87.04, p < .001) and found the experimental version more attractive (p < .001).
Yu Cin Jian; Chao Jung Wu
In: Computers in Human Behavior, vol. 61, pp. 622–632, 2016.
Eye-tracking technology can reflect readers' sophisticated cognitive processes and explain the psychological meanings of reading to some extent. This study investigated the function of diagrams with numbered arrows and illustrated text in conveying the kinematic information of machine operation by recording readers' eye movements and reading tests. Participants read two diagrams depicting how a flushing system works with or without numbered arrows. Then, they read an illustrated text describing the system. The results showed the arrow group significantly outperformed the non-arrow group on the step-by-step test after reading the diagrams, but this discrepancy was reduced after reading the illustrated text. Also, the arrow group outperformed the non-arrow group on the troubleshooting test measuring problem solving. Eye movement data showed the arrow group spent less time than the non-arrow group reading the diagram and text that conveyed less complicated concepts, but both groups allocated considerable cognitive resources to the complicated diagram and sentences. Overall, this study found learners were able to construct a less complex kinematic representation after reading static diagrams with numbered arrows, whereas constructing a more complex kinematic representation required text information. Another interesting finding was that, in some areas, the kinematic information conveyed via diagrams is independent of that conveyed via text.
Yu Cin Jian
In: Reading and Writing, vol. 30, no. 7, pp. 1447–1472, 2017.
This study investigated the cognitive processes and reader characteristics of sixth graders who had good and poor performance when reading scientific text with diagrams. We first measured the reading ability and reading self-efficacy of sixth-grade participants, and then recorded their eye movements while they were reading an illustrated scientific text and scored their answers to content-related questions. Finally, the participants evaluated the difficulty of the article, the attractiveness of the content and diagram, and their learning performance. The participants were then classified into groups based on how many correct responses they gave to questions related to reading. The results showed that readers with good performance had better character recognition ability and reading self-efficacy, were more attracted to the diagrams, and had higher self-evaluated learning levels than the readers with poor performance did. Eye-movement data indicated that readers with good performance spent significantly more reading time on the whole article, the text section, and the diagram section than the readers with poor performance did. Interestingly, readers with good performance had significantly longer mean fixation duration on the diagrams than readers with poor performance did; further, readers with good performance made more saccades between the text and the diagrams. Additionally, sequential analysis of eye movements showed that readers with good performance preferred to observe the diagram rather than the text after reading the title, but this tendency was not present in readers with poor performance. In sum, using eye-tracking technology and several reading tests and questionnaires, we found that various cognitive aspects (reading strategy, diagram utilization) and affective aspects (reading self-efficacy, article likeness, diagram attraction, and self-evaluation of learning) affected sixth graders' reading performance in this study.
Yu Cin Jian; Hwa Wei Ko
In: Computers and Education, vol. 113, pp. 263–279, 2017.
In this study, eye movement recordings and comprehension tests were used to investigate children's cognitive processes and comprehension when reading illustrated science texts. Ten-year-old children (N = 42) who were beginning to read to learn, with high and low reading ability read two illustrated science texts in Chinese (one medium-difficult article, one difficult article), and then answered questions that measured comprehension of textual and pictorial information as well as text-and-picture integration. The high-ability group outperformed the low-ability group on all questions. Eye movement analyses showed that both group of students spent roughly the same amount of time reading both articles, but had different methods of reading them. The low-ability group was inclined to read what seemed easier to them and read the text more. The high-ability group attended more to the difficult article and made an effort to integrate the textual and pictorial information. During a first-pass reading of the difficult article, high- but not low-ability readers returned to the previous paragraph. The low-ability readers spent more time reading the less difficult article and not the difficult one that required teachers' attention. Suggestions for classroom instruction are proposed accordingly.
Sabrina Karl; Magdalena Boch; Zsófia Virányi; Claus Lamm; Ludwig Huber
Training pet dogs for eye-tracking and awake fMRI Journal Article
In: Behavior Research Methods, vol. 52, no. 2, pp. 838–856, 2020.
In recent years, two well-developed methods of studying mental processes in humans have been successively applied to dogs. First, eye-tracking has been used to study visual cognition without distraction in unrestrained dogs. Second, noninvasive functional magnetic resonance imaging (fMRI) has been used for assessing the brain functions of dogs in vivo. Both methods, however, require dogs to sit, stand, or lie motionless while remaining attentive for several minutes, during which time their brain activity and eye movements are measured. Whereas eye-tracking in dogs is performed in a quiet and, apart from the experimental stimuli, nonstimulating and highly controlled environment, MRI scanning can only be performed in a very noisy and spatially restraining MRI scanner, in which dogs need to feel relaxed and stay motionless in order to study their brain and cognition with high precision. Here we describe in detail a training regime that is perfectly suited to train dogs in the required skills, with a high success probability and while keeping to the highest ethical standards of animal welfare—that is, without using aversive training methods or any other compromises to the dog's well-being for both methods. By reporting data from 41 dogs that successfully participated in eye-tracking training and 24 dogs in fMRI training, we provide robust qualitative and quantitative evidence for the quality and efficiency of our training methods. By documenting and validating our training approach here, we aim to inspire others to use our methods to apply eye-tracking or fMRI for their investigations of canine behavior and cognition.
Sabrina Karl; Magdalena Boch; Anna Zamansky; Dirk van der Linden; Isabella C Wagner; Christoph J Völter; Claus Lamm; Ludwig Huber
In: Scientific Reports, vol. 10, pp. 1–15, 2020.
Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, and the familiar person relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Ioanna Katidioti; Jelmer P Borst; Douwe J Bierens de Haan; Tamara Pepping; Marieke K van Vugt; Niels A Taatgen
In: International Journal of Human-Computer Interaction, vol. 32, no. 10, pp. 791–801, 2016.
Interruptions are prevalent in everyday life and can be very disruptive. An important factor that affects the level of disruptiveness is the timing of the interruption: Interruptions at low-workload moments are known to be less disruptive than interruptions at high-workload moments. In this study, we developed a task-independent interruption management system (IMS) that interrupts users at low-workload moments in order to minimize the disruptiveness of interruptions. The IMS identifies low-workload moments in real time by measuring users' pupil dilation, which is a well-known indicator of workload. Using an experimental setup we showed that the IMS succeeded in finding the optimal moments for interruptions and marginally improved performance. Because our IMS is task-independent—it does not require a task analysis—it can be broadly applied.
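The core idea of triggering interruptions at low-workload moments inferred from pupil size can be sketched as below. This is an illustrative sketch only; the class name, window length, and threshold fraction are assumptions, not the authors' parameters.

```python
from collections import deque

class InterruptionManager:
    """Flag low-workload moments when pupil diameter dips below baseline."""

    def __init__(self, window=50, threshold=0.95):
        self.history = deque(maxlen=window)  # recent pupil diameters (mm)
        self.threshold = threshold           # fraction of baseline

    def update(self, pupil_mm):
        self.history.append(pupil_mm)

    def low_workload(self):
        if len(self.history) < self.history.maxlen:
            return False                     # no stable baseline yet
        baseline = sum(self.history) / len(self.history)
        # Smaller pupil than the running baseline is taken here as a
        # proxy for lower workload, hence a better moment to interrupt.
        return self.history[-1] < self.threshold * baseline
```

A real system would additionally smooth the signal and correct for luminance, since pupil size responds strongly to light as well as workload.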
Loes T E Kessels; Robert A C Ruiter
Eye movement responses to health messages on cigarette packages Journal Article
In: BMC Public Health, vol. 12, no. 1, pp. 1–9, 2012.
BACKGROUND: While the majority of the health messages on cigarette packages contain threatening health information, previous studies indicate that risk information can trigger defensive reactions, especially when the information is self-relevant (i.e., smokers). Providing coping information, information that provides help for quitting smoking, might increase attention to health messages instead of triggering defensive reactions. METHODS: Eye-movement registration can detect attention preferences for different health education messages over a longer period of time during message exposure. In a randomized, experimental study with 23 smoking and 41 non-smoking student volunteers, eye movements were recorded for sixteen self-created cigarette packages containing health texts that presented either high risk or coping information combined with a high threat or a low threat smoking-related photo. RESULTS: Results of the eye movement data showed that smokers tend to spend more time looking (i.e., more unique fixations and longer dwell time) at the coping information than at the high risk information irrespective of the content of the smoking-related photo. Non-smokers tend to spend more time looking at the high risk information than at the coping information when the information was presented in combination with a high threat smoking photo. When a low threat photo was presented, non-smokers paid more attention to the coping information than to the high risk information. Results for the smoking photos showed more attention allocation for low threat photos that were presented in combination with high risk information than for low threat photos in combination with coping information. No attention differences were found for the high threat photos. CONCLUSIONS: Non-smokers demonstrated an attention preference for high risk information as opposed to coping information, but only when text information was presented in combination with a high threat photo. For smokers, however, our findings suggest more attention allocation for coping information than for health risk information. This preference for coping information is not reflected in current health messages to motivate smokers to quit smoking. Coping information should be more frequently implemented in health message design to increase attention for these messages and thus contribute to effective persuasion.
Josiah P J King; Jia E Loy; Hannah Rohde; Martin Corley
Interpreting nonverbal cues to deception in real time Journal Article
In: PLoS ONE, vol. 15, no. 3, pp. 1–25, 2020.
When questioning the veracity of an utterance, we perceive certain non-linguistic behaviours to indicate that a speaker is being deceptive. Recent work has highlighted that listeners' associations between speech disfluency and dishonesty are detectable at the earliest stages of reference comprehension, suggesting that the manner of spoken delivery influences pragmatic judgements concurrently with the processing of lexical information. Here, we investigate the integration of a speaker's gestures into judgements of deception, and ask if and when associations between nonverbal cues and deception emerge. Participants saw and heard a video of a potentially dishonest speaker describe treasure hidden behind an object, while also viewing images of both the named object and a distractor object. Their task was to click on the object behind which they believed the treasure to actually be hidden. Eye and mouse movements were recorded. Experiment 1 investigated listeners' associations between visual cues and deception, using a variety of static and dynamic cues. Experiment 2 focused on adaptor gestures. We show that a speaker's nonverbal behaviour can have a rapid and direct influence on listeners' pragmatic judgements, supporting the idea that communication is fundamentally multimodal.
Sequential Processing in Comprehension of Hierarchical Graphs Journal Article
In: Applied Cognitive Psychology, vol. 18, pp. 467–480, 2004.
Hierarchical graphs represent the relationships between non-numerical entities or concepts (like computer file systems, family trees, etc). Graph nodes represent the concepts and interconnecting lines represent the relationships. We recorded participants' eye movements while viewing such graphs to test two possible models of graph comprehension. Graph readers had to answer interpretive questions, which required comparisons between two graph nodes. One model postulates a search and a combined search-reasoning stage of graph comprehension (two-stage model), whereas the second model predicts three stages, two stages devoted to the search of the relevant graph nodes and a separate reasoning stage. A detailed analysis of the eye movement data provided clear support for the three-stage model. This is in line with recent studies, which suggest that participants serialize problem solving tasks in order to minimize the overall processing load.
In: Applied Cognitive Psychology, vol. 25, no. 6, pp. 893–905, 2011.
Hierarchical graphs (e.g. file system browsers, family trees) represent objects (e.g. files, folders) as graph nodes, and relations (subfolder relations) between them as lines. In three experiments, participants viewed such graphs and carried out tasks that either required search for two target nodes (Experiment 1A), reasoning about their relation (Experiment 1B), or both (Experiment 2). We recorded eye movements and used the number of fixations in different phases to identify distinct stages of comprehension. Search in graphs proceeded like search in standard visual search tasks and was mostly unaffected by graph properties. Reasoning occurred typically in a separate stage at the end of comprehension and was affected by intersecting graph lines. The alignment of nodes, together with linguistic factors, may also affect comprehension. Overall, there was good evidence to suggest that participants read graphs in a sequential manner, and that this is an economical approach to comprehension.
Moritz Köster; Marco Rüth; Kai Christoph Hamborg; Kai Kaspar
In: Applied Cognitive Psychology, vol. 29, no. 2, pp. 181–192, 2015.
Internet companies collect a vast amount of data about their users in order to personalize banner ads. However, very little is known about the effects of personalized banners on attention and memory. In the present study, 48 subjects performed search tasks on web pages containing personalized or nonpersonalized banners. Overt attention was measured by an eye-tracker, and recognition of banner and task-relevant information was subsequently examined. The entropy of fixations served as a measure for the overall exploration of web pages. Results confirm the hypotheses that personalization enhances recognition for the content of banners while the effect on attention was weaker and partially nonsignificant. In contrast, overall exploration of web pages and recognition of task-relevant information was not influenced. The temporal course of fixations revealed that visual exploration of banners typically proceeds from the picture to the logo and finally to the slogan. We discuss theoretical and practical implications.
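The entropy of fixations used above as a measure of overall web-page exploration can be sketched as follows: fixations are binned into a coarse spatial grid and Shannon entropy is computed over the resulting distribution. The grid size and function name are assumptions, not details from the paper.

```python
import math

def fixation_entropy(fixations, width, height, bins=4):
    """Shannon entropy (bits) of fixation positions over a bins x bins grid.

    fixations: list of (x, y) coordinates within a width x height page.
    Higher entropy = fixations spread more evenly = more exploration.
    """
    counts = {}
    for x, y in fixations:
        cell = (min(int(x * bins / width), bins - 1),
                min(int(y * bins / height), bins - 1))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

With this formulation, entropy is 0 when every fixation lands in one cell and reaches its maximum when fixations are spread uniformly over the grid.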
Ellen M Kok; Halszka Jarodzka; Anique B H de Bruin; Hussain A N BinAmir; Simon G F Robben; Jeroen J G van Merriënboer
Systematic viewing in radiology: Seeing more, missing less? Journal Article
In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016.
To prevent radiologists from overlooking lesions, radiology textbooks recommend "systematic viewing," a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo training in either systematic, full-coverage (without being systematic), or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage.
Oleg V Komogortsev; Corey D Holland; Alex Karpov; Larry R Price
In: ACM Transactions on Applied Perception, vol. 11, no. 4, pp. 1–17, 2014.
This article proposes and evaluates a novel biometric approach utilizing the internal, nonvisible, anatomical structure of the human eye. The proposed method estimates the anatomical properties of the human oculomotor plant from the measurable properties of human eye movements, utilizing a two-dimensional linear homeomorphic model of the oculomotor plant. The derived properties are evaluated within a biometric framework to determine their efficacy in both verification and identification scenarios. The results suggest that the physical properties derived from the oculomotor plant model are capable of achieving 20.3% equal error rate and 65.7% rank-1 identification rate on high-resolution equipment involving 32 subjects, with biometric samples taken over four recording sessions; or 22.2% equal error rate and 12.6% rank-1 identification rate on low-resolution equipment involving 172 subjects, with biometric samples taken over two recording sessions.
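The equal error rate figures reported above can be estimated from genuine and impostor similarity scores by sweeping a decision threshold until the false accept and false reject rates meet. The following is an illustrative sketch of that standard computation, not the authors' evaluation code.

```python
def equal_error_rate(genuine, impostor):
    """Approximate EER from score lists; higher score = more likely genuine."""
    thresholds = sorted(set(genuine + impostor))
    best = (2.0, None)  # (smallest |FAR - FRR| seen, EER estimate)
    for t in thresholds:
        frr = sum(g < t for g in genuine) / len(genuine)     # false rejects
        far = sum(i >= t for i in impostor) / len(impostor)  # false accepts
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)  # EER: rates where curves cross
    return best[1]
```

Production biometric evaluations interpolate between thresholds rather than taking the midpoint at the closest crossing, but the principle is the same.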
Oleg V Komogortsev; Alexey Karpov; Corey D Holland
In: IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 716–725, 2015.
This paper investigates liveness detection techniques in the area of eye movement biometrics. We investigate a specific scenario, in which an impostor constructs an artificial replica of the human eye. Two attack scenarios are considered: 1) the impostor does not have access to the biometric templates representing authentic users, and instead utilizes average anatomical values from the relevant literature and 2) the impostor gains access to the complete biometric database, and is able to employ exact anatomical values for each individual. In this paper, liveness detection is performed at the feature and match score levels for several existing forms of eye movement biometric, based on different aspects of the human visual system. The ability of each technique to differentiate between live and artificial recordings is measured by its corresponding false spoof acceptance rate, false live rejection rate, and classification rate. The results suggest that eye movement biometrics are highly resistant to circumvention by artificial recordings when liveness detection is performed at the feature level. Unfortunately, not all techniques provide feature vectors that are suitable for liveness detection at the feature level. At the match score level, the accuracy of liveness detection depends highly on the biometric techniques employed.
Oleg V Komogortsev; Alexey Karpov
In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016.
This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al.
Kentaro Kotani; Yuji Yamaguchi; Takafumi Asao; Ken Horii
In: International Journal of Human-Computer Interaction, vol. 26, no. 4, pp. 361–376, 2010.
The objective of this study was to construct and empirically evaluate an improved, online eye-typing interface with respect to its practical usability. The system used the concept of saccadic latency, a silent period of 200 to 250 msec that precedes the initiation of a saccade, to identify the user's intentional text entry. Ten individuals participated in the experiment that was conducted on 2 consecutive days, with three blocks of trials conducted on each day. A block included five trials, each of which involved completing the text entry of a short sentence using this eye-typing interface. The proposed interface was evaluated by the user's performance based on indices including typing speed and an error index. For defining the error index, the overproduction rates (ORs) were used. The results showed an average OR of 0.032 and average typing speed of 27.1 characters typed per minute. The results also revealed that typing speed varied as an effect of participant, day, and block. The characteristics of the proposed interface were compared with those of related eye-typing interfaces to frame directions for further study of eye-typing interfaces.
Archonteia Kyroudi; Kristoffer Petersson; Mahmut Ozsahin; Jean Bourhis; Francois Bochud; Raphaël Moeckli
In: Zeitschrift fur Medizinische Physik, vol. 28, pp. 318–324, 2018.
Background and purpose: Treatment plan evaluation is a clinical decision-making problem that involves visual search and analysis in a contextually rich environment, including delineated structures and isodose lines superposed on CT data. It is a two-step process that includes visual analysis and clinical reasoning. In this work, we used eye tracking methods to gain more knowledge about the treatment plan evaluation process in radiation therapy. Materials and methods: Dose distributions on a single transverse slice of ten prostate cancer treatment plans were presented to eight decision makers. Their eye movements and fixations were recorded with an EyeLink1000 remote eye-tracker. Total evaluation time, dwell time, number and duration of fixations on pre-segmented areas of interest were measured. Results: The main structures receiving more and longer fixations (PTV, rectum, bladder) correspond to the main trade-offs evaluated in a typical prostate plan. Radiation oncologists made more fixations on the main structures compared to the medical physicists. Radiation oncologists fixated longer on the rectum when visited for the first time, while medical physicists fixated longer on the bladder. Conclusion: Our results quantify differences in the visual evaluation patterns between radiation oncologists and medical physicists, which indicate differences in their decision making strategies.
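The dwell-time measure on pre-segmented areas of interest used above can be computed from a fixation list as sketched below. Rectangular, non-overlapping AOIs and the tuple format are simplifying assumptions for illustration.

```python
def dwell_times(fixations, aois):
    """Total fixation duration per area of interest (AOI).

    fixations: list of (x, y, duration_ms) tuples.
    aois: dict mapping AOI name -> bounding box (x0, y0, x1, y1).
    """
    totals = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # AOIs assumed non-overlapping
    return totals
```

In practice, delineated structures such as the PTV, rectum, and bladder would be irregular contours rather than rectangles, so a point-in-polygon test would replace the bounding-box check.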
Miguel A Lago; Craig K Abbey; Miguel P Eckstein
Foveated model observers for visual search in 3D medical images Journal Article
In: IEEE Transactions on Medical Imaging, 2020.
Model observers have a long history of success in predicting human observer performance in clinically relevant detection tasks. New 3D image modalities provide more signal information but vastly increase the search space to be scrutinized. Here, we compared standard linear model observers (ideal observers, the non-pre-whitening matched filter with eye filter, and various versions of Channelized Hotelling models) to human performance when searching 3D 1/f^2.8 filtered noise images and assessed the relationship of 3D search to the more traditional location-known-exactly detection tasks and to 2D search. We investigated two different signal types that vary in their detectability away from the point of fixation (the visual periphery). We show that the influence of 3D search on human performance interacts with the signal's detectability in the visual periphery. Detection performance for signals that are difficult to detect in the visual periphery deteriorates greatly in 3D search but not in 3D location-known-exactly tasks or 2D search. Standard model observers do not predict the interaction between 3D search and signal type. A proposed extension of the Channelized Hotelling model (the foveated search model), which processes the image with reduced spatial detail away from the point of fixation, explores the image through eye movements, and scrolls across slices, successfully predicts the interaction observed in humans as well as the types of errors in 3D search. Together, the findings highlight the need for foveated model observers in image quality evaluation with 3D search.
Anthony J Lambert; Tanvi Sharma; Nathan Ryckman
In: Vision, vol. 4, pp. 1–13, 2020.
Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than vision for perception. In this preliminary investigation, older and younger adults, with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire), completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible, in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.
Linnéa Larsson; Marcus Nyström; Richard Andersson; Martin Stridh
In: Biomedical Signal Processing and Control, vol. 18, pp. 145–152, 2015.
A novel algorithm for the detection of fixations and smooth pursuit movements in high-speed eye-tracking data is proposed, which uses a three-stage procedure to divide the intersaccadic intervals into a sequence of fixation and smooth pursuit events. The first stage performs a preliminary segmentation while the latter two stages evaluate the characteristics of each such segment and reorganize the preliminary segments into fixations and smooth pursuit events. Five different performance measures are calculated to investigate different aspects of the algorithm's behavior. The algorithm is compared to the current state-of-the-art (I-VDT and the algorithm in ), as well as to annotations by two experts. The proposed algorithm performs considerably better (average Cohen's kappa 0.42) than the I-VDT algorithm (average Cohen's kappa 0.20) and the algorithm in  (average Cohen's kappa 0.16), when compared to the experts' annotations.
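Cohen's kappa, the agreement measure reported above, scores sample-by-sample agreement between two event-label sequences (e.g., the algorithm's output versus an expert's annotation) while correcting for chance agreement. A minimal sketch, with hypothetical label values:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed proportion of agreeing samples
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each rater's marginal label counts
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For instance, with labels ['F', 'F', 'P', 'P'] against ['F', 'F', 'P', 'F'], observed agreement is 0.75, chance agreement is 0.5, and kappa is 0.5.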
Mark A LeBoeuf; Jessica M Choplin; Debra Pogrund Stark
In: Journal of Behavioral Decision Making, vol. 29, no. 2-3, pp. 307–321, 2016.
The federal government mandates the use of home-loan disclosure forms to facilitate understanding of offered loans, enable comparison shopping, and prevent predatory lending. Predatory lending persists, however, and scant research has examined how salespeople might undermine the effectiveness of these forms. Three eye-tracking studies (a laboratory simulation and two controlled experiments) investigated how conversational norms affect the information consumers can glean from these forms. Study 1 recreated in the laboratory the effects that previous literature suggests are likely happening in the field, namely, that following or violating conversational norms affects the information that consumers can glean from home-loan disclosure forms and the home-loan decisions they make. Studies 2 and 3 were controlled experiments that isolated the possible factors responsible for the observed biases in the information gleaned from these forms. The results suggest that attentional biases are largely responsible for the effects of conversation on the information consumers obtain and that perceived importance plays little to no role. Policy implications and how eye-tracking technology can be employed to improve decision making are considered.
Minyoung Lee; Randolph Blake; Sujin Kim; Chai-Youn Kim
In: Proceedings of the National Academy of Sciences, vol. 112, no. 27, pp. 8493–8498, 2015.
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Tsu Chiang Lei; Shih Chieh Wu; Chi Wen Chao; Su Hsin Lee
In: GeoJournal, vol. 81, no. 2, pp. 153–167, 2016.
With the evolution of mapping technology, electronic maps are gradually evolving from traditional 2D formats toward 3D formats that represent environmental features. However, these two types of spatial maps might produce different visual attention modes, leading to different spatial wayfinding (or searching) decisions. This study designs a search task for a spatial object to examine whether different types of spatial maps indeed produce different visual attention and decision making. We use eye-tracking technology to record the visual attention of 44 test subjects with normal eyesight when looking at 2D and 3D maps. The two types of maps cover the same scope, but their contents differ in terms of composition, material, and visual observation angle. We use t tests to analyze differences in eye movement indices, applying spatial autocorrelation to analyze the aggregation of fixation points and the strength of that aggregation. The results show that aside from search time, there are significant differences between 2D and 3D electronic maps in terms of fixation time and saccade amplitude. Using the spatial autocorrelation model to analyze the spatial distribution of fixation points, we find that in the 2D electronic map the spatial clustering of fixation points occurs within a range of around 12° from the center, accompanied by a shorter viewing time and larger saccade amplitude, whereas in the 3D electronic map the spatial clustering of fixation points occurs within a range of around 9° from the center, accompanied by a longer viewing time and smaller saccade amplitude. These two statistical tests demonstrate that 2D and 3D electronic maps produce different viewing behaviors. The 2D electronic map is more likely to produce fast browsing behavior, which uses rapid eye movements to piece together preliminary information about the overall environment.
This enables basic information about the environment to be obtained quickly, but at the cost of the level of detail of the information obtained. However, in the 3D electronic map, more focused browsing occurs. Longer fixations enable the user to gather detailed information from points of interest on the map, and thereby obtain more information about the environment (such as material, color, and depth) and determine the interaction between people and the environment. However, this mode requires a longer viewing time and greater use of directed attention, and therefore may not be conducive to use over a longer period of time. After summarizing the above research findings, the study suggests that future electronic maps can consider combining 2D and 3D modes to simultaneously display electronic map content. Such a mixed viewing mode can provide a more effective viewing interface for human–machine interaction in cyberspace.
Qian Li; Zhuowei Joy Huang; Kiel Christianson
In: Tourism Management, vol. 54, pp. 243–258, 2016.
This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language.
Fan Li; Chun Hsien Chen; Gangyan Xu; Li Pheng Khoo
In: IEEE Transactions on Human-Machine Systems, vol. 50, no. 5, pp. 465–474, 2020.
Eye-tracking-based human fatigue detection at traffic control centers suffers from an unavoidable problem of low-quality eye-tracking data caused by noisy and missing gaze points. In this article, the authors conducted pioneering work by investigating the effects of data quality on eye-tracking-based fatigue indicators and by proposing a hierarchical interpolation approach to extract eye-tracking-based fatigue indicators from low-quality eye-tracking data. This approach adaptively classifies the missing gaze points and hierarchically interpolates them based on the temporal-spatial characteristics of the gaze points. In addition, definitions of applicable fixations and saccades for human fatigue detection are proposed. Two experiments were conducted to verify the effectiveness and efficiency of the method in extracting eye-tracking-based fatigue indicators and detecting human fatigue. The results indicate that most eye-tracking parameters are significantly affected by the quality of the eye-tracking data. In addition, the proposed approach achieves much better performance than the classic velocity threshold identification algorithm (I-VT) and a state-of-the-art method (U'n'Eye) in parsing low-quality eye-tracking data. Specifically, the proposed method attained relatively stable eye-tracking-based fatigue indicators and reported the highest accuracy in human fatigue detection. These results are expected to facilitate the application of eye-movement-based human fatigue detection in practice.
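The I-VT baseline mentioned above labels each gaze sample by comparing its angular velocity to a fixed threshold. A minimal sketch of that classic algorithm, with a commonly used but here merely illustrative 30 deg/s threshold:

```python
def ivt_classify(velocities, threshold=30.0):
    """Velocity-threshold identification (I-VT): label each gaze sample
    as 'fixation' if its angular velocity (deg/s) is below the threshold,
    otherwise 'saccade'."""
    return ['fixation' if v < threshold else 'saccade' for v in velocities]
```

Consecutive samples with the same label are then typically merged into fixation and saccade events; noisy or missing samples are exactly what the article's interpolation approach is designed to handle before this step.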
Sixin Liao; Lili Yu; Erik D Reichle; Jan Louis Kruger
Using eye movements to study the reading of subtitles in video Journal Article
In: Scientific Studies of Reading, pp. 1–19, 2020.
This article reports the first eye-movement experiment to examine how the presence versus absence of concurrent video content and presentation speed affect the reading of subtitles. Results indicated that participants adapted their visual routines to examine video content while simultaneously prioritizing the reading of subtitles, especially when the latter was displayed only briefly. Although decisions about when and where to move the eyes largely remained under local (cognitive) control, this control was also modulated by global task demands, suggesting an integration of local and global eye-movement control. The theoretical and pedagogical implications of these findings are discussed, and we also briefly describe a new theoretical framework for understanding all forms of multimodal reading, including the reading of subtitles in video.
John J H Lin; Sunny S J Lin
In: International Journal of Science and Mathematics Education, vol. 12, no. 3, pp. 605–627, 2014.
The present study investigated (a) whether the perceived cognitive load differed when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements during the comprehension of geometry problems revealed sources of cognitive load. In the first investigation, three characteristics of geometry configurations, the number of informational elements, the number of element interactivities, and the level of mental operations, were assumed to account for the increasing difficulty. A sample of 311 9th grade students solved five geometry problems that required knowledge of similar triangles in a computer-supported environment. In the second experiment, 63 participants solved the same problems while their eye movements were recorded. The results indicated that (1) the five problems differed in pass rate and in self-reported cognitive load; (2) because the successful solvers were very swift in pattern recognition and visual integration, their fixations did not clearly show valuable information; (3) more attention and more time (shown by the heat maps, dwell time, and fixation counts) were given to reading the more difficult configurations than the intermediate or easier configurations; and (4) in addition to the number of elements and element interactivities, the level of mental operations accounts for the major sources of cognitive load in configuration comprehension. The results suggest implications for the design of geometry diagrams in secondary school mathematics textbooks.
John J H Lin; Sunny S J Lin
In: Journal of Eye Movement Research, vol. 7, no. 1, pp. 1–15, 2014.
The present study investigated the following issues: (1) whether differences are evident in the eye movement measures of successful and unsuccessful problem-solvers; (2) what is the relationship between perceived difficulty and eye movement measures; and (3) whether eye movements in various AOIs differ when solving problems. Sixty-three 11th grade students solved five geometry problems about the properties of similar triangles. A digital drawing tablet and sensitive pressure pen were used to record the responses. The results indicated that unsuccessful solvers tended to have more fixation counts, run counts, and longer dwell time on the problem area, whereas successful solvers focused more on the calculation area. In addition, fixation counts, dwell time, and run counts in the diagram area were positively correlated with the perceived difficulty, suggesting that understanding similar triangles may require translation or mental rotation. We argue that three eye movement measures (i.e., fixation counts, dwell time, and run counts) are appropriate for use in examining problem solving given that they differentiate successful from unsuccessful solvers and correlate with perceived difficulty. Furthermore, the eye-tracking technique provides objective measures of students' cognitive load for instructional designers.
Hsin Hui Lin; Shu Fei Yang
An eye movement study of attribute framing in online shopping Journal Article
In: Journal of Marketing Analytics, vol. 2, no. 2, pp. 72–80, 2014.
This study uses an eye-tracking method to explore the framing effect on observed eye movements and purchase intention in online shopping. The results show that negative framing induces more active eye movements, and that function and non-function attributes attract more eye movements with higher intensity. The scanpath across the areas of interest also reveals a consistent pattern. These findings have practical implications for e-sellers seeking to improve communication with customers.
Chiuhsiang Joe Lin; Chi Chan Chang; Yung-Hui Lee
Evaluating camouflage design using eye movement data Journal Article
In: Applied Ergonomics, vol. 45, no. 3, pp. 714–723, 2014.
This study investigates the characteristics of eye movements during a camouflaged target search task. Camouflaged targets were randomly presented on two natural landscapes. The performance of each camouflage design was assessed by target detection hit rate, detection time, number of fixations on display, first saccade amplitude to target, number of fixations on target, fixation duration on target, and subjective ratings of search task difficulty. The results showed that the camouflage patterns significantly affected eye-movement behavior, especially first saccade amplitude and fixation duration, and the findings could be used to increase the sensitivity of camouflage assessment. We hypothesized that the assessment could be made with regard to differences in the detectability and discriminability of the camouflage patterns, which could explain the less efficient search behavior observed in the eye movements. Overall, data obtained from eye movements can significantly enhance the interpretation of the effects of different camouflage designs.
Chiuhsiang Joe Lin; Chi Chan Chang; Bor-Shong Liu
In: PLoS ONE, vol. 9, no. 2, pp. e87310, 2014.
BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could potentially serve as a camouflage assessment tool. METHODOLOGY: In this study, we quantify the relationship between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze their strengths and weaknesses. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach; the correlation coefficient of the UIQI was higher than those of the other methods, and the approach was highly correlated with the human target-searching results. This indicates that the method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
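The UIQI referenced above is Wang and Bovik's universal image quality index, which combines correlation, luminance, and contrast comparisons between two images into a single score in [-1, 1]. A minimal sketch over flattened grayscale pixel lists; the study's exact implementation (e.g., its sliding-window size) is not specified here:

```python
def uiqi(x, y):
    """Universal Image Quality Index between two flattened pixel lists.
    Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2)); Q = 1 iff x == y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx * mx + my * my))
```

In practice the index is computed in small sliding windows and averaged, which is what makes it sensitive to local structural distortions such as a target region breaking a camouflage pattern.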
Tomas Lindberg; Risto Näsänen
In: Displays, vol. 24, no. 3, pp. 111–120, 2003.
Alphanumeric and graphical information needs to be presented in such a way that its perception is accurate, fast and as effortless as possible. This study investigated the effects of spacing and size of individual interface elements on their perception. Experiment 1 investigated the effect of icon spacing on the speed of visual search for a target icon and determined the perceptual span for icons, that is, the number of icons that can be processed by one eye fixation. Experiment 2 studied the effect of size, and experiment 3 the subjective preferences for levels of icon spacing. The results of experiment 1 showed that spacing does not have an effect on search times. On average the perceptual span for icons was found to be 25 arranged in a 5 × 5 array. The size of the interface elements, on the other hand, was found to have a great effect. Icons smaller than 0.7° resulted in significantly raised search times. Experiment 3 revealed that an inter-element spacing of one icon is to be preferred and a spacing of zero icons is to be avoided.
Haoxue Liu; Guangming Ding; Weihua Zhao; Hui Wang; Kaizheng Liu; Ludan Shi
In: Journal of Transportation Safety and Security, vol. 3, no. 1, pp. 27–37, 2011.
To avoid traffic accidents in long tunnel entrance sections, the authors studied the variation of drivers' visual features based on real road experiments on an expressway. Drivers' visual feature parameters were recorded in real time using EyeLink (an eye-tracking system) during the driving test. Mathematical models of drivers' fixation duration, number of fixations, and saccade amplitude at the tunnel entrance were established based on BP Neural Network (Error Back Propagation Network) simulation. Results showed that fixation duration increased gradually as the vehicle moved closer to the tunnel entrance, whereas the number of fixations and saccade amplitude decreased. Meanwhile, drivers' fixations shifted from straight ahead to the right side, which increased the number of fixations on the right side. After drivers entered the tunnel, fixation duration first decreased and then increased, while the number of fixations and saccade amplitude kept increasing.
Tzu Chien Liu; Melissa Hui Mei Fan; Fred Paas
In: Computers and Education, vol. 70, pp. 9–20, 2014.
Recent research has shown that students involved in computer-based second language learning prefer a digital dictionary in which a word can be looked up by clicking on it with a mouse (i.e., click-on dictionary) to a digital dictionary in which a word can be looked up by typing it on a keyboard (i.e., key-in dictionary). This study investigated whether digital dictionary format also differentially affects students' incidental acquisition of spelling knowledge and cognitive load during second language learning. A comparison between a click-on dictionary condition, a key-in dictionary condition, and a non-dictionary control condition for 45 Taiwanese students learning English as a foreign language revealed that learners who used a key-in dictionary invested more time in dictionary consultation than learners who used a click-on dictionary. However, on a subsequent unexpected spelling test the key-in group invested less time and performed better than the click-on group. The theoretical and practical implications of the results are discussed.
Tammy Sue Wynne Liu; Yeu Ting Liu; Chun-Yin Doris Chen
In: Interactive Learning Environments, vol. 27, no. 2, pp. 181–199, 2019.
This study employed eye-tracking technology to probe the online reading behavior of 52 advanced L2 English learners. These participants read an e-book containing six types of multimedia supports for either vocabulary acquisition or comprehension. The six supports consisted of three micro-level supports that provided information about specific words (glosses, vocabulary focus, and footnotes), and three macro-level supports that provided global or background information (illustrations, infographics, and photos). The participants read the e-book under two presentation modes: (1) simultaneous mode, where digital input and supports were presented at the same time; and (2) sequential mode, where the digital content and supports were presented incrementally. Analyses showed that when reading for vocabulary acquisition, vocabulary focus and glosses were significantly fixated on, and when reading for comprehension, illustrations were more intensely fixated on. Additionally, when the digital content was presented incrementally, vocabulary focus received significantly higher total fixation duration. This suggests that reading under the sequential mode has the potential to guide L2 learners' focal attention toward micro-level supports. In contrast, under the simultaneous presentation mode, L2 learners seemed to divide their focal attention among both micro-level and macro-level supports. Pedagogical implications are discussed based on the findings of this study.
In: PeerJ, vol. 7, pp. 1–15, 2019.
This article compares the differences in eye movements between orienteers of different skill levels during map information searches and explores the visual search patterns of orienteers during precise map reading, so as to characterize the cognitive features of orienteers' visual search. We recruited 44 orienteers at different skill levels (experts, advanced beginners, and novices), and recorded their behavioral responses and eye movement data while they read maps of different complexities. We found that map complexity (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, orienteers of higher competency tend to produce better route planning (i.e., a shorter route planning time, a longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated clear cognitive advantages in the ability to find key information. We also found that in the route-planning stage, expert orienteers and advanced beginners first paid attention to the checkpoint description table. The expert group extracted information faster and their attention was more concentrated, whereas the novice group paid less attention to the checkpoint description table and their gaze was scattered. Experts regarded the information in the checkpoint description table as the key to the problem and gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering.
Allison M Londerée; Megan E Roberts; Mary E Wewers; Ellen Peters; Amy K Ferketich; Dylan D Wagner
In: Tobacco Regulatory Science, vol. 4, no. 6, pp. 57–65, 2018.
Objectives: E-cigarettes are now the most commonly-used tobacco product among adolescents; yet, little work has examined how the appealing food and flavor cues used in their marketing might attract adolescents' attention, thereby increasing willingness to try these products. In the present study, we tested whether advertisements for fruit/sweet/savory-flavored (“flavored”) e-cigarettes attracted adolescent attention in real-world scenes more than tobacco-flavored (“unflavored”) e-cigarettes. Additionally, we examined the relationship between adolescent attentional bias and willingness to try flavored e-cigarettes. Methods: Participants were 46 adolescents (age range: 16-18 years). All participants took part in an eye-tracking paradigm that examined attentional bias to flavored and unflavored e-cigarette advertisements embedded in pictures of real-world storefront scenes. Afterwards, participants' willingness to try flavored and unflavored e-cigarettes was assessed. Results: In support of our primary hypothesis, adolescents looked longer and fixated more frequently on flavored (vs unflavored) e-cigarette advertisements. Moreover, this attentional bias towards flavored e-cigarette advertisements predicted a greater willingness to try flavored vs unflavored e-cigarettes. Conclusions: These findings suggest that flavored e-cigarette marketing attracts the attention of adolescents, increases their willingness to try flavored e-cigarette products, and could, therefore, put them at greater risk for tobacco initiation.
Joan López-Moliner; Eli Brenner
Flexible timing of eye movements when catching a ball Journal Article
In: Journal of Vision, vol. 16, no. 5, pp. 1–11, 2016.
In ball games, one cannot direct one's gaze at the ball all the time because one must also judge other aspects of the game, such as other players' positions. We wanted to know whether there are times at which obtaining information about the ball is particularly beneficial for catching it. We recently found that people could catch successfully if they saw any part of the ball's flight except the very end, when sensory-motor delays make it impossible to use new information. Nevertheless, there may be a preferred time to see the ball. We examined when six catchers would choose to look at the ball if they had to both catch the ball and find out what to do with it while the ball was approaching. A catcher and a thrower continuously threw a ball back and forth. We recorded their hand movements, the catcher's eye movements, and the ball's path. While the ball was approaching the catcher, information was provided on a screen about how the catcher should throw the ball back to the thrower (its peak height). This information disappeared just before the catcher caught the ball. Initially there was a slight tendency to look at the ball before looking at the screen but, later, most catchers tended to look at the screen before looking at the ball. Rather than being particularly eager to see the ball at a certain time, people appear to adjust their eye movements to the combined requirements of the task.
Zhenji Lu; Riender Happee; Joost C F de Winter
In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 72, pp. 211–225, 2020.
In highly automated driving, drivers occasionally need to take over control of the car due to limitations of the automated driving system. Research has shown that visually distracted drivers need about 7 s to regain situation awareness (SA). However, it is unknown whether the presence of a hazard affects SA. In the present experiment, 32 participants watched animated video clips from a driver's perspective while their eyes were recorded using eye-tracking equipment. The videos had lengths between 1 and 20 s and contained either no hazard or an impending crash in the form of a stationary car in the ego lane. After each video, participants had to (1) decide (no need to take over, evade left, evade right, brake only), (2) rate the danger of the situation, (3) rebuild the situation from a top-down perspective, and (4) rate the difficulty of the rebuilding task. The results showed that the hazard situations were experienced as more dangerous than the non-hazard situations, as inferred from self-reported danger and pupil diameter. However, there were no major differences in SA: hazard and non-hazard situations yielded equivalent speed and distance errors in the rebuilding task and equivalent self-reported difficulty scores. An exception occurred for the shortest time budget (1 s) videos, where participants showed impaired SA in the hazard condition, presumably because the threat inhibited participants from looking into the rear-view mirror. Correlations between measures of SA and decision-making accuracy were low to moderate. It is concluded that hazards do not substantially affect the global awareness of the traffic situation, except for short time budgets.
A system for tracking gaze on handheld devices Journal Article
In: Behavior Research Methods, vol. 38, no. 4, pp. 660–666, 2006.
Many of the current gaze-tracking systems require that a subject's head be stabilized and that the interface be fixed to a table. This article describes a prototype system for tracking gaze on the screen of mobile, handheld devices. The proposed system frees the user and the interface from previous constraints, allowing natural freedom of movement within the operational envelope of the system. The method is software-based, and integrates a commercial eye-tracking device (EyeLink I) with a magnetic positional tracking device (Polhemus FASTRAK). The evaluation of the system shows that it is capable of producing valid data with adequate accuracy.
Yan Luo; Ming Jiang; Yongkang Wong; Qi Zhao
Multi-camera saliency Journal Article
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 10, pp. 2057–2070, 2015.
A significant body of literature on saliency modeling predicts where humans look in a single image or video. Besides the scientific goal of understanding how information is fused from multiple visual sources to identify regions of interest in a holistic manner, there are tremendous engineering applications of multi-camera saliency due to the widespread deployment of cameras. This paper proposes a principled framework to smoothly integrate visual information from multiple views into a global scene map, and to employ a saliency algorithm incorporating high-level features to identify the most important regions by fusing visual information. The proposed method has the following key distinguishing features compared with its counterparts: (1) the proposed saliency detection is global (salient regions from one local view may not be important in a global context), (2) it does not require special camera deployment or overlapping fields of view, and (3) the key saliency algorithm is effective in highlighting interesting object regions even though no single object detector is used. Experiments on several data sets confirm the effectiveness of the proposed principled framework.
Shijian Luo; Yi Hu; Yuxiao Zhou
In: Frontiers of Computer Science, vol. 11, no. 2, pp. 290–306, 2017.
Smartphone applications (apps) are becoming increasingly popular all over the world, particularly in the Chinese Generation Y population; however, surprisingly, only a small number of studies on the app factors valued by this important group have been conducted. Because the competition among app developers is increasing, app factors that attract users' attention are worth studying for sales promotion. This paper examines these factors through two separate studies. In the first study (Experiment 1), a survey employing perceptual rating and verbal protocol methods is conducted, in which 90 randomly selected app websites are rated by 169 experienced smartphone users according to app attraction. Twelve of the most extremely rated apps (the six highest rated and the six lowest rated) are selected for further investigation, and 11 influential factors that Generation Y members value are listed. A second study (Experiment 2) is conducted using the most and least highly rated app websites from Experiment 1, with eye tracking and verbal protocol methods. The eye movements of 45 participants are tracked while they browse these websites, providing evidence about what attracts these users' attention and the order in which the app components are viewed. The results of these two studies suggest that Chinese Generation Y is a content-centric group when browsing the smartphone app marketplace. Icon, screenshot, price, rating, and name are the dominant and indispensable factors that influence purchase intentions, among which icon and screenshot should be meticulously designed. Price is another key factor that drives Chinese Generation Y's attention. The recommended apps are the least dominant element. Design suggestions for app websites are also proposed. This research has important implications.
Min-Yuan Ma; Hsien-Chih Chuang
In: International Journal of Technology and Design Education, vol. 27, no. 1, pp. 149–164, 2017.
Type design is the process of re-organizing visual elements and their corresponding meanings into a new organic entity, particularly for the highly logographic Chinese characters, whose intrinsic features are retained even after reorganization. Because of this advantage, designers believe that such a re-organization process will not affect Chinese character recognition. However, not having an effect on recognition is not the same as not affecting the viewing process, especially when the character is so highly deconstructed that, along with the viewing process, the original intention of the design and its efficacy are both indirectly affected. Therefore, besides capturing the changes in character features, a good type designer should understand how characters are viewed. Past studies have found that character structure affects character recognition, particularly for enclosed and non-enclosed characters, whose differences are significant, although the interpretation of such differences remains open for discussion. This study explored the viewing process of Chinese characters with eye-tracking methods and calculated fixation concentration and saccade amplitude during viewing using a descriptive approach from geographic information systems, so as to investigate the differences among types of character modules with a spatial dispersion index. This study found that overall vision when viewing enclosed structures is more concentrated than when viewing non-enclosed structures.
Xueer Ma; Xiangling Zhuang; Guojie Ma
In: Frontiers in Psychology, vol. 11, pp. 1–11, 2020.
Transparent windows on food packaging can effectively highlight the actual food inside. The present study examined whether food packaging with transparent windows (relative to packaging with food- and non-food graphic windows in the same position and of the same size) has more advantages in capturing consumer attention and determining consumers' willingness to purchase. In this study, college students were asked to evaluate prepackaged foods presented on a computer screen, and their eye movements were recorded. The results showed salience effects for both packaging with transparent and food-graphic windows, which were also moderated by food category. Both transparent and graphic packaging gained more viewing time than the non-food graphic baseline condition for all three selected products (i.e., nuts, preserved fruits, and instant cereals). However, no significant difference was found between the transparent and graphic window conditions. For preserved fruits, time to first fixation was shorter for transparent packaging than for the other conditions. For nuts, willingness to purchase was higher in both the transparent and graphic conditions than in the baseline condition, with packaging attractiveness playing a key role in mediating consumers' willingness to purchase. The implications for stakeholders and future research directions are discussed.
Joseph W MacInnes; Amelia R Hunt; Matthew D Hilchey; Raymond M Klein
Driving forces in free visual search: An ethology Journal Article
In: Attention, Perception, and Psychophysics, vol. 76, no. 2, pp. 280–295, 2014.
Visual search typically involves sequences of eye movements under the constraints of a specific scene and specific goals. Visual search has been used as an experimental paradigm to study the interplay of scene salience and top-down goals, as well as various aspects of vision, attention, and memory, usually by introducing a secondary task or by controlling and manipulating the search environment. An ethology is a study of an animal in its natural environment, and here we examine the fixation patterns of the human animal searching a series of challenging illustrated scenes that are well-known in popular culture. The search was free of secondary tasks, probes, and other distractions. Our goal was to describe saccadic behavior, including patterns of fixation duration, saccade amplitude, and angular direction. In particular, we employed both new and established techniques for identifying top-down strategies, any influences of bottom-up image salience, and the midlevel attentional effects of saccadic momentum and inhibition of return. The visual search dynamics that we observed and quantified demonstrate that saccades are not independently generated and incorporate distinct influences from strategy, salience, and attention. Sequential dependencies consistent with inhibition of return also emerged from our analyses.
Andrew K Mackenzie; Julie M Harris
In: Visual Cognition, vol. 23, no. 6, pp. 736–757, 2015.
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.
Andrew K Mackenzie; Julie M Harris
In: Journal of Experimental Psychology: Human Perception and Performance, vol. 43, no. 2, pp. 381–394, 2017.
The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limit the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed two variations of a multiple object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual's attentional function, effective eye movements and driving ability. We also discuss the use of a visuomotor task in assessing driving behaviour.
Jamal K Mansour; R C L Lindsay; Neil Brewer; Kevin G Munhall
Characterizing visual behaviour in a lineup task Journal Article
In: Applied Cognitive Psychology, vol. 23, no. 7, pp. 1012–1026, 2009.
Eye tracking was used to monitor participants' visual behaviour while viewing lineups in order to determine whether gaze behaviour predicted decision accuracy. Participants viewed taped crimes followed by simultaneous lineups. Participants (N = 34) viewed 4 target-present and 4 target-absent lineups. Decision time, number of fixations and duration of fixations differed for selections vs. non-selections. Correct and incorrect selections differed only in terms of comparison-type behaviour involving the selected face. Correct and incorrect non-selections could be distinguished by decision time, number of fixations and duration of fixations on the target or most-attended face and comparisons. Implications of visual behaviour for judgment strategy (relative vs. absolute) are discussed.
Mauro Marchitto; Leandro Luigi Di Stasi; José J Cañas
In: Human Factors and Ergonomics in Manufacturing & Service Industries, vol. 19, no. 6, pp. 1–13, 2011.
Traffic geometry is a factor that contributes to cognitive complexity in air traffic control. In conflict-detection tasks, geometry can affect the attentional effort necessary to correctly perceive and interpret the situation; online measures of situational workload are therefore highly desirable. In this study, we explored whether saccadic movements vary with changes in geometry. We created simple scenarios with two aircraft and simulated a conflict detection task. Independent variables were the conflict angle and the distance to convergence point. We hypothesized lower saccadic peak velocity (and longer duration) for increasing complexity, that is, for increasing conflict angles and for different distances to convergence point. Response times varied according to task complexity. Concerning saccades, there was a decrease of peak velocity (and a related increase of duration) with increased geometry complexity for large saccades (>15°). The data therefore suggest that geometry is able to influence "reaching" saccades and not "fixation" saccades.
Yousri Marzouki; Valériane Dusaucy; Myriam Chanceaux; Sebastiaan Mathôt
The World (of Warcraft) through the eyes of an expert Journal Article
In: PeerJ, vol. 5, pp. 1–21, 2017.
Negative correlations between pupil size and the tendency to look at salient locations were found in recent studies (e.g., Mathôt et al., 2015). It is hypothesized that this negative correlation might be explained by the mental effort participants invest in the task, which in turn leads to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because there is no available standard tool to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge about WoW and skills acquired through completed raids, highest rated battlegrounds, Skill Points, etc. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 designed video segments from the game that differed with regard to their content (i.e., informative locations) and visual complexity (i.e., salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations.
Hideyuki Matsumoto; Yasuo Terao; Akihiro Yugeta; Hideki Fukuda; Masaki Emoto; Toshiaki Furubayashi; Tomoko Okano; Ritsuko Hanajima; Yoshikazu Ugawa
In: PLoS ONE, vol. 6, no. 12, pp. e28928, 2011.
The aim of this study was to investigate where neurologists look when they view brain computed tomography (CT) images and to evaluate how they deploy their visual attention by comparing their gaze distribution with saliency maps. Brain CT images showing cerebrovascular accidents were presented to 12 neurologists and 12 control subjects. The subjects' ocular fixation positions were recorded using an eye-tracking device (Eyelink 1000). Heat maps were created based on the eye-fixation patterns of each group and compared between the two groups. The heat maps revealed that the areas on which control subjects frequently fixated often coincided with areas identified as outstanding in saliency maps, while the areas on which neurologists frequently fixated often did not. Dwell time in regions of interest (ROI) was likewise compared between the two groups, revealing that, although dwell time on large lesions was not different between the two groups, dwell time in clinically important areas with low salience was longer in neurologists than in controls. Therefore it appears that neurologists intentionally scan clinically important areas when reading brain CT images showing cerebrovascular accidents. Both neurologists and control subjects used the "bottom-up salience" form of visual attention, although the neurologists more effectively used the "top-down instruction" form.
Nadine Matton; Pierre Vincent Paubel; Sébastien Puma
Toward the use of pupillary responses for pilot selection Journal Article
In: Human Factors, pp. 1–13, 2020.
Objective: For selection practitioners, it seems important to assess the level of mental resources invested in order to perform a demanding task. In this study, we investigated the potential of pupil size measurement to discriminate the most proficient pilot students from the less proficient. Background: Cognitive workload is known to influence learning outcome. More specifically, cognitive difficulties observed during pilot training are often related to a lack of efficient mental workload management. Method: Twenty pilot students performed a laboratory multitasking scenario, composed of several stages with increasing workload, while their pupil size was recorded. Two levels of pilot students were compared according to the outcome after 2 years of training: high success and medium success. Results: Our findings suggested that task-evoked pupil size measurements could be a promising predictor of flight training difficulties during the 2-year training. Indeed, high-level pilot students showed greater pupil size changes from low-load to high-load stages of the multitasking scenario than medium-level pilot students. Moreover, average pupil diameters at the low-load stage were smallest for the high-level pilot students. Conclusion: Following the neural efficiency hypothesis framework, the most proficient pilot students supposedly used their mental resources more efficiently than the least proficient while performing the multitasking scenario. Application: These findings might introduce a new way of managing selection processes complemented with ocular measurements. More specifically, pupil size measurement could enable identification of applicants with greater chances of success during pilot training.
Olivia M Maynard; Marcus R Munafò; Ute Leonards
In: Addiction, vol. 108, no. 2, pp. 413–419, 2013.
AIMS: Previous research with adults indicates that plain packaging increases visual attention to health warnings in adult non-smokers and weekly smokers, but not daily smokers. The present research extends this study to adolescents aged 14-19 years. DESIGN: Mixed-model experimental design, with smoking status as a between-subjects factor and pack type (branded or plain pack) and eye gaze location (health warning or branding) as within-subjects factors. SETTING: Three secondary schools in Bristol, UK. PARTICIPANTS: A convenience sample of adolescents comprising never-smokers (n = 26), experimenters (n = 34), weekly smokers (n = 13) and daily smokers (n = 14). MEASUREMENTS: Number of eye movements to health warnings and branding on plain and branded packs. FINDINGS: Irrespective of smoking status, analysis of variance revealed more eye movements to health warnings than to branding on plain packs, but an equal number of eye movements to both regions on branded packs (P = 0.033). This was observed among experimenters (P < 0.001) and weekly smokers (P = 0.047), but not among never-smokers or daily smokers. CONCLUSION: Among experimenters and weekly smokers, plain packaging increases visual attention to health warnings and away from branding. Daily smokers, even relatively early in their smoking careers, seem to avoid the health warnings on cigarette packs. Adolescent never-smokers attend preferentially to the health warnings on both types of packs, a finding which may reflect their decision not to smoke.
Olivia M Maynard; Angela Attwood; Laura O'Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R Munafò
In: Drug and Alcohol Dependence, vol. 136, no. 1, pp. 170–174, 2014.
Background: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology. Method: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types were recorded. Results: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location. Conclusion: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. These findings have implications for cigarette packaging and health warning policy.
Olivia M Maynard; Jonathan C W Brooks; Marcus R Munafò; Ute Leonards
In: Addiction, vol. 112, no. 4, pp. 662–672, 2017.
Aims: To (1) test whether activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differs when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. Design: Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. Setting: Clinical Research and Imaging Centre, University of Bristol, UK. Participants: Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. Measurements: Brain activity was assessed in whole brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. On-line eye-tracking during scanning recorded visual attention to health warnings. Findings: There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). Conclusions: Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on 'plain' cigarette packs than on branded packs.
Jason S McCarley; Arthur F Kramer; Christopher D Wickens; Eric D Vidoni; Walter R Boot
Visual skills in airport security inspection Journal Article
In: Psychological Science, vol. 15, no. 5, pp. 302–306, 2004.
An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.