Feng Yi Tseng; Chin Jung Chao; Wen Yang Feng; Sheue Ling Hwang Effects of display modality on critical battlefield e-map search performance Journal Article Behaviour and Information Technology, 32 (9), pp. 888–901, 2013. @article{Tseng2013, title = {Effects of display modality on critical battlefield e-map search performance}, author = {Feng Yi Tseng and Chin Jung Chao and Wen Yang Feng and Sheue Ling Hwang}, doi = {10.1080/0144929X.2012.702286}, year = {2013}, date = {2013-01-01}, journal = {Behaviour and Information Technology}, volume = {32}, number = {9}, pages = {888--901}, abstract = {Visual search performance in visual display terminals can be affected by several changeable display parameters, such as the dimensions of screen, target size and background clutter. We found that when there was time pressure for operators to execute the critical battlefield map searching in a control room, efficient visual search became more important. We investigated the visual search performance in a simulated radar interface, which included the warrior symbology. Thirty-six participants were recruited and a three-factor mixed design was used in which the independent variables were three screen dimensions (7, 15 and 21 in.), five icon sizes (visual angle 40, 50, 60, 70 and 80 min of arc) and two map background clutter types (topography displayed [TD] and topography not displayed [TND]). The five dependent variables were completion time, accuracy, fixation duration, fixation count and saccade amplitude. The results showed that the best icon sizes were 80 and 70 min. The 21 in. screen dimension was chosen as the superior screen for search tasks. The TND map background with less clutters produced higher accuracy compared to that of TD background with clutter. 
The results of this research can be used in control room design to promote operators' visual search performance.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Philip R K Turnbull; John R Phillips Ocular effects of virtual reality headset wear in young adults Journal Article Scientific Reports, 7, pp. 16172, 2017. @article{Turnbull2017a, title = {Ocular effects of virtual reality headset wear in young adults}, author = {Philip R K Turnbull and John R Phillips}, doi = {10.1038/s41598-017-16320-6}, year = {2017}, date = {2017-01-01}, journal = {Scientific Reports}, volume = {7}, pages = {16172}, publisher = {Springer US}, abstract = {Virtual Reality (VR) headsets create immersion by displaying images on screens placed very close to the eyes, which are viewed through high powered lenses. Here we investigate whether this viewing arrangement alters the binocular status of the eyes, and whether it is likely to provide a stimulus for myopia development. We compared binocular status after 40-minute trials in indoor and outdoor environments, in both real and virtual worlds. We also measured the change in thickness of the ocular choroid, to assess the likely presence of signals for ocular growth and myopia development. We found that changes in binocular posture at distance and near, gaze stability, amplitude of accommodation and stereopsis were not different after exposure to each of the 4 environments. Thus, we found no evidence that the VR optical arrangement had an adverse effect on the binocular status of the eyes in the short term. Choroidal thickness did not change after either real world trial, but there was a significant thickening (≈10 microns) after each VR trial (p < 0.001). The choroidal thickening which we observed suggests that a VR headset may not be a myopiagenic stimulus, despite the very close viewing distances involved.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Yusuke Uchida; Daisuke Kudoh; Akira Murakami; Masaaki Honda; Shigeru Kitazawa Origins of superior dynamic visual acuity in baseball players: Superior eye movements or superior image processing Journal Article PLoS ONE, 7 (2), pp. e31530, 2012. @article{Uchida2012, title = {Origins of superior dynamic visual acuity in baseball players: Superior eye movements or superior image processing}, author = {Yusuke Uchida and Daisuke Kudoh and Akira Murakami and Masaaki Honda and Shigeru Kitazawa}, doi = {10.1371/journal.pone.0031530}, year = {2012}, date = {2012-01-01}, journal = {PLoS ONE}, volume = {7}, number = {2}, pages = {e31530}, abstract = {Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. 
These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Yusuke Uchida; Nobuaki Mizuguchi; Masaaki Honda; Kazuyuki Kanosue Prediction of shot success for basketball free throws: Visual search strategy Journal Article European Journal of Sport Science, 14 (5), pp. 426–432, 2014. @article{Uchida2014, title = {Prediction of shot success for basketball free throws: Visual search strategy}, author = {Yusuke Uchida and Nobuaki Mizuguchi and Masaaki Honda and Kazuyuki Kanosue}, doi = {10.1080/17461391.2013.866166}, year = {2014}, date = {2014-01-01}, journal = {European Journal of Sport Science}, volume = {14}, number = {5}, pages = {426--432}, abstract = {In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased.
These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Liis Uiga; Catherine M Capio; Donghyun Ryu; William R Young; Mark R Wilson; Thomson W L Wong; Andy C Y Tse; Rich S W Masters The role of movement-specific reinvestment in visuomotor control of walking by older adults Journal Article Journals of Gerontology - Series B Psychological Sciences and Social Sciences, 75 (2), pp. 282–292, 2020. @article{Uiga2020, title = {The role of movement-specific reinvestment in visuomotor control of walking by older adults}, author = {Liis Uiga and Catherine M Capio and Donghyun Ryu and William R Young and Mark R Wilson and Thomson W L Wong and Andy C Y Tse and Rich S W Masters}, doi = {10.1093/geronb/gby078}, year = {2020}, date = {2020-01-01}, journal = {Journals of Gerontology - Series B Psychological Sciences and Social Sciences}, volume = {75}, number = {2}, pages = {282--292}, abstract = {Objectives: The aim of this study was to examine the association between conscious monitoring and control of movements (i.e., movement-specific reinvestment) and visuomotor control during walking by older adults. Method: The Movement-Specific Reinvestment Scale (MSRS) was administered to 92 community-dwelling older adults, aged 65-81 years, who were required to walk along a 4.8-m walkway and step on the middle of a target as accurately as possible. Participants' movement kinematics and gaze behavior were measured during approach to the target and when stepping on it. Results: High scores on the MSRS were associated with prolonged stance and double support times during approach to the stepping target, and less accurate foot placement when stepping on the target. No associations between MSRS and gaze behavior were observed. Discussion: Older adults with a high propensity for movement-specific reinvestment seem to need more time to "plan" future stepping movements, yet show worse stepping accuracy than older adults with a low propensity for movement-specific reinvestment. 
Future research should examine whether older adults with a higher propensity for reinvestment are more likely to display movement errors that lead to falling.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Miguel A Vadillo; Chris N H Street; Tom Beesley; David R Shanks A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation Journal Article Behavior Research Methods, 47 (4), pp. 1365–1376, 2015. @article{Vadillo2015, title = {A simple algorithm for the offline recalibration of eye-tracking data through best-fitting linear transformation}, author = {Miguel A Vadillo and Chris N H Street and Tom Beesley and David R Shanks}, doi = {10.3758/s13428-014-0544-1}, year = {2015}, date = {2015-01-01}, journal = {Behavior Research Methods}, volume = {47}, number = {4}, pages = {1365--1376}, abstract = {Poor calibration and inaccurate drift correction can pose severe problems for eye-tracking experiments requiring high levels of accuracy and precision. We describe an algorithm for the offline correction of eye-tracking data. The algorithm conducts a linear transformation of the coordinates of fixations that minimizes the distance between each fixation and its closest stimulus. A simple implementation in MATLAB is also presented. We explore the performance of the correction algorithm under several conditions using simulated and real data, and show that it is particularly likely to improve data quality when many fixations are included in the fitting process.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
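The Vadillo et al. abstract describes the algorithm concretely enough to sketch: repeatedly assign each fixation to its closest stimulus, then refit a linear (affine) transform of the fixation coordinates by least squares. The following Python sketch illustrates that idea; it is not the authors' MATLAB implementation, and the function and parameter names are invented for illustration.

```python
import numpy as np

def recalibrate(fixations, stimuli, n_iter=10):
    """Offline recalibration in the spirit of Vadillo et al. (2015):
    find an affine transform of fixation coordinates that minimizes
    the distance between each fixation and its nearest stimulus.
    Names and the iteration count are illustrative assumptions."""
    fix = np.asarray(fixations, dtype=float)   # (n, 2) recorded x/y fixations
    stim = np.asarray(stimuli, dtype=float)    # (m, 2) stimulus centers
    # Homogeneous coordinates so the fitted transform includes a translation.
    X = np.hstack([fix, np.ones((len(fix), 1))])
    A = np.vstack([np.eye(2), np.zeros((1, 2))])  # start from the identity
    for _ in range(n_iter):
        mapped = X @ A
        # Assign each mapped fixation to its closest stimulus...
        d = np.linalg.norm(mapped[:, None, :] - stim[None, :, :], axis=2)
        targets = stim[np.argmin(d, axis=1)]
        # ...then refit the affine transform by least squares against those targets.
        A, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return X @ A, A
```

As the abstract notes, a fit like this is only likely to help when many fixations enter the fitting process; with few fixations the nearest-stimulus assignment can lock onto the wrong targets.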
Juan D Velásquez; Pablo Loyola; Gustavo Martinez; Kristofher Munoz; Pedro Maldanado; Andrés Couve; Pedro E Maldonado Combining eye tracking and pupillary dilation analysis to identify website key objects Journal Article Neurocomputing, 168, pp. 179–189, 2015. @article{Velasquez2015, title = {Combining eye tracking and pupillary dilation analysis to identify website key objects}, author = {Juan D Velásquez and Pablo Loyola and Gustavo Martinez and Kristofher Munoz and Pedro Maldanado and Andrés Couve and Pedro E Maldonado}, doi = {10.1016/j.neucom.2015.05.108}, year = {2015}, date = {2015-01-01}, journal = {Neurocomputing}, volume = {168}, pages = {179--189}, publisher = {Elsevier}, abstract = {Identifying the salient zones from Web interfaces, namely the Website Key Objects, is an essential part of the personalization process that current Web systems perform to increase user engagement. While several techniques have been proposed, most of them are focused on the use of Web usage logs. Only recently has the use of data from users' biological responses emerged as an alternative to enrich the analysis. In this work, a model is proposed to identify Website Key Objects that not only takes into account visual gaze activity, such as fixation time, but also the impact of pupil dilation. Our main hypothesis is that there is a strong relationship in terms of the pupil dynamics and the Web user preferences on a Web page. An empirical study was conducted on a real Website, from which the navigational activity of 23 subjects was captured using an eye tracking device.
Results showed that the inclusion of pupillary activity, although not conclusively, allows us to extract a more robust Web Object classification, achieving a 14% increment in the overall accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Boris M Velichkovsky; Sascha M Dornhoefer; Mathias Kopf; Jens R Helmert; Markus Joos Change detection and occlusion modes in road-traffic scenarios Journal Article Transportation Research Part F: Traffic Psychology and Behaviour, 5 (2), pp. 99–109, 2002. @article{Velichkovsky2002, title = {Change detection and occlusion modes in road-traffic scenarios}, author = {Boris M Velichkovsky and Sascha M Dornhoefer and Mathias Kopf and Jens R Helmert and Markus Joos}, doi = {10.1016/S1369-8478(02)00009-8}, year = {2002}, date = {2002-01-01}, journal = {Transportation Research Part F: Traffic Psychology and Behaviour}, volume = {5}, number = {2}, pages = {99--109}, abstract = {Change blindness phenomena are widely known in cognitive science, but their relation to driving is not quite clear. We report a study where subjects viewed colour video stills of natural traffic while eye movements were recorded. A change could occur randomly in three different occlusion modes (blinks, blanks and saccades) or during a fixation (as control condition). These changes could be either relevant or irrelevant with respect to the traffic safety. We used deletions as well as insertions of objects. All occlusion modes were equivalent concerning detection rate and reaction time, deviating from the control condition only. The detection of relevant changes was both more likely and faster than that of irrelevant ones, particularly for relevant insertions, which approached the base line level. Even in this case, it took about 180 ms longer to react to changes when they occurred during a saccade, blink or blank. In a second study, relevant insertions and the blank occlusion were used in a driving simulator environment. We found a surprising effect in the dynamic setting: an advantage in change detection rate and time with blanks compared to the control condition. Change detection was also good during blinks, but not in saccades.
Possible explanations of these effects and their practical implications are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Boris M Velichkovsky; Mikhail A Rumyantsev; Mikhail A Morozov New solution to the Midas Touch Problem: Identification of visual commands via extraction of focal fixations Journal Article Procedia Computer Science, 39, pp. 75–82, 2014. @article{Velichkovsky2014, title = {New solution to the Midas Touch Problem: Identification of visual commands via extraction of focal fixations}, author = {Boris M Velichkovsky and Mikhail A Rumyantsev and Mikhail A Morozov}, doi = {10.1016/j.procs.2014.11.012}, year = {2014}, date = {2014-01-01}, journal = {Procedia Computer Science}, volume = {39}, pages = {75--82}, publisher = {Elsevier Masson SAS}, abstract = {Reliable identification of intentional visual commands is a major problem in the development of eye-movements based user interfaces. This work suggests that the presence of focal visual fixations is indicative of visual commands. Two experiments are described which assessed the effectiveness of this approach in a simple gaze-control interface. Identification accuracy was shown to match that of the commonly used dwell time method. Using focal fixations led to less visual fatigue and higher speed of work. Perspectives of using focal fixations for identification of visual commands in various kinds of eye-movements based interfaces are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
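The Velichkovsky et al. (2014) abstract benchmarks focal-fixation identification against the commonly used dwell-time method. As background, a minimal dwell-time command detector might look like the sketch below; the function name, thresholds, and the `(t_ms, x, y)` sample format are assumptions for illustration, not taken from the paper.

```python
def detect_dwell_commands(samples, dwell_ms=600, radius_px=40):
    """Minimal dwell-time command detector (the baseline method the
    abstract compares against). A command fires when gaze stays within
    `radius_px` of a running centroid for at least `dwell_ms`.
    `samples` is an iterable of (t_ms, x, y) gaze samples."""
    commands = []
    anchor = None   # (start_time, centroid_x, centroid_y, sample_count)
    fired = False   # at most one command per continuous dwell
    for t, x, y in samples:
        if anchor is None:
            anchor, fired = (t, x, y, 1), False
            continue
        t0, cx, cy, n = anchor
        if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= radius_px:
            # Still dwelling: fold the new sample into the running centroid.
            anchor = (t0, (cx * n + x) / (n + 1), (cy * n + y) / (n + 1), n + 1)
            if not fired and t - t0 >= dwell_ms:
                commands.append((t, anchor[1], anchor[2]))
                fired = True
        else:
            # Gaze moved away: restart the dwell at the new location.
            anchor, fired = (t, x, y, 1), False
    return commands
```

The dwell threshold is the classic trade-off behind the Midas Touch Problem: a short `dwell_ms` turns ordinary inspection into unintended commands, while a long one slows interaction, which is the trade-off the paper's focal-fixation criterion aims to avoid.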
Pedro G Vieira; Matthew R Krause; Christopher C Pack tACS entrains neural activity while somatosensory input is blocked Journal Article PLoS Biology, 18 (10), pp. 1–14, 2020. @article{Vieira2020, title = {tACS entrains neural activity while somatosensory input is blocked}, author = {Pedro G Vieira and Matthew R Krause and Christopher C Pack}, doi = {10.1371/journal.pbio.3000834}, year = {2020}, date = {2020-01-01}, journal = {PLoS Biology}, volume = {18}, number = {10}, pages = {1--14}, abstract = {Transcranial alternating current stimulation (tACS) modulates brain activity by passing electrical current through electrodes that are attached to the scalp. Because it is safe and noninvasive, tACS holds great promise as a tool for basic research and clinical treatment. However, little is known about how tACS ultimately influences neural activity. One hypothesis is that tACS affects neural responses directly, by producing electrical fields that interact with the brain's endogenous electrical activity. By controlling the shape and location of these electric fields, one could target brain regions associated with particular behaviors or symptoms. However, an alternative hypothesis is that tACS affects neural activity indirectly, via peripheral sensory afferents. In particular, it has often been hypothesized that tACS acts on sensory fibers in the skin, which in turn provide rhythmic input to central neurons. In this case, there would be little possibility of targeted brain stimulation, as the regions modulated by tACS would depend entirely on the somatosensory pathways originating in the skin around the stimulating electrodes. Here, we directly test these competing hypotheses by recording single-unit activity in the hippocampus and visual cortex of alert monkeys receiving tACS. We find that tACS entrains neuronal activity in both regions, so that cells fire synchronously with the stimulation. 
Blocking somatosensory input with a topical anesthetic does not significantly alter these neural entrainment effects. These data are therefore consistent with the direct stimulation hypothesis and suggest that peripheral somatosensory stimulation is not required for tACS to entrain neurons.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Fernando Vilariño; Gerard Lacey; Jiang Zhou; Hugh Mulcahy; Stephen Patchett Automatic labeling of colonoscopy video for cancer detection Journal Article Pattern Recognition and Image Analysis, (1), pp. 290–297, 2007. @article{Vilarino2007, title = {Automatic labeling of colonoscopy video for cancer detection}, author = {Fernando Vilari{\~n}o and Gerard Lacey and Jiang Zhou and Hugh Mulcahy and Stephen Patchett}, doi = {10.1007/978-3-540-72847-4_38}, year = {2007}, date = {2007-01-01}, journal = {Pattern Recognition and Image Analysis}, number = {1}, pages = {290--297}, abstract = {The labeling of large quantities of medical video data by clinicians is a tedious and time consuming task. In addition, the labeling process itself is rigid, since it requires the expert's interaction to classify image contents into a limited number of predetermined categories. This paper describes an architecture to accelerate the labeling step using eye movement tracking data. We report some initial results in training a Support Vector Machine (SVM) to detect cancer polyps in colonoscopy video, and a further analysis of their categories in the feature space using Self Organizing Maps (SOM). Our overall hypothesis is that the clinician's eye will be drawn to the salient features of the image and that sustained fixations will be associated with those features that are associated with disease states.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Margarita Vinnikov; Robert S Allison; Suzette Fernandes Impact of depth of field simulation on visual fatigue: Who are impacted? and how? Journal Article International Journal of Human-Computer Studies, 91 , pp. 37–51, 2016. @article{Vinnikov2016, title = {Impact of depth of field simulation on visual fatigue: Who are impacted? and how?}, author = {Margarita Vinnikov and Robert S Allison and Suzette Fernandes}, doi = {10.1016/j.ijhcs.2016.03.001}, year = {2016}, date = {2016-01-01}, journal = {International Journal of Human-Computer Studies}, volume = {91}, pages = {37--51}, publisher = {Elsevier}, abstract = {While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. 
Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all.}, keywords = {}, pubstate = {published}, tppubtype = {article} } While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. 
There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all. |
Andrej Vlasenko; Tadas Limba; Mindaugas Kiškis; Gintarė Gulevičiūtė Research on human emotion while playing a computer game using pupil recognition technology. Journal Article TEM Journal, 5 (4), pp. 417–423, 2016. @article{Vlasenko2016, title = {Research on human emotion while playing a computer game using pupil recognition technology.}, author = {Andrej Vlasenko and Tadas Limba and Mindaugas Kiškis and Gintarė Gulevi{č}iūtė}, doi = {10.18421/TEM54-02}, year = {2016}, date = {2016-01-01}, journal = {TEM Journal}, volume = {5}, number = {4}, pages = {417--423}, abstract = {The article presents the results of an experiment during which the participants were playing an online game (poker), and while playing the game, a special video cam was recording the diameters of the player's eye pupils. Diameter data and calculations were based on these records with the aid of a computer program; then, diagrams of the diameter changes in the players' pupils were created (built) depending on the game situation. The study was conducted in a real life situation, when the players were playing online poker. The results of the study point out the connection between the changes in the psycho-emotional state of the players and the changes in their pupil diameters, where the emotional state is a critical factor affecting the operation of such systems.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The article presents the results of an experiment during which the participants were playing an online game (poker), and while playing the game, a special video cam was recording the diameters of the player's eye pupils. Diameter data and calculations were based on these records with the aid of a computer program; then, diagrams of the diameter changes in the players' pupils were created (built) depending on the game situation. The study was conducted in a real life situation, when the players were playing online poker. 
The results of the study point out the connection between the changes in the psycho-emotional state of the players and the changes in their pupil diameters, where the emotional state is a critical factor affecting the operation of such systems. |
Jorrig Vogels; David M Howcroft; Elli Tourtouri; Vera Demberg How speakers adapt object descriptions to listeners under load Journal Article Language, Cognition and Neuroscience, 35 (1), pp. 78–92, 2020. @article{Vogels2020, title = {How speakers adapt object descriptions to listeners under load}, author = {Jorrig Vogels and David M Howcroft and Elli Tourtouri and Vera Demberg}, doi = {10.1080/23273798.2019.1648839}, year = {2020}, date = {2020-01-01}, journal = {Language, Cognition and Neuroscience}, volume = {35}, number = {1}, pages = {78--92}, publisher = {Taylor & Francis}, abstract = {A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. 
Hypothesising that a consideration of the listener's needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener's reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener's cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener's needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task. |
Robin Walker An iPad app as a low-vision aid for people with macular disease Journal Article British Journal of Ophthalmology, 97 (1), pp. 110–112, 2013. @article{Walker2013, title = {An iPad app as a low-vision aid for people with macular disease}, author = {Robin Walker}, doi = {10.1136/bjophthalmol-2012-302415}, year = {2013}, date = {2013-01-01}, journal = {British Journal of Ophthalmology}, volume = {97}, number = {1}, pages = {110--112}, abstract = {Age-related macular degeneration (AMD) is the single most common cause of vision loss in people over the age of 50. Individuals with low vision caused by macular disease experience severe difficulty with everyday tasks such as reading, which has profound detrimental consequences for their quality of life. We have developed an app for the iPad (the MD evReader) that aims to improve reading (of electronic books) by enhancing the effectiveness of the eccentric viewing technique (EV) using dynamic text presentation. Eccentric viewing is a simple strategy adopted by individuals with AMD that involves using the relatively preserved peripheral region of their retina in order to see. A limiting factor of the EV technique is that it relies on the individual holding their gaze away from the focus of interest and suppressing the natural and strong tendency to make eye-movements (saccades). During normal reading, for example, a stereotypical pattern of horizontal saccades is made, from left-to-right, enabling fixations to be made on each word (Figure 1a). The natural inclination to make saccades is, however, difficult to suppress and limits the effectiveness of eccentric viewing in people with macular disease.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Age-related macular degeneration (AMD) is the single most common cause of vision loss in people over the age of 50. 
Individuals with low vision caused by macular disease experience severe difficulty with everyday tasks such as reading, which has profound detrimental consequences for their quality of life. We have developed an app for the iPad (the MD evReader) that aims to improve reading (of electronic books) by enhancing the effectiveness of the eccentric viewing technique (EV) using dynamic text presentation. Eccentric viewing is a simple strategy adopted by individuals with AMD that involves using the relatively preserved peripheral region of their retina in order to see. A limiting factor of the EV technique is that it relies on the individual holding their gaze away from the focus of interest and suppressing the natural and strong tendency to make eye-movements (saccades). During normal reading, for example, a stereotypical pattern of horizontal saccades is made, from left-to-right, enabling fixations to be made on each word (Figure 1a). The natural inclination to make saccades is, however, difficult to suppress and limits the effectiveness of eccentric viewing in people with macular disease. |
Jyun Cheng Wang; Rong-Fuh Day The effects of attention inertia on advertisements on the WWW Journal Article Computers in Human Behavior, 23 (3), pp. 1390–1407, 2007. @article{Wang2007b, title = {The effects of attention inertia on advertisements on the WWW}, author = {Jyun Cheng Wang and Rong-Fuh Day}, doi = {10.1016/j.chb.2004.12.014}, year = {2007}, date = {2007-01-01}, journal = {Computers in Human Behavior}, volume = {23}, number = {3}, pages = {1390--1407}, abstract = {When a viewer browses a web site, one presumably performs the task of seeking information from a sequence of scattered web pages to form a meaningful path. The aim of this study is to explore changes in the distribution of attention to banner advertisements as a viewer advances along a meaningful path and their effects on the advertisements. With the aid of an instrument called an eye-tracker, a laboratory experiment was conducted to observe directly the attention that subjects allocate along meaningful paths. Our results show that at different levels of depth in a meaningful path, the amount of attention allocated to the content of a web page is not the same, regardless of whether attention indexes were based on dwell time or the number of fixations. Theoretically, this experiment successfully generalizes the attentional inertia theory to the web environment and elaborates web advertising research by involving a significant web structural factor. In practice, these findings hint that web advertising located in the earlier and later phases of a path should be priced higher than advertising in the middle phases because, during these two phases, the audience is more sensitive to the peripheral advertising.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When a viewer browses a web site, one presumably performs the task of seeking information from a sequence of scattered web pages to form a meaningful path. 
The aim of this study is to explore changes in the distribution of attention to banner advertisements as a viewer advances along a meaningful path and their effects on the advertisements. With the aid of an instrument called an eye-tracker, a laboratory experiment was conducted to observe directly the attention that subjects allocate along meaningful paths. Our results show that at different levels of depth in a meaningful path, the amount of attention allocated to the content of a web page is not the same, regardless of whether attention indexes were based on dwell time or the number of fixations. Theoretically, this experiment successfully generalizes the attentional inertia theory to the web environment and elaborates web advertising research by involving a significant web structural factor. In practice, these findings hint that web advertising located in the earlier and later phases of a path should be priced higher than advertising in the middle phases because, during these two phases, the audience is more sensitive to the peripheral advertising. |
Sheng-Ming Wang Integrating service design and eye tracking insight for designing smart TV user interfaces Journal Article International Journal of Advanced Computer Science and Applications, 6 (7), pp. 163–171, 2015. @article{Wang2015a, title = {Integrating service design and eye tracking insight for designing smart TV user interfaces}, author = {Sheng-Ming Wang}, year = {2015}, date = {2015-01-01}, journal = {International Journal of Advanced Computer Science and Applications}, volume = {6}, number = {7}, pages = {163--171}, abstract = {This research proposes a process that integrates the service design method and eye tracking insights for designing a Smart TV user interface. The Service Design method, which combines quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including the effectiveness and efficiency testing data obtained from eye tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results of this research demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces. It can also help make the design of Smart TV user interfaces relate to users' behaviors and needs, thereby improving the affordance of the design. Future studies may analyse the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This research proposes a process that integrates the service design method and eye tracking insights for designing a Smart TV user interface. 
The Service Design method, which combines quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including the effectiveness and efficiency testing data obtained from eye tracking experiments with six participants, provides the information for analysing the affordance of these design mockups. The results of this research demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces. It can also help make the design of Smart TV user interfaces relate to users' behaviors and needs, thereby improving the affordance of the design. Future studies may analyse the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface. |
Jian Wang; Ryoichi Ohtsuka; Kimihiro Yamanaka Relation between mental workload and visual information processing Journal Article Procedia Manufacturing, 3 , pp. 5308–5312, 2015. @article{Wang2015c, title = {Relation between mental workload and visual information processing}, author = {Jian Wang and Ryoichi Ohtsuka and Kimihiro Yamanaka}, doi = {10.1016/j.promfg.2015.07.625}, year = {2015}, date = {2015-01-01}, journal = {Procedia Manufacturing}, volume = {3}, pages = {5308--5312}, publisher = {Elsevier B.V.}, abstract = {The aim of this study is to clarify the relation between mental workload and the function of visual information processing. To examine the mental workload (MWL) relative to the size of the useful field of view (UFOV), an experiment was conducted with 12 participants (ages 21–23). In the primary task, participants responded to visual markers appearing in a computer display. The UFOV and the results of the secondary task for MWL were measured. In the MWL task, participants solved numerical operations designed to increase MWL. The experimental conditions in this task were divided into three categories (Repeat Aloud, Addition, and No Task), where No Task meant no mental task was given. MWL was changed in a stepwise manner. The quantitative assessment confirmed that the UFOV narrows with the increase in the MWL.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of this study is to clarify the relation between mental workload and the function of visual information processing. To examine the mental workload (MWL) relative to the size of the useful field of view (UFOV), an experiment was conducted with 12 participants (ages 21–23). In the primary task, participants responded to visual markers appearing in a computer display. The UFOV and the results of the secondary task for MWL were measured. In the MWL task, participants solved numerical operations designed to increase MWL. 
The experimental conditions in this task were divided into three categories (Repeat Aloud, Addition, and No Task), where No Task meant no mental task was given. MWL was changed in a stepwise manner. The quantitative assessment confirmed that the UFOV narrows with the increase in the MWL. |
Xi Wang; Bin Cai; Yang Cao; Chen Zhou; Le Yang; Runzhong Liu; Xiaojing Long; Weicai Wang; Dingguo Gao; Baicheng Bao Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study Journal Article American Journal of Orthodontics and Dentofacial Orthopedics, 150 (4), pp. 601–610, 2016. @article{Wang2016b, title = {Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study}, author = {Xi Wang and Bin Cai and Yang Cao and Chen Zhou and Le Yang and Runzhong Liu and Xiaojing Long and Weicai Wang and Dingguo Gao and Baicheng Bao}, doi = {10.1016/j.ajodo.2016.03.028}, year = {2016}, date = {2016-01-01}, journal = {American Journal of Orthodontics and Dentofacial Orthopedics}, volume = {150}, number = {4}, pages = {601--610}, publisher = {American Association of Orthodontists}, abstract = {Introduction Currently, few methods are available to measure orthodontic treatment need and treatment outcome from the lay perspective. The objective of this study was to explore the function of an eye-tracking method to evaluate orthodontic treatment need and treatment outcome from the lay perspective as a novel and objective way when compared with traditional assessments. Methods The scanpaths of 88 laypersons observing the repose and smiling photographs of normal subjects and pretreatment and posttreatment malocclusion patients were recorded by an eye-tracking device. The total fixation time and the first fixation time on the areas of interest (eyes, nose, and mouth) for each group of faces were compared and analyzed using mixed-effects linear regression and a support vector machine. The aesthetic component of the Index of Orthodontic Treatment Need was used to categorize treatment need and outcome levels to determine the accuracy of the support vector machine in identifying these variables. 
Results Significant deviations in the scanpaths of laypersons viewing pretreatment smiling faces were noted, with less fixation time (P < 0.05) and later attention capture (P < 0.05) on the eyes, and more fixation time (P < 0.05) and earlier attention capture (P < 0.05) on the mouth than for the scanpaths of laypersons viewing normal smiling subjects. The same results were obtained when comparing posttreatment smiling patients, with less fixation time (P < 0.05) and later attention capture on the eyes (P < 0.05), and more fixation time (P < 0.05) and earlier attention capture on the mouth (P < 0.05). The pretreatment repose faces exhibited an earlier attention capture on the mouth than did the normal subjects (P < 0.05) and posttreatment patients (P < 0.05). Linear support vector machine classification showed accuracies of 97.2% and 93.4% in distinguishing pretreatment patients from normal subjects (treatment need), and pretreatment patients from posttreatment patients (treatment outcome), respectively. Conclusions The eye-tracking device was able to objectively quantify the effect of malocclusion on facial perception and the impact of orthodontic treatment on malocclusion from the lay perspective. The support vector machine for classification of selected features achieved high accuracy of judging treatment need and treatment outcome. This approach may represent a new method for objectively evaluating orthodontic treatment need and treatment outcome from the perspective of laypersons.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Introduction Currently, few methods are available to measure orthodontic treatment need and treatment outcome from the lay perspective. 
The objective of this study was to explore the function of an eye-tracking method to evaluate orthodontic treatment need and treatment outcome from the lay perspective as a novel and objective way when compared with traditional assessments. Methods The scanpaths of 88 laypersons observing the repose and smiling photographs of normal subjects and pretreatment and posttreatment malocclusion patients were recorded by an eye-tracking device. The total fixation time and the first fixation time on the areas of interest (eyes, nose, and mouth) for each group of faces were compared and analyzed using mixed-effects linear regression and a support vector machine. The aesthetic component of the Index of Orthodontic Treatment Need was used to categorize treatment need and outcome levels to determine the accuracy of the support vector machine in identifying these variables. Results Significant deviations in the scanpaths of laypersons viewing pretreatment smiling faces were noted, with less fixation time (P < 0.05) and later attention capture (P < 0.05) on the eyes, and more fixation time (P < 0.05) and earlier attention capture (P < 0.05) on the mouth than for the scanpaths of laypersons viewing normal smiling subjects. The same results were obtained when comparing posttreatment smiling patients, with less fixation time (P < 0.05) and later attention capture on the eyes (P < 0.05), and more fixation time (P < 0.05) and earlier attention capture on the mouth (P < 0.05). The pretreatment repose faces exhibited an earlier attention capture on the mouth than did the normal subjects (P < 0.05) and posttreatment patients (P < 0.05). Linear support vector machine classification showed accuracies of 97.2% and 93.4% in distinguishing pretreatment patients from normal subjects (treatment need), and pretreatment patients from posttreatment patients (treatment outcome), respectively. 
Conclusions The eye-tracking device was able to objectively quantify the effect of malocclusion on facial perception and the impact of orthodontic treatment on malocclusion from the lay perspective. The support vector machine for classification of selected features achieved high accuracy of judging treatment need and treatment outcome. This approach may represent a new method for objectively evaluating orthodontic treatment need and treatment outcome from the perspective of laypersons. |
Jiahui Wang; Pavlo Antonenko; Mehmet Celepkolu; Yerika Jimenez; Ethan Fieldman; Ashley Fieldman Exploring relationships between eye tracking and traditional usability testing data Journal Article International Journal of Human-Computer Interaction, pp. 1–12, 2018. @article{Wang2018d, title = {Exploring relationships between eye tracking and traditional usability testing data}, author = {Jiahui Wang and Pavlo Antonenko and Mehmet Celepkolu and Yerika Jimenez and Ethan Fieldman and Ashley Fieldman}, doi = {10.1080/10447318.2018.1464776}, year = {2018}, date = {2018-01-01}, journal = {International Journal of Human-Computer Interaction}, pages = {1--12}, publisher = {Taylor & Francis}, abstract = {This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. 
Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. 
Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks. |
Hongyan Wang; Zhongling Pi; Weiping Hu The instructor's gaze guidance in video lectures improves learning Journal Article Journal of Computer Assisted Learning, 35 (1), pp. 42–50, 2019. @article{Wang2019c, title = {The instructor's gaze guidance in video lectures improves learning}, author = {Hongyan Wang and Zhongling Pi and Weiping Hu}, doi = {10.1111/jcal.12309}, year = {2019}, date = {2019-01-01}, journal = {Journal of Computer Assisted Learning}, volume = {35}, number = {1}, pages = {42--50}, abstract = {Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours can optimize learning. 
We used eye-tracking technology and questionnaires to test whether the instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance and procedural knowledge with and without the instructor's gaze guidance. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to corresponding learning content but also increased learners' sense of social presence and learning. Furthermore, the link between the instructor's gaze guidance and better learning was especially strong for participants with a high sense of social connection with the instructor when they learned procedural knowledge. The findings lead to a strong recommendation for educational practitioners: Instructors should provide gaze guidance in video lectures for better learning performance. |
Zepeng Wang; Ping Li; Luming Zhang; Ling Shao Community-aware photo quality evaluation by deeply encoding human perception Journal Article IEEE Transactions on Multimedia, pp. 1–11, 2019. @article{Wang2019l, title = {Community-aware photo quality evaluation by deeply encoding human perception}, author = {Zepeng Wang and Ping Li and Luming Zhang and Ling Shao}, doi = {10.1109/TMM.2019.2938664}, year = {2019}, date = {2019-01-01}, journal = {IEEE Transactions on Multimedia}, pages = {1--11}, publisher = {IEEE}, abstract = {Computational photo quality evaluation is a useful technique in many tasks of computer vision and graphics, e.g., photo retargeting, 3D rendering, and fashion recommendation. Conventional photo quality models are designed by characterizing pictures from all communities (e.g., “architecture” and “colorful”) indiscriminately, wherein community-specific features are not encoded explicitly. In this work, we develop a new community-aware photo quality evaluation framework. It uncovers the latent community-specific topics by a regularized latent topic model (LTM), and captures human visual quality perception by exploring multiple attributes. More specifically, given massive-scale online photos from multiple communities, a novel ranking algorithm is proposed to measure the visual/semantic attractiveness of regions inside each photo. Meanwhile, three attributes: photo quality scores, weak semantic tags, and inter-region correlations, are seamlessly and collaboratively incorporated during ranking. Subsequently, we construct a gaze shifting path (GSP) for each photo by sequentially linking the top-ranking regions from each photo, and an aggregation-based deep CNN calculates the deep representation for each GSP. Based on this, an LTM is proposed to model the GSP distribution from multiple communities in the latent space. To mitigate the overfitting problem caused by communities with very few photos, a regularizer is added into our LTM. 
Finally, given a test photo, we obtain its deep GSP representation and its quality score is determined by the posterior probability of the regularized LTM. Comprehensive comparative studies on four image sets have shown the competitiveness of our method. Besides, eye tracking experiments demonstrated that our ranking-based GSPs are highly consistent with real human gaze movements.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Computational photo quality evaluation is a useful technique in many tasks of computer vision and graphics, e.g., photo retargeting, 3D rendering, and fashion recommendation. Conventional photo quality models are designed by characterizing pictures from all communities (e.g., “architecture” and “colorful”) indiscriminately, wherein community-specific features are not encoded explicitly. In this work, we develop a new community-aware photo quality evaluation framework. It uncovers the latent community-specific topics by a regularized latent topic model (LTM), and captures human visual quality perception by exploring multiple attributes. More specifically, given massive-scale online photos from multiple communities, a novel ranking algorithm is proposed to measure the visual/semantic attractiveness of regions inside each photo. Meanwhile, three attributes: photo quality scores, weak semantic tags, and inter-region correlations, are seamlessly and collaboratively incorporated during ranking. Subsequently, we construct a gaze shifting path (GSP) for each photo by sequentially linking the top-ranking regions from each photo, and an aggregation-based deep CNN calculates the deep representation for each GSP. Based on this, an LTM is proposed to model the GSP distribution from multiple communities in the latent space. To mitigate the overfitting problem caused by communities with very few photos, a regularizer is added into our LTM. 
Finally, given a test photo, we obtain its deep GSP representation and its quality score is determined by the posterior probability of the regularized LTM. Comprehensive comparative studies on four image sets have shown the competitiveness of our method. Besides, eye tracking experiments demonstrated that our ranking-based GSPs are highly consistent with real human gaze movements. |
Åsa Wengelin; Mark Torrance; Kenneth Holmqvist; Sol Simpson; David Galbraith; Victoria Johansson; Roger Johansson Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production Journal Article Behavior Research Methods, 41 (2), pp. 337–351, 2009. @article{Wengelin2009, title = {Combined eyetracking and keystroke-logging methods for studying cognitive processes in text production}, author = {Åsa Wengelin and Mark Torrance and Kenneth Holmqvist and Sol Simpson and David Galbraith and Victoria Johansson and Roger Johansson}, doi = {10.3758/BRM.41.2.337}, year = {2009}, date = {2009-01-01}, journal = {Behavior Research Methods}, volume = {41}, number = {2}, pages = {337--351}, abstract = {Writers typically spend a certain proportion of time looking back over the text that they have written. This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog+ TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog+ TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Writers typically spend a certain proportion of time looking back over the text that they have written. 
This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog+ TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog+ TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production. |
Bogusława Whyatt In search of directionality effects in the translation process and in the end product Journal Article Translation, Cognition & Behavior, 2 (1), pp. 79–100, 2019. @article{Whyatt2019, title = {In search of directionality effects in the translation process and in the end product}, author = {Bogus{ł}awa Whyatt}, doi = {10.1075/tcb.00020.why}, year = {2019}, date = {2019-01-01}, journal = {Translation, Cognition & Behavior}, volume = {2}, number = {1}, pages = {79--100}, abstract = {This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This article tackles directionality as one of the most contentious issues in translation studies, still without solid empirical footing. 
The research presented here shows that, to understand directionality effects on the process of translation and its end product, performance in L2 → L1 and L1 → L2 translation needs to be compared in a specific setting in which more factors than directionality are considered, especially text type. For 26 professional translators who participated in an experimental study, L1 → L2 translation did not take significantly more time than L2 → L1 translation and the end products of both needed improvement from proofreaders who are native speakers of the target language. A close analysis of corrections made by the proofreaders shows that different aspects of translation quality are affected by directionality. A case study of two translators who produced high quality L1 → L2 translations reveals that their performance was affected more by text type than by directionality. |
Lauren H Williams; Trafton Drew Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions? Journal Article Cognitive Research: Principles and Implications, 2 (1), pp. 12, 2017. @article{Williams2017b, title = {Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions?}, author = {Lauren H Williams and Trafton Drew}, doi = {10.1186/s41235-017-0050-y}, year = {2017}, date = {2017-01-01}, journal = {Cognitive Research: Principles and Implications}, volume = {2}, number = {1}, pages = {12}, publisher = {Cognitive Research: Principles and Implications}, abstract = {Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design in order to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. In half of the cases, search was interrupted by a series of true or false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye-tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions will be discussed in the context of these results.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Observational studies have shown that interruptions are a frequent occurrence in diagnostic radiology. The present study used an experimental design in order to quantify the cost of these interruptions during search through volumetric medical images. Participants searched through chest CT scans for nodules that are indicative of lung cancer. 
In half of the cases, search was interrupted by a series of true or false math equations. The primary cost of these interruptions was an increase in search time with no corresponding increase in accuracy or lung coverage. This time cost was not modulated by the difficulty of the interruption task or an individual's working memory capacity. Eye-tracking suggests that this time cost was driven by impaired memory for which regions of the lung were searched prior to the interruption. Potential interventions will be discussed in the context of these results. |
Lauren Williams; Ann Carrigan; William Auffermann; Megan Mills; Anina Rich; Joann Elmore; Trafton Drew The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology Journal Article Psychonomic Bulletin & Review, pp. 1–9, 2020. @article{Williams2020, title = {The invisible breast cancer: Experience does not protect against inattentional blindness to clinically relevant findings in radiology}, author = {Lauren Williams and Ann Carrigan and William Auffermann and Megan Mills and Anina Rich and Joann Elmore and Trafton Drew}, doi = {10.3758/s13423-020-01826-4}, year = {2020}, date = {2020-01-01}, journal = {Psychonomic Bulletin & Review}, pages = {1--9}, publisher = {Psychonomic Bulletin & Review}, abstract = {Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. 
In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Retrospectively obvious events are frequently missed when attention is engaged in another task—a phenomenon known as inattentional blindness. Although the task characteristics that predict inattentional blindness rates are relatively well understood, the observer characteristics that predict inattentional blindness rates are largely unknown. Previously, expert radiologists showed a surprising rate of inattentional blindness to a gorilla photoshopped into a CT scan during lung-cancer screening. However, inattentional blindness rates were higher for a group of naïve observers performing the same task, suggesting that perceptual expertise may provide protection against inattentional blindness. Here, we tested whether expertise in radiology predicts inattentional blindness rates for unexpected abnormalities that were clinically relevant. Fifty radiologists evaluated CT scans for lung cancer. The final case contained a large (9.1 cm) breast mass and lymphadenopathy. When their attention was focused on searching for lung nodules, 66% of radiologists did not detect breast cancer and 30% did not detect lymphadenopathy. In contrast, only 3% and 10% of radiologists (N = 30), respectively, missed these abnormalities in a follow-up study when searching for a broader range of abnormalities. Neither experience, primary task performance, nor search behavior predicted which radiologists missed the unexpected abnormalities. 
These findings suggest perceptual expertise does not protect against inattentional blindness, even for unexpected stimuli that are within the domain of expertise. |
Louis Williams; Eugene McSorley; Rachel McCloy Enhanced associations with actions of the artist influence gaze behaviour Journal Article i-Perception, 11 (2), pp. 1–25, 2020. @article{Williams2020a, title = {Enhanced associations with actions of the artist influence gaze behaviour}, author = {Louis Williams and Eugene McSorley and Rachel McCloy}, doi = {10.1177/2041669520911059}, year = {2020}, date = {2020-01-01}, journal = {i-Perception}, volume = {11}, number = {2}, pages = {1--25}, abstract = {The aesthetic experience of the perceiver of art has been suggested to relate to the art-making process of the artist. The artist's gestures during the creation process have been stated to influence the perceiver's art-viewing experience. However, limited studies explore the art-viewing experience in relation to the creative process of the artist. We introduced eye-tracking measures to further establish how congruent actions with the artist influence perceiver's gaze behaviour. Experiments 1 and 2 showed that simultaneous congruent and incongruent actions do not influence gaze behaviour. However, brushstroke paintings were found to be more pleasing than pointillism paintings. In Experiment 3, participants were trained to associate painting actions with hand primes to enhance visuomotor and visuovisual associations with the artist's actions. A greater amount of time was spent fixating brushstroke paintings when presented with a congruent prime compared with an incongruent prime, and fewer fixations were made to these styles of paintings when presented with an incongruent prime. The results suggest that explicit links that allow perceivers to resonate with the artist's actions lead to greater exploration of preferred artwork styles.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aesthetic experience of the perceiver of art has been suggested to relate to the art-making process of the artist. 
The artist's gestures during the creation process have been stated to influence the perceiver's art-viewing experience. However, limited studies explore the art-viewing experience in relation to the creative process of the artist. We introduced eye-tracking measures to further establish how congruent actions with the artist influence perceiver's gaze behaviour. Experiments 1 and 2 showed that simultaneous congruent and incongruent actions do not influence gaze behaviour. However, brushstroke paintings were found to be more pleasing than pointillism paintings. In Experiment 3, participants were trained to associate painting actions with hand primes to enhance visuomotor and visuovisual associations with the artist's actions. A greater amount of time was spent fixating brushstroke paintings when presented with a congruent prime compared with an incongruent prime, and fewer fixations were made to these styles of paintings when presented with an incongruent prime. The results suggest that explicit links that allow perceivers to resonate with the artist's actions lead to greater exploration of preferred artwork styles. |
Matthew B Winn Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants Journal Article Trends in Hearing, 20 , 2016. @article{Winn2016, title = {Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants}, author = {Matthew B Winn}, doi = {10.1177/2331216516669723}, year = {2016}, date = {2016-01-01}, journal = {Trends in Hearing}, volume = {20}, abstract = {People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. 
Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability.}, keywords = {}, pubstate = {published}, tppubtype = {article} } People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. 
Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. |
Andi K Winterboer; Martin I Tietze; Maria K Wolters; Johanna D Moore The user model-based summarize and refine approach improves information presentation in spoken dialog systems Journal Article Computer Speech and Language, 25 (2), pp. 175–191, 2011. @article{Winterboer2011, title = {The user model-based summarize and refine approach improves information presentation in spoken dialog systems}, author = {Andi K Winterboer and Martin I Tietze and Maria K Wolters and Johanna D Moore}, doi = {10.1016/j.csl.2010.04.003}, year = {2011}, date = {2011-01-01}, journal = {Computer Speech and Language}, volume = {25}, number = {2}, pages = {175--191}, publisher = {Elsevier Ltd}, abstract = {A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., flight, hotel, and restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experimental paradigm employing the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare different options and choose between options, even though they do not improve verbatim recall. 
This effect was observed for both written and spoken stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., flight, hotel, and restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experimental paradigm employing the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare different options and choose between options, even though they do not improve verbatim recall. This effect was observed for both written and spoken stimuli. |
Julia A Wolfson; Dan J Graham; Sara N Bleich Attention to physical activity–equivalent calorie information on nutrition facts labels: An eye-tracking investigation Journal Article Journal of Nutrition Education and Behavior, 49 (1), pp. 35–42.e1, 2017. @article{Wolfson2017, title = {Attention to physical activity–equivalent calorie information on nutrition facts labels: An eye-tracking investigation}, author = {Julia A Wolfson and Dan J Graham and Sara N Bleich}, doi = {10.1016/j.jneb.2016.10.001}, year = {2017}, date = {2017-01-01}, journal = {Journal of Nutrition Education and Behavior}, volume = {49}, number = {1}, pages = {35--42.e1}, publisher = {Elsevier Inc.}, abstract = {Objective Investigate attention to Nutrition Facts Labels (NFLs) with numeric only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting Participants came to the Behavioral Medicine Lab at Colorado State University in spring, 2015. Participants The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s) Attention to and attitudes about activity-equivalent calorie information. Analysis Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. 
Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Conclusions and Implications Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Objective Investigate attention to Nutrition Facts Labels (NFLs) with numeric only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. Design An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Setting Participants came to the Behavioral Medicine Lab at Colorado State University in spring, 2015. Participants The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Main Outcome Measure(s) Attention to and attitudes about activity-equivalent calorie information. Analysis Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Results Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). 
Conclusions and Implications Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. |
Jiaxin Wu; Sheng hua Zhong; Zheng Ma; Stephen J Heinen; Jianmin Jiang Foveated convolutional neural networks for video summarization Journal Article Multimedia Tools and Applications, 77 (22), pp. 29245–29267, 2018. @article{Wu2018bb, title = {Foveated convolutional neural networks for video summarization}, author = {Jiaxin Wu and Sheng hua Zhong and Zheng Ma and Stephen J Heinen and Jianmin Jiang}, doi = {10.1007/s11042-018-5953-1}, year = {2018}, date = {2018-01-01}, journal = {Multimedia Tools and Applications}, volume = {77}, number = {22}, pages = {29245--29267}, publisher = {Multimedia Tools and Applications}, abstract = {With the proliferation of video data, video summarization is an ideal tool for users to browse video content rapidly. In this paper, we propose novel foveated convolutional neural networks for dynamic video summarization. We are the first to integrate gaze information into a deep learning network for video summarization. Foveated images are constructed based on subjects' eye movements to represent the spatial information of the input video. Multi-frame motion vectors are stacked across several adjacent frames to convey the motion clues. To evaluate the proposed method, experiments are conducted on two video summarization benchmark datasets. The experimental results validate the effectiveness of the gaze information for video summarization despite the fact that the eye movements are collected from different subjects from those who generated the summaries. Empirical validations also demonstrate that our proposed foveated convolutional neural networks for video summarization can achieve state-of-the-art performance on these benchmark datasets.}, keywords = {}, pubstate = {published}, tppubtype = {article} } With the proliferation of video data, video summarization is an ideal tool for users to browse video content rapidly. 
In this paper, we propose a novel foveated convolutional neural networks for dynamic video summarization. We are the first to integrate gaze information into a deep learning network for video summarization. Foveated images are constructed based on subjects' eye movements to represent the spatial information of the input video. Multi-frame motion vectors are stacked across several adjacent frames to convey the motion clues. To evaluate the proposed method, experiments are conducted on two video summarization benchmark datasets. The experimental results validate the effectiveness of the gaze information for video summarization despite the fact that the eye movements are collected from different subjects from those who generated Jiaxin Wu and Sheng-hua Zhong contributed equally to this work. Multimed Tools Appl summaries. Empirical validations also demonstrate that our proposed foveated convolutional neural networks for video summarization can achieve state-of-the-art performances on these benchmark datasets. |
Ye Xia; Mauro Manassi; Ken Nakayama; Karl Zipser; David Whitney Visual crowding in driving Journal Article Journal of Vision, 20 (6), pp. 1–17, 2020. @article{Xia2020a, title = {Visual crowding in driving}, author = {Ye Xia and Mauro Manassi and Ken Nakayama and Karl Zipser and David Whitney}, doi = {10.1167/jov.20.6.1}, year = {2020}, date = {2020-01-01}, journal = {Journal of Vision}, volume = {20}, number = {6}, pages = {1--17}, abstract = {Visual crowding, the deleterious influence of nearby objects on object recognition, is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. 
Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual crowding, the deleterious influence of nearby objects on object recognition, is considered to be a major bottleneck for object recognition in cluttered environments. Although crowding has been studied for decades with static and artificial stimuli, it is still unclear how crowding operates when viewing natural dynamic scenes in real-life situations. For example, driving is a frequent and potentially fatal real-life situation where crowding may play a critical role. In order to investigate the role of crowding in this kind of situation, we presented observers with naturalistic driving videos and recorded their eye movements while they performed a simulated driving task. We found that the saccade localization on pedestrians was impacted by visual clutter, in a manner consistent with the diagnostic criteria of crowding (Bouma's rule of thumb, flanker similarity tuning, and the radial-tangential anisotropy). In order to further confirm that altered saccadic localization is a behavioral consequence of crowding, we also showed that crowding occurs in the recognition of cluttered pedestrians in a more conventional crowding paradigm. We asked participants to discriminate the gender of pedestrians in static video frames and found that the altered saccadic localization correlated with the degree of crowding of the saccade targets. Taken together, our results provide strong evidence that crowding impacts both recognition and goal-directed actions in natural driving situations. |
Chenjiang Xie; Tong Zhu; Chunlin Guo; Yimin Zhang Measuring IVIS impact to driver by on-road test and simulator experiment Journal Article Procedia Social and Behavioral Sciences, 96, pp. 1566–1577, 2013. @article{Xie2013, title = {Measuring IVIS impact to driver by on-road test and simulator experiment}, author = {Chenjiang Xie and Tong Zhu and Chunlin Guo and Yimin Zhang}, doi = {10.1016/j.sbspro.2013.08.178}, year = {2013}, date = {2013-01-01}, journal = {Procedia Social and Behavioral Sciences}, volume = {96}, pages = {1566--1577}, publisher = {Elsevier B.V.}, abstract = {This work examined the effects of using in-vehicle information systems (IVIS) on drivers by on-road test and simulator experiment. Twelve participants took part in the test. In the on-road test, drivers performed the driving task with voice-prompt and non-voice-prompt navigation devices mounted in different positions. In the simulator experiment, secondary tasks, including cognitive, visual and manual tasks, were performed in a driving simulator. Subjective ratings were used to assess drivers' mental workload in both the on-road test and the simulator experiment. The impact of task complexity and reaction mode was also investigated in this paper. The results of the test and the simulation showed that position 1 was more comfortable for drivers than the other two positions and caused less mental load. Drivers' subjective ratings supported this result. An IVIS with voice prompts places less visual demand on drivers. Mental load grows as task difficulty increases. A cognitive task requiring a manual reaction causes higher mental load than a cognitive task that does not. These results may have practical implications for in-vehicle information system design.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This work examined the effects of using in-vehicle information systems (IVIS) on drivers by on-road test and simulator experiment. 
Twelve participants took part in the test. In the on-road test, drivers performed the driving task with voice-prompt and non-voice-prompt navigation devices mounted in different positions. In the simulator experiment, secondary tasks, including cognitive, visual and manual tasks, were performed in a driving simulator. Subjective ratings were used to assess drivers' mental workload in both the on-road test and the simulator experiment. The impact of task complexity and reaction mode was also investigated in this paper. The results of the test and the simulation showed that position 1 was more comfortable for drivers than the other two positions and caused less mental load. Drivers' subjective ratings supported this result. An IVIS with voice prompts places less visual demand on drivers. Mental load grows as task difficulty increases. A cognitive task requiring a manual reaction causes higher mental load than a cognitive task that does not. These results may have practical implications for in-vehicle information system design. |
Jia Qiong Xie; Detlef H Rost; Fu Xing Wang; Jin Liang Wang; Rebecca L Monk The association between excessive social media use and distraction: An eye movement tracking study Journal Article Information & Management, 58 (2), pp. 1–12, 2021. @article{Xie2021a, title = {The association between excessive social media use and distraction: An eye movement tracking study}, author = {Jia Qiong Xie and Detlef H Rost and Fu Xing Wang and Jin Liang Wang and Rebecca L Monk}, doi = {10.1016/j.im.2020.103415}, year = {2021}, date = {2021-01-01}, journal = {Information & Management}, volume = {58}, number = {2}, pages = {1--12}, publisher = {Elsevier B.V.}, abstract = {Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article explored the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interference information than non-microblog users, resulting in poorer performance. Theoretical and practical implications are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Drawing on the scan-and-shift hypothesis and the scattered attention hypothesis, this article explored the association between excessive social media use (ESMU) and distraction from an information engagement perspective. In Study 1, the results, based on 743 questionnaires completed by Chinese college students, showed that ESMU is related to distraction in daily life. 
In Study 2, eye-tracking technology was used to investigate the distraction and performance of excessive microblog users when performing the modified Stroop task. The results showed that excessive microblog users had more difficulty suppressing interference information than non-microblog users, resulting in poorer performance. Theoretical and practical implications are discussed. |
Aiping Xiong; Robert W Proctor; Weining Yang; Ninghui Li Is domain highlighting actually helpful in identifying phishing web pages? Journal Article Human Factors, 59 (4), pp. 640–660, 2017. @article{Xiong2017, title = {Is domain highlighting actually helpful in identifying phishing web pages?}, author = {Aiping Xiong and Robert W Proctor and Weining Yang and Ninghui Li}, doi = {10.1177/0018720816684064}, year = {2017}, date = {2017-01-01}, journal = {Human Factors}, volume = {59}, number = {4}, pages = {640--660}, abstract = {OBJECTIVE: To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. BACKGROUND: As a component of a URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. METHOD: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. RESULTS: Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. 
CONCLUSION: Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. APPLICATION: Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.}, keywords = {}, pubstate = {published}, tppubtype = {article} } OBJECTIVE: To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. BACKGROUND: As a component of a URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. METHOD: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. RESULTS: Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. 
CONCLUSION: Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. APPLICATION: Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages. |
Ying Yan; Huazhi Yuan; Xiaofei Wang; Ting Xu; Haoxue Liu Study on driver's fixation variation at entrance and inside sections of tunnel on highway Journal Article Advances in Mechanical Engineering, 7 (1), pp. 1–10, 2015. @article{Yan2015d, title = {Study on driver's fixation variation at entrance and inside sections of tunnel on highway}, author = {Ying Yan and Huazhi Yuan and Xiaofei Wang and Ting Xu and Haoxue Liu}, doi = {10.1155/2014/273427}, year = {2015}, date = {2015-01-01}, journal = {Advances in Mechanical Engineering}, volume = {7}, number = {1}, pages = {1--10}, publisher = {Hindawi Publishing Corporation}, abstract = {How drivers' visual characteristics change as they pass through tunnels was studied. First, test data from nine drivers at the tunnel entrance and inside sections were recorded using eye movement tracking devices. Then the transfer function of a BP artificial neural network was employed to simulate and analyze the variation of the drivers' eye movement parameters. Relation models between eye movement parameters and the distance of the tunnels were established. In the analysis of the fixation point distributions, the analytic coordinates of fixations in the visual field were clustered to obtain different visual areas of fixations by utilizing dynamic cluster theory. The results indicated that, at 100 meters before the entrance, the average fixation duration increased, but the number of fixations decreased substantially. After 100 meters into the tunnel, the fixation duration first decreased and then increased. The variations of drivers' fixation points demonstrated a pattern of change: scatter, focus, and scatter again. While driving through the tunnels, drivers exhibited prolonged fixations. Nearly 61.5% of subjects' average fixation duration increased significantly. 
In the tunnel, these drivers paid attention to seven fixation point areas, from the car dashboard area to the road area in front of the car.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How drivers' visual characteristics change as they pass through tunnels was studied. First, test data from nine drivers at the tunnel entrance and inside sections were recorded using eye movement tracking devices. Then the transfer function of a BP artificial neural network was employed to simulate and analyze the variation of the drivers' eye movement parameters. Relation models between eye movement parameters and the distance of the tunnels were established. In the analysis of the fixation point distributions, the analytic coordinates of fixations in the visual field were clustered to obtain different visual areas of fixations by utilizing dynamic cluster theory. The results indicated that, at 100 meters before the entrance, the average fixation duration increased, but the number of fixations decreased substantially. After 100 meters into the tunnel, the fixation duration first decreased and then increased. The variations of drivers' fixation points demonstrated a pattern of change: scatter, focus, and scatter again. While driving through the tunnels, drivers exhibited prolonged fixations. Nearly 61.5% of subjects' average fixation duration increased significantly. In the tunnel, these drivers paid attention to seven fixation point areas, from the car dashboard area to the road area in front of the car. |
Ying Yan; Xiaofei Wang; Ludan Shi; Haoxue Liu Influence of light zones on drivers' visual fixation characteristics and traffic safety in extra-long tunnels Journal Article Traffic Injury Prevention, 18 (1), pp. 102–110, 2017. @article{Yan2017, title = {Influence of light zones on drivers' visual fixation characteristics and traffic safety in extra-long tunnels}, author = {Ying Yan and Xiaofei Wang and Ludan Shi and Haoxue Liu}, doi = {10.1080/15389588.2016.1193170}, year = {2017}, date = {2017-01-01}, journal = {Traffic Injury Prevention}, volume = {18}, number = {1}, pages = {102--110}, abstract = {OBJECTIVE: The special light zone is a new illumination technique that promises to improve the visual environment and traffic safety in extra-long tunnels. The purpose of this study is to identify how light zones affect the dynamic visual characteristics and information perception of drivers as they pass through extra-long tunnels on highways. METHODS: Thirty-two subjects were recruited for this study, and fixation data were recorded using eye movement tracking devices. A back-propagation artificial neural network was employed to predict and analyze the influence of special light zones on the variations in the fixation duration and pupil area of drivers. The analytic coordinates of focus points at different light zones were clustered to obtain different visual fixation regions using dynamic cluster theory. RESULTS: The findings of this study indicated that the special light zones had different influences on fixation duration and pupil area compared to other sections. Drivers gradually changed their fixation points from a scattered pattern to a narrow and zonal distribution that mainly focused on the main visual area at the center, the road just ahead, and the right side of the main visual area while approaching the special light zones. 
The results also showed that the variation in illumination and landscape in the light zones was more important than driving experience in producing changes in visual cognition and driving behavior. CONCLUSIONS: It can be concluded that the special light zones can help relieve drivers' visual fatigue to some extent and provide visual stimuli that enhance drivers' attention. The study provides a scientific basis for implementing safety measures in extra-long tunnels.}, keywords = {}, pubstate = {published}, tppubtype = {article} } OBJECTIVE: The special light zone is a new illumination technique that promises to improve the visual environment and traffic safety in extra-long tunnels. The purpose of this study is to identify how light zones affect the dynamic visual characteristics and information perception of drivers as they pass through extra-long tunnels on highways. METHODS: Thirty-two subjects were recruited for this study, and fixation data were recorded using eye movement tracking devices. A back-propagation artificial neural network was employed to predict and analyze the influence of special light zones on the variations in the fixation duration and pupil area of drivers. The analytic coordinates of focus points at different light zones were clustered to obtain different visual fixation regions using dynamic cluster theory. RESULTS: The findings of this study indicated that the special light zones had different influences on fixation duration and pupil area compared to other sections. Drivers gradually changed their fixation points from a scattered pattern to a narrow and zonal distribution that mainly focused on the main visual area at the center, the road just ahead, and the right side of the main visual area while approaching the special light zones. 
The results also showed that the variation in illumination and landscape in the light zones was more important than driving experience in producing changes in visual cognition and driving behavior. CONCLUSIONS: It can be concluded that the special light zones can help relieve drivers' visual fatigue to some extent and provide visual stimuli that enhance drivers' attention. The study provides a scientific basis for implementing safety measures in extra-long tunnels. |
Shun Nan Yang; Yu-Chi Tai; James E Sheedy; Beth Kinoshita; Matthew Lampa; Jami R Kern Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance Journal Article Ophthalmic and Physiological Optics, 32 (5), pp. 412–420, 2012. @article{Yang2012a, title = {Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance}, author = {Shun Nan Yang and Yu-Chi Tai and James E Sheedy and Beth Kinoshita and Matthew Lampa and Jami R Kern}, doi = {10.1111/j.1475-1313.2012.00922.x}, year = {2012}, date = {2012-01-01}, journal = {Ophthalmic and Physiological Optics}, volume = {32}, number = {5}, pages = {412--420}, abstract = {PURPOSE: To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and vision performance. METHODS: Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. Blink rate in pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to the LCS. 
Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate effects of LCS on blink rate, symptom score, and discrimination accuracy. RESULTS: Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and a lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. CONCLUSIONS: LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Those with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared to those without. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses. These effects presumably arise from improved tear film quality.}, keywords = {}, pubstate = {published}, tppubtype = {article} } PURPOSE: To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and vision performance. METHODS: Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. 
Blink rate in pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to the LCS. Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate effects of LCS on blink rate, symptom score, and discrimination accuracy. RESULTS: Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and a lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. CONCLUSIONS: LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Those with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared to those without. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses. These effects presumably arise from improved tear film quality. |
Jingwen Yang; Frederic Hamelin; Dominique Sauter Fault detection observer design using time and frequency domain specifications Journal Article IFAC Proceedings Volumes, 19 (1), pp. 8564–8569, 2014. @article{Yang2014, title = {Fault detection observer design using time and frequency domain specifications}, author = {Jingwen Yang and Frederic Hamelin and Dominique Sauter}, doi = {10.1002/dir}, year = {2014}, date = {2014-01-01}, journal = {IFAC Proceedings Volumes}, volume = {19}, number = {1}, pages = {8564--8569}, abstract = {Several scholars have proposed personalization models based on product variety breadth and the intensity of customer-firm interaction with a focus on marketing strategies ranging from basic product versioning to customerization and reverse marketing. However, some studies have shown that the explosion of product variety may generate information overload. Moreover, customers are highly heterogeneous in willingness and ability to interact with firms in personalization processes. This often results in consumer confusion and wasteful investments. To address this problem, we propose a conceptual framework of e-customer profiling for interactive personalization by distinguishing content (that is, expected customer benefits) and process (that is, expected degree of interaction) issues. The framework focuses on four general dimensions suggested by previous research as significant drivers of online customer heterogeneity: VALUE, KNOWLEDGE, ORIENTATION, and RELATIONSHIP QUALITY. We also present a preliminary test of the framework and derive directions for customer relationship management and future research.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Several scholars have proposed personalization models based on product variety breadth and the intensity of customer-firm interaction with a focus on marketing strategies ranging from basic product versioning to customerization and reverse marketing. 
However, some studies have shown that the explosion of product variety may generate information overload. Moreover, customers are highly heterogeneous in willingness and ability to interact with firms in personalization processes. This often results in consumer confusion and wasteful investments. To address this problem, we propose a conceptual framework of e-customer profiling for interactive personalization by distinguishing content (that is, expected customer benefits) and process (that is, expected degree of interaction) issues. The framework focuses on four general dimensions suggested by previous research as significant drivers of online customer heterogeneity: VALUE, KNOWLEDGE, ORIENTATION, and RELATIONSHIP QUALITY. We also present a preliminary test of the framework and derive directions for customer relationship management and future research. |
Shu Fei Yang An eye-tracking study of the Elaboration Likelihood Model in online shopping Journal Article Electronic Commerce Research and Applications, 14 (4), pp. 233–240, 2015. @article{Yang2015a, title = {An eye-tracking study of the Elaboration Likelihood Model in online shopping}, author = {Shu Fei Yang}, doi = {10.1016/j.elerap.2014.11.007}, year = {2015}, date = {2015-01-01}, journal = {Electronic Commerce Research and Applications}, volume = {14}, number = {4}, pages = {233--240}, publisher = {Elsevier B.V.}, abstract = {This study uses eye-tracking to explore the Elaboration Likelihood Model (ELM) in online shopping. The results show that the peripheral cue did not have a moderating effect on purchase intention, but did have a moderating effect on eye movement. Regarding purchase intention, the high-elaboration group had higher purchase intention than the low-elaboration group with a positive peripheral cue, but there was no difference in purchase intention between the high- and low-elaboration groups with a negative peripheral cue. Regarding eye movement, with a positive peripheral cue, the high-elaboration group was observed to have longer fixation duration than the low-elaboration group in two areas of interest (AOIs); however, with a negative peripheral cue, the low-elaboration group had longer fixation on the whole page and the two AOIs. In addition, the relationship between purchase intention and eye movement on the AOIs is more significant in the high-elaboration group when given a negative peripheral cue and in the low-elaboration group when given a positive peripheral cue. This study not only examines the postulates of the ELM, but also contributes to a better understanding of the cognitive processes of the ELM. 
These findings have practical implications for e-sellers to identify characteristics of consumers' elaboration from their eye movements and to design customization and persuasive context for different elaboration groups in e-commerce.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study uses eye-tracking to explore the Elaboration Likelihood Model (ELM) in online shopping. The results show that the peripheral cue did not have a moderating effect on purchase intention, but did have a moderating effect on eye movement. Regarding purchase intention, the high-elaboration group had higher purchase intention than the low-elaboration group with a positive peripheral cue, but there was no difference in purchase intention between the high- and low-elaboration groups with a negative peripheral cue. Regarding eye movement, with a positive peripheral cue, the high-elaboration group was observed to have longer fixation duration than the low-elaboration group in two areas of interest (AOIs); however, with a negative peripheral cue, the low-elaboration group had longer fixation on the whole page and the two AOIs. In addition, the relationship between purchase intention and eye movement on the AOIs is more significant in the high-elaboration group when given a negative peripheral cue and in the low-elaboration group when given a positive peripheral cue. This study not only examines the postulates of the ELM, but also contributes to a better understanding of the cognitive processes of the ELM. These findings have practical implications for e-sellers to identify characteristics of consumers' elaboration from their eye movements and to design customization and persuasive context for different elaboration groups in e-commerce. |
Jiang Yushi Research on the best visual search effect of logo elements in internet advertising layout Journal Article Journal of Contemporary Marketing Science, 2 (1), pp. 23–33, 2019. @article{Yushi2019, title = {Research on the best visual search effect of logo elements in internet advertising layout}, author = {Jiang Yushi}, doi = {10.1108/JCMARS-01-2019-0009}, year = {2019}, date = {2019-01-01}, journal = {Journal of Contemporary Marketing Science}, volume = {2}, number = {1}, pages = {23--33}, publisher = {Emerald Publishing Limited}, abstract = {Purpose: The purpose of this paper is to investigate the best visual search law of logo elements in online advertising layout through a single-factor experimental design, using the eight matching methods of logo and commodity picture elements as independent variables while controlling the size of the online advertisement, the background color and the content complexity. The results show that when the picture element is fixed in the center of the advertisement, the logo element should be placed in a middle position parallel to the picture element (left middle and upper left); placing the logo element at the bottom of the picture element, especially at the bottom left, should be avoided. The designer can determine the best online advertising format based on the visual search effect of the logo element and the actual marketing purpose. Design/methodology/approach: In this experiment, a repeated-measures design was used in a single-factor test. According to the criteria of different types of commodities and eight matching methods, 20 advertisements were randomly selected from 50 original advertisements as experimental stimulation materials, as shown in Section 2.3. The eight matching methods were processed to obtain a total of 20×8=160 experimental stimuli. 
At the same time, in order to minimize the memory effect of the repeated appearance of the same product, all pictures were presented in random order. In addition, in order to avoid participants pre-judging the purpose of the experiment, 80 additional filler online advertisements were added. Therefore, each testee was required to watch 160+80=240 pieces of stimulation material. Findings: On the one hand, when the image elements are fixed in an advertisement, the advertiser should first try to place the logo element in the right middle position parallel to the picture element, because the commodity logo in this matching mode attracts the longest and most sustained consumer attention. Danaher and Mullarkey (2003) clearly pointed out that as the time consumers spend fixating on an online advertisement increases, their memory of the advertisement also improves accordingly. Second, placing the logo element to the left or upper left of the picture element can be considered. In contrast, advertisers should try to avoid placing the logo element at the bottom of the picture element (lower left and lower right), especially at the lower left, because in that area the logo attracts less attention, resulting in the shortest duration of consumer attention, less than a quarter of consumers' total attention. This conclusion is consistent with related research results.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Shlomit Yuval-Greenberg; Anat Keren; Rinat Hilo; Adar Paz; Navah Ratzon Gaze control during simulator driving in adolescents with and without attention deficit hyperactivity disorder Journal Article American Journal of Occupational Therapy, 73 (3), pp. 1–8, 2019. @article{YuvalGreenberg2019, title = {Gaze control during simulator driving in adolescents with and without attention deficit hyperactivity disorder}, author = {Shlomit Yuval-Greenberg and Anat Keren and Rinat Hilo and Adar Paz and Navah Ratzon}, doi = {10.5014/ajot.2019.031500}, year = {2019}, date = {2019-04-01}, journal = {American Journal of Occupational Therapy}, volume = {73}, number = {3}, pages = {1--8}, publisher = {American Occupational Therapy Association}, abstract = {Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving. Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD. Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires. Setting: University lab. Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had no history of neurological disorders other than ADHD and had normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. Participants with ADHD had been previously diagnosed by a qualified professional. Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration. 
Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness. Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving. What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Thomas Zawisza; Ray Garza Using an eye tracking device to assess vulnerabilities to burglary Journal Article Journal of Police and Criminal Psychology, 32 (3), pp. 203–213, 2017. @article{Zawisza2017, title = {Using an eye tracking device to assess vulnerabilities to burglary}, author = {Thomas Zawisza and Ray Garza}, doi = {10.1007/s11896-016-9213-x}, year = {2017}, date = {2017-01-01}, journal = {Journal of Police and Criminal Psychology}, volume = {32}, number = {3}, pages = {203--213}, publisher = {Journal of Police and Criminal Psychology}, abstract = {This research examines the extent to which visual cues influence a person's decision to burglarize. Participants in this study (n = 65) viewed ten houses through an eye tracking device and were asked whether or not they thought each house was vulnerable to burglary. The eye tracking device recorded where a person looked and for how long they looked (in milliseconds). Our findings showed that windows and doors were two of the most important visual stimuli. Results from our follow-up questionnaire revealed that stimuli such as fencing, beware of pet signs, cars in driveways, and alarm systems are also considered. There are a number of implications for future research and policy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B Jonas Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis Journal Article British Journal of Ophthalmology, 97 (7), pp. 941–942, 2013. @article{Zhang2013a, title = {Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis}, author = {Li Zhang and Jie Ren and Liang Xu and Xue Jun Qiu and Jost B Jonas}, doi = {10.1136/bjophthalmol-2012-303001}, year = {2013}, date = {2013-01-01}, journal = {British Journal of Ophthalmology}, volume = {97}, number = {7}, pages = {941--942}, abstract = {With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) viewing versus two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Li Zhang; Ya Qin Zhang; Jing Shang Zhang; Liang Xu; Jost B Jonas Visual fatigue and discomfort after stereoscopic display viewing Journal Article Acta Ophthalmologica, 91 (2), pp. 149–153, 2013. @article{Zhang2013b, title = {Visual fatigue and discomfort after stereoscopic display viewing}, author = {Li Zhang and Ya Qin Zhang and Jing Shang Zhang and Liang Xu and Jost B Jonas}, doi = {10.1111/aos.12006}, year = {2013}, date = {2013-01-01}, journal = {Acta Ophthalmologica}, volume = {91}, number = {2}, pages = {149--153}, abstract = {Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that used either the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology using shutter glasses. Methods: During this observational cross-over study performed on two consecutive days, a video was watched by 30 subjects (age: 20-30 years). Half of the participants watched the screen with a pattern retard three-dimensional display on the first day and a shutter glasses three-dimensional display on the second day, and vice versa. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after, viewing the movie. 
Additionally, a subjective score for visual fatigue was given. Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recover point (p = 0.001), negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02) after viewing the shutter glasses three-dimensional display than after viewing the pattern retard three-dimensional display. Conclusions: Stereoscopic viewing using pattern retard (polarized) three-dimensional displays as compared with stereoscopic viewing using shutter glasses three-dimensional displays resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Luming Zhang; Meng Wang; Liqiang Nie; Liang Hong; Yong Rui; Qi Tian Retargeting semantically-rich photos Journal Article IEEE Transactions on Multimedia, 17 (9), pp. 1538–1549, 2015. @article{Zhang2015a, title = {Retargeting semantically-rich photos}, author = {Luming Zhang and Meng Wang and Liqiang Nie and Liang Hong and Yong Rui and Qi Tian}, year = {2015}, date = {2015-01-01}, journal = {IEEE Transactions on Multimedia}, volume = {17}, number = {9}, pages = {1538--1549}, abstract = {Semantically-rich photos contain a rich variety of semantic objects (e.g., pedestrians and bicycles). Retargeting these photos is a challenging task since each semantic object has fixed geometric characteristics. Shrinking these objects simultaneously during retargeting is prone to distortion. In this paper, we propose to retarget semantically-rich photos by detecting photo semantics from image tags, which are predicted by a multi-label SVM. The key technique is a generative model termed latent stability discovery (LSD). It can robustly localize various semantic objects in a photo by making use of the predicted noisy image tags. Based on LSD, a feature fusion algorithm is proposed to detect salient regions at both the low-level and high-level. These salient regions are linked into a path sequentially to simulate human visual perception. Finally, we learn the prior distribution of such paths from aesthetically pleasing training photos. The prior enforces the path of a retargeted photo to be maximally similar to those from the training photos. In the experiment, we collect 217 1600×1200 photos, each containing over seven salient objects. Comprehensive user studies demonstrate the competitiveness of our method.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Youming Zhang; Jorma Laurikkala; Martti Juhola Biometric verification with eye movements: Results from a long-term recording series Journal Article IET Biometrics, 4 (3), pp. 162–168, 2015. @article{Zhang2015c, title = {Biometric verification with eye movements: Results from a long-term recording series}, author = {Youming Zhang and Jorma Laurikkala and Martti Juhola}, doi = {10.1049/iet-bmt.2014.0044}, year = {2015}, date = {2015-01-01}, journal = {IET Biometrics}, volume = {4}, number = {3}, pages = {162--168}, abstract = {The authors present their results of using saccadic eye movements for biometric user verification. The method can be applied to computers or other devices in which it is possible to include an eye movement camera system. Thus far, this idea has been little researched. As they have extensively studied eye movement signals for medical applications, they saw an opportunity for the biometric use of saccades. Saccades are the fastest of all eye movements, and are easy to stimulate and detect from signals. As signals measured from a physiological origin, the properties of eye movements (e.g. latency and maximum angular velocity) may contain considerable variability between different times of day, between days or weeks and so on. Since such variability might impair biometric verification based on saccades, they attempted to tackle this issue. In contrast to their earlier results, where they did not include such long intervals between sessions of eye movement recordings as in the present research, their results showed that – notwithstanding some variability present in saccadic variables – this variability was not considerable enough to essentially disturb or impair verification results. The only exception was a test series of very long intervals ∼16 or 32 months in length. 
For the best results obtained with various classification methods, false rejection and false acceptance rates were <5%. Thus, they conclude that saccadic eye movements can provide a realistic basis for biometric user verification.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui Enhanced memory-driven attentional capture in action video game players Journal Article Computers in Human Behavior, 107, pp. 1–7, 2020. @article{Zhang2020a, title = {Enhanced memory-driven attentional capture in action video game players}, author = {Bao Zhang and Shuhui Liu and Cenlou Hu and Ziwen Luo and Sai Huang and Jie Sui}, doi = {10.1016/j.chb.2020.106271}, year = {2020}, date = {2020-01-01}, journal = {Computers in Human Behavior}, volume = {107}, pages = {1--7}, publisher = {Elsevier Ltd}, abstract = {Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article British Journal of Educational Technology, pp. 1–13, 2020. @article{Zhang2020eb, title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction}, author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu}, doi = {10.1111/bjet.13045}, year = {2020}, date = {2020-01-01}, journal = {British Journal of Educational Technology}, pages = {1--13}, abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. 
Given the importance of interaction in online group settings, the findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes: What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Chenzhu Zhao Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments Journal Article International Journal of Frontiers in Sociology, 2 (7), pp. 1–12, 2020. @article{Zhao2020a, title = {Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments}, author = {Chenzhu Zhao}, doi = {10.25236/IJFS.2020.020701}, year = {2020}, date = {2020-01-01}, journal = {International Journal of Frontiers in Sociology}, volume = {2}, number = {7}, pages = {1--12}, abstract = {Online travel agencies (OTAs) depend on marketing cues to reduce consumers' uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided to the consumer has a significant impact on purchasing decisions. This study explores the effect of LBT on consumers' visual attention and booking intention, along with the moderating effect of online comment valence (OCV). Since eye movements are closely tied to shifts of visual attention, eye-tracking was used to record consumers' visual attention. The research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design. The main findings were as follows: (1) LBT markedly increases visual attention to the whole advertisement and improves booking intention; (2) OCV moderates the effect of LBT on both visual attention to the whole advertisement and booking intention. Only when OCV is medium or high does LBT markedly improve attention to the whole advertisement and increase consumers' booking intention. The results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect when OCV is low.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Yan Zhou Psychological analysis of online teaching in colleges based on eye-tracking technology Journal Article Revista Argentina de Clinica Psicologica, 29 (2), pp. 523–529, 2020. @article{Zhou2020a, title = {Psychological analysis of online teaching in colleges based on eye-tracking technology}, author = {Yan Zhou}, doi = {10.24205/03276716.2020.272}, year = {2020}, date = {2020-01-01}, journal = {Revista Argentina de Clinica Psicologica}, volume = {29}, number = {2}, pages = {523--529}, abstract = {Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model: pupil diameter, fixation time, re-reading time and retrospective time. A total of 100 college students were selected for an eye movement test in an online teaching environment. The test data were analyzed in SPSS. The results show that the eye movement parameters are strongly affected by the key points in teaching and the content that interests the students; these two factors can arouse and hold the students' attention in the teaching process. The results provide an important reference for the psychological study of online teaching in colleges.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention Journal Article Journal of Educational Multimedia and Hypermedia, 29 (3), pp. 265–284, 2020. @article{Zhu2020, title = {Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention}, author = {Jiawen Zhu and Kara Dawson and Albert D Ritzhaupt and Pavlo Pasha Antonenko}, year = {2020}, date = {2020-01-01}, journal = {Journal of Educational Multimedia and Hypermedia}, volume = {29}, number = {3}, pages = {265--284}, abstract = {This study investigated the effects of the multimedia and modality design principles using a learning intervention about Australia with a sample of college students, employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principle held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |