Esther X W Wu; Syed O Gilani; Jeroen J A van Boxtel; Ido Amihai; Fook K Chua; Shih Cheng Yen Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions Journal Article Journal of Vision, 13 (12), pp. 17–17, 2013. @article{Wu2013a, title = {Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions}, author = {Esther X W Wu and Syed O Gilani and Jeroen J A van Boxtel and Ido Amihai and Fook K Chua and Shih Cheng Yen}, doi = {10.1167/13.12.17}, year = {2013}, date = {2013-01-01}, journal = {Journal of Vision}, volume = {13}, number = {12}, pages = {17--17}, abstract = {Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis. |
Esther X W Wu; Fook K Chua; Shih Cheng Yen Saccade plan overlap and cancellation during free viewing Journal Article Vision Research, 127 , pp. 122–131, 2016. @article{Wu2016a, title = {Saccade plan overlap and cancellation during free viewing}, author = {Esther X W Wu and Fook K Chua and Shih Cheng Yen}, doi = {10.1016/j.visres.2016.07.009}, year = {2016}, date = {2016-01-01}, journal = {Vision Research}, volume = {127}, pages = {122--131}, publisher = {Elsevier Ltd}, abstract = {In the current study, we examined how the saccadic system responds when visual information changes dynamically in our environment. Previous studies, using the double-step task, have shown that (a) saccade plans could overlap, such that saccade preparation to an object started even while the saccade preparation to another object was ongoing, and (b) saccade plans could be cancelled before they were completed. In these studies, saccade targets were restricted to a few, experimenter-defined locations. Here, we examined whether saccade plan overlap and cancellation mechanisms could be observed in free-viewing conditions. For each trial, we constructed sets of two images, each containing five objects. All objects have unique positions. Image 1 was presented for several fixations, before Image 2 was presented during a fixation, presumably while a saccade plan to an object in Image 1 was ongoing. There were two crucial findings: (a) First, the saccade immediately following the transition was sometimes executed towards objects in Image 2, and not an object in Image 1, suggesting that the earlier saccade plan to an Image 1 object had been cancelled. Second, analysis of the temporal data also suggested that preparation of the first post-transition saccade started before an earlier saccade plan to an Image 1 object was executed, implying that saccade plans overlapped.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the current study, we examined how the saccadic system responds when visual information changes dynamically in our environment. Previous studies, using the double-step task, have shown that (a) saccade plans could overlap, such that saccade preparation to an object started even while the saccade preparation to another object was ongoing, and (b) saccade plans could be cancelled before they were completed. In these studies, saccade targets were restricted to a few, experimenter-defined locations. Here, we examined whether saccade plan overlap and cancellation mechanisms could be observed in free-viewing conditions. For each trial, we constructed sets of two images, each containing five objects. All objects have unique positions. Image 1 was presented for several fixations, before Image 2 was presented during a fixation, presumably while a saccade plan to an object in Image 1 was ongoing. There were two crucial findings: (a) First, the saccade immediately following the transition was sometimes executed towards objects in Image 2, and not an object in Image 1, suggesting that the earlier saccade plan to an Image 1 object had been cancelled. Second, analysis of the temporal data also suggested that preparation of the first post-transition saccade started before an earlier saccade plan to an Image 1 object was executed, implying that saccade plans overlapped. |
Chia-Chien Wu; Bo Cao; Veena Dali; Celia Gagliardi; Olivier J Barthelemy; Robert D Salazar; Marc Pomplun; Alice Cronin-Golomb; Arash Yazdanbakhsh Eye movement control during visual pursuit in Parkinson's disease Journal Article PeerJ, 2018 (8), pp. 1–22, 2018. @article{Wu2018c, title = {Eye movement control during visual pursuit in Parkinson's disease}, author = {Chia-Chien Wu and Bo Cao and Veena Dali and Celia Gagliardi and Olivier J Barthelemy and Robert D Salazar and Marc Pomplun and Alice Cronin-Golomb and Arash Yazdanbakhsh}, doi = {10.7717/peerj.5442}, year = {2018}, date = {2018-01-01}, journal = {PeerJ}, volume = {2018}, number = {8}, pages = {1--22}, abstract = {Background: Prior studies of oculomotor function in Parkinson's disease (PD) have either focused on saccades without considering smooth pursuit, or tested smooth pursuit while excluding saccades. The present study investigated the control of saccadic eye movements during pursuit tasks and assessed the quality of binocular coordination as potential sensitive markers of PD. Methods: Observers fixated on a central cross while a target moved toward it. Once the target reached the fixation cross, observers began to pursue the moving target. To further investigate binocular coordination, the moving target was presented on both eyes (binocular condition), or on one eye only (dichoptic condition). Results: The PD group made more saccades than age-matched normal control adults (NC) both during fixation and pursuit. The difference between left and right gaze positions increased over time during the pursuit period for PD but not for NC. The findings were not related to age, as NC and young-adult control group (YC) performed similarly on most of the eye movement measures, and were not correlated with classical measures of PD severity (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) score). Discussion: Our results suggest that PD may be associated with impairment not only in saccade inhibition, but also in binocular coordination during pursuit, and these aspects of dysfunction may be useful in PD diagnosis or tracking of disease course.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Prior studies of oculomotor function in Parkinson's disease (PD) have either focused on saccades without considering smooth pursuit, or tested smooth pursuit while excluding saccades. The present study investigated the control of saccadic eye movements during pursuit tasks and assessed the quality of binocular coordination as potential sensitive markers of PD. Methods: Observers fixated on a central cross while a target moved toward it. Once the target reached the fixation cross, observers began to pursue the moving target. To further investigate binocular coordination, the moving target was presented on both eyes (binocular condition), or on one eye only (dichoptic condition). Results: The PD group made more saccades than age-matched normal control adults (NC) both during fixation and pursuit. The difference between left and right gaze positions increased over time during the pursuit period for PD but not for NC. The findings were not related to age, as NC and young-adult control group (YC) performed similarly on most of the eye movement measures, and were not correlated with classical measures of PD severity (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) score). 
Discussion: Our results suggest that PD may be associated with impairment not only in saccade inhibition, but also in binocular coordination during pursuit, and these aspects of dysfunction may be useful in PD diagnosis or tracking of disease course. |
Xin Yu Xie; Lei Liu; Cong Yu A new perceptual training strategy to improve vision impaired by central vision loss Journal Article Vision Research, 174 , pp. 69–76, 2020. @article{Xie2020b, title = {A new perceptual training strategy to improve vision impaired by central vision loss}, author = {Xin Yu Xie and Lei Liu and Cong Yu}, doi = {10.1016/j.visres.2020.05.010}, year = {2020}, date = {2020-01-01}, journal = {Vision Research}, volume = {174}, pages = {69--76}, abstract = {Patients with central vision loss depend on peripheral vision for everyday functions. A preferred retinal locus (PRL) on the intact retina is commonly trained as a new “fovea” to help. However, reprogramming the fovea-centered oculomotor control is difficult, so saccades often bring the defunct fovea to block the target. Aligning PRL with distant targets also requires multiple saccades and sometimes head movements. To overcome these problems, we attempted to train normal-sighted observers to form a preferred retinal annulus (PRA) around a simulated scotoma, so that they could rely on the same fovea-centered oculomotor system and make short saccades to align PRA with the target. Observers with an invisible simulated central scotoma (5° radius) practiced making saccades to see a tumbling-E target at 10° eccentricity. The otherwise blurred E target became clear when saccades brought a scotoma-abutting clear window (2° radius) to it. The location of the clear window was either fixed for PRL training, or changing among 12 locations for PRA training. Various cues aided the saccades through training. Practice quickly established a PRL or PRA. Compared to PRL-trained observers, whose first saccades persistently blocked the target with the scotoma, PRA-trained observers produced more accurate first saccades. The benefits of more accurate PRA-based saccades also outweighed the costs of slower latency. PRA training may provide an efficient strategy to cope with central vision loss, especially for aging patients who have major difficulties adapting to a PRL.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Patients with central vision loss depend on peripheral vision for everyday functions. A preferred retinal locus (PRL) on the intact retina is commonly trained as a new “fovea” to help. However, reprogramming the fovea-centered oculomotor control is difficult, so saccades often bring the defunct fovea to block the target. Aligning PRL with distant targets also requires multiple saccades and sometimes head movements. To overcome these problems, we attempted to train normal-sighted observers to form a preferred retinal annulus (PRA) around a simulated scotoma, so that they could rely on the same fovea-centered oculomotor system and make short saccades to align PRA with the target. Observers with an invisible simulated central scotoma (5° radius) practiced making saccades to see a tumbling-E target at 10° eccentricity. The otherwise blurred E target became clear when saccades brought a scotoma-abutting clear window (2° radius) to it. The location of the clear window was either fixed for PRL training, or changing among 12 locations for PRA training. Various cues aided the saccades through training. Practice quickly established a PRL or PRA. Compared to PRL-trained observers, whose first saccades persistently blocked the target with the scotoma, PRA-trained observers produced more accurate first saccades. The benefits of more accurate PRA-based saccades also outweighed the costs of slower latency. 
PRA training may provide an efficient strategy to cope with central vision loss, especially for aging patients who have major difficulties adapting to a PRL. |
Cheng Xue; Antonino Calapai; Julius Krumbiegel; Stefan Treue Sustained spatial attention accounts for the direction bias of human microsaccades Journal Article Scientific Reports, 10 , pp. 1–10, 2020. @article{Xue2020, title = {Sustained spatial attention accounts for the direction bias of human microsaccades}, author = {Cheng Xue and Antonino Calapai and Julius Krumbiegel and Stefan Treue}, doi = {10.1038/s41598-020-77455-7}, year = {2020}, date = {2020-01-01}, journal = {Scientific Reports}, volume = {10}, pages = {1--10}, publisher = {Nature Publishing Group UK}, abstract = {Small ballistic eye movements, so-called microsaccades, occur even while foveating an object. Previous studies using covert attention tasks have shown that shortly after a symbolic spatial cue, specifying a behaviorally relevant location, microsaccades tend to be directed toward the cued location. This suggests that microsaccades can serve as an index for the covert orientation of spatial attention. However, this hypothesis faces two major challenges: First, effects associated with visual spatial attention are hard to distinguish from those associated with the contemplation of foveating a peripheral stimulus. Second, it is less clear whether endogenously sustained attention alone can bias microsaccade directions without a spatial cue on each trial. To address the first issue, we investigated the direction of microsaccades in human subjects while they attended to a behaviorally relevant location and prepared a response eye movement either toward or away from this location. We find that directions of microsaccades are biased toward the attended location rather than towards the saccade target. To tackle the second issue, we verbally indicated the location to attend before the start of each block of trials, to exclude potential visual cue-specific effects on microsaccades. Our results indicate that sustained spatial attention alone reliably produces the microsaccade direction effect. Overall, our findings demonstrate that sustained spatial attention alone, even in the absence of saccade planning or a spatial cue, is sufficient to explain the direction bias observed in microsaccades.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Small ballistic eye movements, so-called microsaccades, occur even while foveating an object. Previous studies using covert attention tasks have shown that shortly after a symbolic spatial cue, specifying a behaviorally relevant location, microsaccades tend to be directed toward the cued location. This suggests that microsaccades can serve as an index for the covert orientation of spatial attention. However, this hypothesis faces two major challenges: First, effects associated with visual spatial attention are hard to distinguish from those associated with the contemplation of foveating a peripheral stimulus. Second, it is less clear whether endogenously sustained attention alone can bias microsaccade directions without a spatial cue on each trial. To address the first issue, we investigated the direction of microsaccades in human subjects while they attended to a behaviorally relevant location and prepared a response eye movement either toward or away from this location. We find that directions of microsaccades are biased toward the attended location rather than towards the saccade target. 
To tackle the second issue, we verbally indicated the location to attend before the start of each block of trials, to exclude potential visual cue-specific effects on microsaccades. Our results indicate that sustained spatial attention alone reliably produces the microsaccade direction effect. Overall, our findings demonstrate that sustained spatial attention alone, even in the absence of saccade planning or a spatial cue, is sufficient to explain the direction bias observed in microsaccades. |
Yoshiko Yabe; Melvyn A Goodale; Hiroaki Shigemasu Temporal order judgments are disrupted more by reflexive than by voluntary saccades Journal Article Journal of Neurophysiology, 111 (10), pp. 2103–2108, 2014. @article{Yabe2014, title = {Temporal order judgments are disrupted more by reflexive than by voluntary saccades}, author = {Yoshiko Yabe and Melvyn A Goodale and Hiroaki Shigemasu}, doi = {10.1152/jn.00767.2013}, year = {2014}, date = {2014-01-01}, journal = {Journal of Neurophysiology}, volume = {111}, number = {10}, pages = {2103--2108}, abstract = {We do not always perceive the sequence of events as they actually unfold. For example, when two events occur before a rapid eye movement (saccade), the interval between them is often perceived as shorter than it really is and the order of those events can be sometimes reversed (Morrone MC, Ross J, Burr DC. Nat Neurosci 8: 950-954, 2005). In the present article we show that these misperceptions of the temporal order of events critically depend on whether the saccade is reflexive or voluntary. In the first experiment, participants judged the temporal order of two visual stimuli that were presented one after the other just before a reflexive or voluntary saccadic eye movement. In the reflexive saccade condition, participants moved their eyes to a target that suddenly appeared. In the voluntary saccade condition, participants moved their eyes to a target that was present already. Similarly to the above-cited study, we found that the temporal order of events was often misjudged just before a reflexive saccade to a suddenly appearing target. However, when people made a voluntary saccade to a target that was already present, there was a significant reduction in the probability of misjudging the temporal order of the same events. In the second experiment, the reduction was seen in a memory-delay task. It is likely that the nature of the motor command and its origin determine how time is perceived during the moments preceding the motor act.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We do not always perceive the sequence of events as they actually unfold. For example, when two events occur before a rapid eye movement (saccade), the interval between them is often perceived as shorter than it really is and the order of those events can be sometimes reversed (Morrone MC, Ross J, Burr DC. Nat Neurosci 8: 950-954, 2005). In the present article we show that these misperceptions of the temporal order of events critically depend on whether the saccade is reflexive or voluntary. In the first experiment, participants judged the temporal order of two visual stimuli that were presented one after the other just before a reflexive or voluntary saccadic eye movement. In the reflexive saccade condition, participants moved their eyes to a target that suddenly appeared. In the voluntary saccade condition, participants moved their eyes to a target that was present already. Similarly to the above-cited study, we found that the temporal order of events was often misjudged just before a reflexive saccade to a suddenly appearing target. However, when people made a voluntary saccade to a target that was already present, there was a significant reduction in the probability of misjudging the temporal order of the same events. In the second experiment, the reduction was seen in a memory-delay task. It is likely that the nature of the motor command and its origin determine how time is perceived during the moments preceding the motor act. |
Maya Yablonski; Uri Polat; Yoram S Bonneh; Michal Ben-Shachar Microsaccades are sensitive to word structure: A novel approach to study language processing Journal Article Scientific Reports, 7 , pp. 3999, 2017. @article{Yablonski2017, title = {Microsaccades are sensitive to word structure: A novel approach to study language processing}, author = {Maya Yablonski and Uri Polat and Yoram S Bonneh and Michal Ben-Shachar}, doi = {10.1038/s41598-017-04391-4}, year = {2017}, date = {2017-01-01}, journal = {Scientific Reports}, volume = {7}, pages = {3999}, publisher = {Springer US}, abstract = {Microsaccades are miniature eye movements that occur involuntarily during fixation. They are typically inhibited following stimulus onset and are released from inhibition about 300 ms post-stimulus. Microsaccade-inhibition is modulated by low level features of visual stimuli, but it is currently unknown whether they are sensitive to higher level, abstract linguistic properties. To address this question, we measured the timing of microsaccades while subjects were presented with written Hebrew words and pronounceable nonwords (pseudowords). We manipulated the underlying structure of pseudowords such that half of them contained real roots while the other half contained invented roots. Importantly, orthographic similarity to real words was equated between the two conditions. Microsaccade onset was significantly slower following real-root compared to invented-root stimuli. Similar results were obtained when considering post-stimulus delay of eye blinks. Moreover, microsaccade-delay was positively and significantly correlated with measures of real-word similarity. These findings demonstrate, for the first time, sensitivity of microsaccades to linguistic structure. Because microsaccades are involuntary and can be measured in the absence of overt response, our results provide initial evidence that they can be used as a novel physiological measure in the study of language processes in healthy and clinical populations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Microsaccades are miniature eye movements that occur involuntarily during fixation. They are typically inhibited following stimulus onset and are released from inhibition about 300 ms post-stimulus. Microsaccade-inhibition is modulated by low level features of visual stimuli, but it is currently unknown whether they are sensitive to higher level, abstract linguistic properties. To address this question, we measured the timing of microsaccades while subjects were presented with written Hebrew words and pronounceable nonwords (pseudowords). We manipulated the underlying structure of pseudowords such that half of them contained real roots while the other half contained invented roots. Importantly, orthographic similarity to real words was equated between the two conditions. Microsaccade onset was significantly slower following real-root compared to invented-root stimuli. Similar results were obtained when considering post-stimulus delay of eye blinks. Moreover, microsaccade-delay was positively and significantly correlated with measures of real-word similarity. These findings demonstrate, for the first time, sensitivity of microsaccades to linguistic structure. Because microsaccades are involuntary and can be measured in the absence of overt response, our results provide initial evidence that they can be used as a novel physiological measure in the study of language processes in healthy and clinical populations. |
Shimpei Yamagishi; Shigeto Furukawa Factors influencing saccadic reaction time: Effect of task modality, stimulus saliency, spatial congruency of stimuli, and pupil size Journal Article Frontiers in Human Neuroscience, 14 , pp. 1–11, 2020. @article{Yamagishi2020, title = {Factors influencing saccadic reaction time: Effect of task modality, stimulus saliency, spatial congruency of stimuli, and pupil size}, author = {Shimpei Yamagishi and Shigeto Furukawa}, doi = {10.3389/fnhum.2020.571893}, year = {2020}, date = {2020-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {14}, pages = {1--11}, abstract = {It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivities of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, would also affect the saccadic reaction time (SRT). However, it was not clear how these factors interact and to what extent visual and auditory-targeting saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and for auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size. Pupil size is considered to reflect arousal states coupling with locus-coeruleus (LC) activity during a cognitive task. The main findings were that (1) the pattern of the examined effects on the SRT varied between visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition, but not significant for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual-targeting saccade vs. auditory-targeting saccade), and (4) there was a significant correlation between the pre-saccade absolute pupil size and the SRTs for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities for the effect of pupil velocity and between the absolute pupil size and pupil velocity for the correlation with SRT may imply that the pupil effect for the visual-target condition was caused by a modality-specific link between pupil size modulation and the SC rather than by the LC-NE (locus coeruleus-norepinephrine) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It is often assumed that the reaction time of a saccade toward visual and/or auditory stimuli reflects the sensitivities of our oculomotor-orienting system to stimulus saliency. Endogenous factors, as well as stimulus-related factors, would also affect the saccadic reaction time (SRT). However, it was not clear how these factors interact and to what extent visual and auditory-targeting saccades are accounted for by common mechanisms. The present study examined the effect of, and the interaction between, stimulus saliency and audiovisual spatial congruency on the SRT for visual- and for auditory-target conditions. We also analyzed pre-target pupil size to examine the relationship between saccade preparation and pupil size. Pupil size is considered to reflect arousal states coupling with locus-coeruleus (LC) activity during a cognitive task. 
The main findings were that (1) the pattern of the examined effects on the SRT varied between visual- and auditory-target conditions, (2) the effect of stimulus saliency was significant for the visual-target condition, but not significant for the auditory-target condition, (3) pupil velocity, not absolute pupil size, was sensitive to task set (i.e., visual-targeting saccade vs. auditory-targeting saccade), and (4) there was a significant correlation between the pre-saccade absolute pupil size and the SRTs for the visual-target condition but not for the auditory-target condition. The discrepancy between target modalities for the effect of pupil velocity and between the absolute pupil size and pupil velocity for the correlation with SRT may imply that the pupil effect for the visual-target condition was caused by a modality-specific link between pupil size modulation and the SC rather than by the LC-NE (locus coeruleus-norepinephrine) system. These results support the idea that different threshold mechanisms in the SC may be involved in the initiation of saccades toward visual and auditory targets. |
Ming Yan; Jinger Pan; Nathalie N Bélanger; Hua Shu Chinese deaf readers have early access to parafoveal semantics Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 (1), pp. 254–261, 2015. @article{Yan2015c, title = {Chinese deaf readers have early access to parafoveal semantics}, author = {Ming Yan and Jinger Pan and Nathalie N Bélanger and Hua Shu}, doi = {10.1037/xlm0000035}, year = {2015}, date = {2015-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {41}, number = {1}, pages = {254--261}, abstract = {In the present study, we manipulated different types of information available in the parafovea during the reading of Chinese sentences and examined how deaf readers make use of the parafoveal information. Results clearly indicate that although the reading-level matched hearing readers make greater use of orthographic information in the parafovea, parafoveal semantic information is obtained earlier among the deaf readers. In addition, a phonological preview benefit effect was found for the better deaf readers (relative to less-skilled deaf readers), although we also provide an alternative explanation for this effect. Providing evidence that Chinese deaf readers have higher efficiency when processing parafoveal semantics, the study indicates flexibility across individuals in the mechanisms underlying word recognition adapting to the inputs available in the linguistic environment.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the present study, we manipulated different types of information available in the parafovea during the reading of Chinese sentences and examined how deaf readers make use of the parafoveal information. Results clearly indicate that although the reading-level matched hearing readers make greater use of orthographic information in the parafovea, parafoveal semantic information is obtained earlier among the deaf readers. In addition, a phonological preview benefit effect was found for the better deaf readers (relative to less-skilled deaf readers), although we also provide an alternative explanation for this effect. Providing evidence that Chinese deaf readers have higher efficiency when processing parafoveal semantics, the study indicates flexibility across individuals in the mechanisms underlying word recognition adapting to the inputs available in the linguistic environment. |
Chuyao Yan; Tao He; Raymond M Klein; Zhiguo Wang Predictive remapping gives rise to environmental inhibition of return Journal Article Psychonomic Bulletin & Review, 23 (6), pp. 1860–1866, 2016. @article{Yan2016a, title = {Predictive remapping gives rise to environmental inhibition of return}, author = {Chuyao Yan and Tao He and Raymond M Klein and Zhiguo Wang}, doi = {10.3758/s13423-016-1066-x}, year = {2016}, date = {2016-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {23}, number = {6}, pages = {1860--1866}, publisher = {Psychonomic Bulletin & Review}, abstract = {Neurons in various brain regions predictively respond to stimuli that will be brought to their receptive fields by an impending eye movement. This neural mechanism, known as predictive remapping, has been suggested to underlie spatial constancy. Inhibition of return (IOR) is a bias against recently attended locations. The present study examined whether predictive remapping is a mechanism underlying IOR effects observed in environmental coordinates. The participant made saccades to a peripheral location after an IOR effect had been elicited by an onset cue and discriminated a target presented around the time of saccade onset. Immediately before the required saccade, IOR emerged at the retinal locus that would be brought to the cued location. A second task in which the participant maintained fixation during the entire trial ruled out the possibility that this IOR effect was simply the spillover of IOR from the cued location. These findings, for the first time, provide direct behavioral evidence that predictive remapping is a mechanism underlying environmental IOR.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Neurons in various brain regions predictively respond to stimuli that will be brought to their receptive fields by an impending eye movement. This neural mechanism, known as predictive remapping, has been suggested to underlie spatial constancy. Inhibition of return (IOR) is a bias against recently attended locations. The present study examined whether predictive remapping is a mechanism underlying IOR effects observed in environmental coordinates. The participant made saccades to a peripheral location after an IOR effect had been elicited by an onset cue and discriminated a target presented around the time of saccade onset. Immediately before the required saccade, IOR emerged at the retinal locus that would be brought to the cued location. A second task in which the participant maintained fixation during the entire trial ruled out the possibility that this IOR effect was simply the spillover of IOR from the cued location. These findings, for the first time, provide direct behavioral evidence that predictive remapping is a mechanism underlying environmental IOR. |
Ming Yan; Jinger Pan; Wenshuo Chang; Reinhold Kliegl Read sideways or not: Vertical saccade advantage in sentence reading Journal Article Reading and Writing, 32 (8), pp. 1911–1926, 2019. @article{Yan2019a, title = {Read sideways or not: Vertical saccade advantage in sentence reading}, author = {Ming Yan and Jinger Pan and Wenshuo Chang and Reinhold Kliegl}, doi = {10.1007/s11145-018-9930-x}, year = {2019}, date = {2019-01-01}, journal = {Reading and Writing}, volume = {32}, number = {8}, pages = {1911--1926}, publisher = {Springer Netherlands}, abstract = {During the reading of alphabetic scripts and scene perception, eye movements are programmed more efficiently in horizontal direction than in vertical direction. We propose that such a directional advantage may be due to the overwhelming reading experience in the horizontal direction. Writing orientation is highly flexible for Traditional Chinese sentences. We compare horizontal and vertical eye movements during reading of such sentences and provide first evidence of a text-orientation effect on eye-movement control during reading. In addition to equivalent reading speed in both directions, more fine-grained analyses demonstrate a tradeoff between longer fixation durations and better fixation locations in vertical than in horizontal reading. Our results suggest that with extensive reading experience, Traditional Chinese readers can generate saccades more efficiently in vertical than in horizontal direction.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During the reading of alphabetic scripts and scene perception, eye movements are programmed more efficiently in horizontal direction than in vertical direction. We propose that such a directional advantage may be due to the overwhelming reading experience in the horizontal direction. Writing orientation is highly flexible for Traditional Chinese sentences. We compare horizontal and vertical eye movements during reading of such sentences and provide first evidence of a text-orientation effect on eye-movement control during reading. In addition to equivalent reading speed in both directions, more fine-grained analyses demonstrate a tradeoff between longer fixation durations and better fixation locations in vertical than in horizontal reading. Our results suggest that with extensive reading experience, Traditional Chinese readers can generate saccades more efficiently in vertical than in horizontal direction. |
Guoli Yan; Zebo Lan; Zhu Meng; Yingchao Wang; Valerie Benson Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study Journal Article Scientific Studies of Reading, pp. 1–17, 2020. @article{Yan2020, title = {Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study}, author = {Guoli Yan and Zebo Lan and Zhu Meng and Yingchao Wang and Valerie Benson}, doi = {10.1080/10888438.2020.1778000}, year = {2020}, date = {2020-01-01}, journal = {Scientific Studies of Reading}, pages = {1--17}, publisher = {Routledge}, abstract = {Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students. |
Qing Yang; Marine Vernet; Christophe Orssaud; Pierre Bonfils; Alain Londero; Zoï Kapoula Central crosstalk for somatic tinnitus: Abnormal vergence eye movements Journal Article PLoS ONE, 5 (7), pp. e11845, 2010. @article{Yang2010, title = {Central crosstalk for somatic tinnitus: Abnormal vergence eye movements}, author = {Qing Yang and Marine Vernet and Christophe Orssaud and Pierre Bonfils and Alain Londero and Zo{ï} Kapoula}, doi = {10.1371/journal.pone.0011845}, year = {2010}, date = {2010-01-01}, journal = {PLoS ONE}, volume = {5}, number = {7}, pages = {e11845}, abstract = {Background: Frequent oculomotricity problems with orthoptic testing were reported in patients with tinnitus. This study examines with objective recordings vergence eye movements in somatic tinnitus patients with the ability to modify their subjective tinnitus percept by various movements, such as jaw, neck, eye movements or skin pressure. Methods: Vergence eye movements were recorded with the Eyelink II video system in 15 (23–63 years) control adults and 19 (36–62 years) subjects with somatic tinnitus. Findings: 1) Accuracy of divergence but not of convergence was lower in subjects with somatic tinnitus than in control subjects. 2) Vergence duration was longer and peak velocity was lower in subjects with somatic tinnitus than in control subjects. 3) The number of embedded saccades and the amplitude of saccades coinciding with the peak velocity of vergence were higher for tinnitus subjects. Yet, saccades did not increase peak velocity of vergence for tinnitus subjects, but they did so for controls. 4) In contrast, there was no significant difference of vergence latency between these two groups. Interpretation: The results suggest dysfunction of vergence areas involving cortical-brainstem-cerebellar circuits. We hypothesize that central auditory dysfunction related to tinnitus percept could trigger mild cerebellar-brainstem dysfunction or that tinnitus and vergence dysfunction could both be manifestations of mild cortical-brainstem-cerebellar syndrome reflecting abnormal cross-modality interactions between vergence eye movements and auditory signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Frequent oculomotricity problems with orthoptic testing were reported in patients with tinnitus. This study examines with objective recordings vergence eye movements in somatic tinnitus patients with the ability to modify their subjective tinnitus percept by various movements, such as jaw, neck, eye movements or skin pressure. Methods: Vergence eye movements were recorded with the Eyelink II video system in 15 (23–63 years) control adults and 19 (36–62 years) subjects with somatic tinnitus. Findings: 1) Accuracy of divergence but not of convergence was lower in subjects with somatic tinnitus than in control subjects. 2) Vergence duration was longer and peak velocity was lower in subjects with somatic tinnitus than in control subjects. 3) The number of embedded saccades and the amplitude of saccades coinciding with the peak velocity of vergence were higher for tinnitus subjects. Yet, saccades did not increase peak velocity of vergence for tinnitus subjects, but they did so for controls. 4) In contrast, there was no significant difference of vergence latency between these two groups. Interpretation: The results suggest dysfunction of vergence areas involving cortical-brainstem-cerebellar circuits. 
We hypothesize that central auditory dysfunction related to tinnitus percept could trigger mild cerebellar-brainstem dysfunction or that tinnitus and vergence dysfunction could both be manifestations of mild cortical-brainstem-cerebellar syndrome reflecting abnormal cross-modality interactions between vergence eye movements and auditory signals. |
Beier Yao; Sebastiaan F W Neggers; Martin Rolfs; Lara Rösler; Ilse A Thompson; Helene J Hopman; Livon Ghermezi; René S Kahn; Katharine N Thakkar Structural thalamofrontal hypoconnectivity is related to oculomotor corollary discharge dysfunction in schizophrenia Journal Article Journal of Neuroscience, 39 (11), pp. 2102–2113, 2019. @article{Yao2019, title = {Structural thalamofrontal hypoconnectivity is related to oculomotor corollary discharge dysfunction in schizophrenia}, author = {Beier Yao and Sebastiaan F W Neggers and Martin Rolfs and Lara Rösler and Ilse A Thompson and Helene J Hopman and Livon Ghermezi and René S Kahn and Katharine N Thakkar}, doi = {10.1523/JNEUROSCI.1473-18.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {11}, pages = {2102--2113}, abstract = {By predicting sensory consequences of actions, humans can distinguish self-generated sensory inputs from those that are elicited externally. This is one mechanism by which we achieve a subjective sense of agency over our actions. Corollary discharge (CD) signals-"copies" of motor signals sent to sensory areas-permit such predictions, and CD abnormalities are a hypothesized mechanism for the agency disruptions in schizophrenia that characterize a subset of symptoms. Indeed, behavioral evidence of altered CD, including in the oculomotor system, has been observed in schizophrenia patients. A pathway projecting from the superior colliculus to the frontal eye fields (FEFs) via the mediodorsal thalamus (MD) conveys oculomotor CD associated with saccadic eye movements in nonhuman primates. This animal work provides a promising translational framework in which to investigate CD abnormalities in clinical populations. In the current study, we examined whether structural connectivity of this MD-FEF pathway relates to oculomotor CD functioning in schizophrenia. Twenty-two schizophrenia patients and 24 healthy control participants of both sexes underwent diffusion tensor imaging, and a large subset performed a trans-saccadic perceptual task that yields measures of CD. Using probabilistic tractography, we identified anatomical connections between FEF and MD and extracted indices of microstructural integrity. Patients exhibited compromised microstructural integrity in the MD-FEF pathway, which was correlated with greater oculomotor CD abnormalities and more severe psychotic symptoms. These data reinforce the role of the MD-FEF pathway in transmitting oculomotor CD signals and suggest that disturbances in this pathway may relate to psychotic symptom manifestation in patients.}, keywords = {}, pubstate = {published}, tppubtype = {article} } By predicting sensory consequences of actions, humans can distinguish self-generated sensory inputs from those that are elicited externally. This is one mechanism by which we achieve a subjective sense of agency over our actions. Corollary discharge (CD) signals-"copies" of motor signals sent to sensory areas-permit such predictions, and CD abnormalities are a hypothesized mechanism for the agency disruptions in schizophrenia that characterize a subset of symptoms. Indeed, behavioral evidence of altered CD, including in the oculomotor system, has been observed in schizophrenia patients. A pathway projecting from the superior colliculus to the frontal eye fields (FEFs) via the mediodorsal thalamus (MD) conveys oculomotor CD associated with saccadic eye movements in nonhuman primates. 
This animal work provides a promising translational framework in which to investigate CD abnormalities in clinical populations. In the current study, we examined whether structural connectivity of this MD-FEF pathway relates to oculomotor CD functioning in schizophrenia. Twenty-two schizophrenia patients and 24 healthy control participants of both sexes underwent diffusion tensor imaging, and a large subset performed a trans-saccadic perceptual task that yields measures of CD. Using probabilistic tractography, we identified anatomical connections between FEF and MD and extracted indices of microstructural integrity. Patients exhibited compromised microstructural integrity in the MD-FEF pathway, which was correlated with greater oculomotor CD abnormalities and more severe psychotic symptoms. These data reinforce the role of the MD-FEF pathway in transmitting oculomotor CD signals and suggest that disturbances in this pathway may relate to psychotic symptom manifestation in patients. |
Amit Yashar; Xiuyun Wu; Jiageng Chen; Marisa Carrasco Crowding and binding: Not all feature dimensions behave in the same way Journal Article Psychological Science, 30 (10), pp. 1533–1546, 2019. @article{Yashar2019, title = {Crowding and binding: Not all feature dimensions behave in the same way}, author = {Amit Yashar and Xiuyun Wu and Jiageng Chen and Marisa Carrasco}, doi = {10.1177/0956797619870779}, year = {2019}, date = {2019-01-01}, journal = {Psychological Science}, volume = {30}, number = {10}, pages = {1533--1546}, abstract = {Humans often fail to identify a target because of nearby flankers. The nature and stages at which this crowding occurs are unclear, and whether crowding operates via a common mechanism across visual dimensions is unknown. Using a dual-estimation report (N = 42), we quantitatively assessed the processing of features alone and in conjunction with another feature both within and between dimensions. Under crowding, observers misreported colors and orientations (i.e., reported a flanker value instead of the target's value) but averaged the target's and flankers' spatial frequencies (SFs). Interestingly, whereas orientation and color errors were independent, orientation and SF errors were interdependent. These qualitative differences of errors across dimensions revealed a tight link between crowding and feature binding, which is contingent on the type of feature dimension. These results and a computational model suggest that crowding and misbinding are due to pooling across a joint coding of orientations and SFs but not of colors.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Humans often fail to identify a target because of nearby flankers. The nature and stages at which this crowding occurs are unclear, and whether crowding operates via a common mechanism across visual dimensions is unknown. Using a dual-estimation report (N = 42), we quantitatively assessed the processing of features alone and in conjunction with another feature both within and between dimensions. Under crowding, observers misreported colors and orientations (i.e., reported a flanker value instead of the target's value) but averaged the target's and flankers' spatial frequencies (SFs). Interestingly, whereas orientation and color errors were independent, orientation and SF errors were interdependent. These qualitative differences of errors across dimensions revealed a tight link between crowding and feature binding, which is contingent on the type of feature dimension. These results and a computational model suggest that crowding and misbinding are due to pooling across a joint coding of orientations and SFs but not of colors. |
Rachel Yep; Stephen Soncin; Donald C Brien; Brian C Coe; Alina Marin; Douglas P Munoz Using an emotional saccade task to characterize executive functioning and emotion processing in attention-deficit hyperactivity disorder and bipolar disorder Journal Article Brain and Cognition, 124 , pp. 1–13, 2018. @article{Yep2018, title = {Using an emotional saccade task to characterize executive functioning and emotion processing in attention-deficit hyperactivity disorder and bipolar disorder}, author = {Rachel Yep and Stephen Soncin and Donald C Brien and Brian C Coe and Alina Marin and Douglas P Munoz}, doi = {10.1016/j.bandc.2018.04.002}, year = {2018}, date = {2018-01-01}, journal = {Brain and Cognition}, volume = {124}, pages = {1--13}, publisher = {Elsevier}, abstract = {Despite distinct diagnostic criteria, attention-deficit hyperactivity disorder (ADHD) and bipolar disorder (BD) share cognitive and emotion processing deficits that complicate diagnoses. The goal of this study was to use an emotional saccade task to characterize executive functioning and emotion processing in adult ADHD and BD. Participants (21 control, 20 ADHD, 20 BD) performed an interleaved pro/antisaccade task (look toward vs. look away from a visual target, respectively) in which the sex of emotional face stimuli acted as the cue to perform either the pro- or antisaccade. Both patient groups made more direction (erroneous prosaccades on antisaccade trials) and anticipatory (saccades made before cue processing) errors than controls. Controls exhibited lower microsaccade rates preceding correct anti- vs. prosaccade initiation, but this task-related modulation was absent in both patient groups. Regarding emotion processing, the ADHD group performed worse than controls on neutral face trials, while the BD group performed worse than controls on trials presenting faces of all valence. These findings support the role of fronto-striatal circuitry in mediating response inhibition deficits in both ADHD and BD, and suggest that such deficits are exacerbated in BD during emotion processing, presumably via dysregulated limbic system circuitry involving the anterior cingulate and orbitofrontal cortex.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite distinct diagnostic criteria, attention-deficit hyperactivity disorder (ADHD) and bipolar disorder (BD) share cognitive and emotion processing deficits that complicate diagnoses. The goal of this study was to use an emotional saccade task to characterize executive functioning and emotion processing in adult ADHD and BD. Participants (21 control, 20 ADHD, 20 BD) performed an interleaved pro/antisaccade task (look toward vs. look away from a visual target, respectively) in which the sex of emotional face stimuli acted as the cue to perform either the pro- or antisaccade. Both patient groups made more direction (erroneous prosaccades on antisaccade trials) and anticipatory (saccades made before cue processing) errors than controls. Controls exhibited lower microsaccade rates preceding correct anti- vs. prosaccade initiation, but this task-related modulation was absent in both patient groups. Regarding emotion processing, the ADHD group performed worse than controls on neutral face trials, while the BD group performed worse than controls on trials presenting faces of all valence. These findings support the role of fronto-striatal circuitry in mediating response inhibition deficits in both ADHD and BD, and suggest that such deficits are exacerbated in BD during emotion processing, presumably via dysregulated limbic system circuitry involving the anterior cingulate and orbitofrontal cortex. |
Lok-Kin Yeung; Rosanna K Olsen; Bryan Hong; Valentina Mihajlovic; Maria C D'Angelo; Arber Kacollja; Jennifer D Ryan; Morgan D Barense Object-in-place memory predicted by anterolateral entorhinal cortex and parahippocampal cortex in older adults Journal Article Journal of Cognitive Neuroscience, 31 (5), pp. 711–729, 2019. @article{Yeung2019, title = {Object-in-place memory predicted by anterolateral entorhinal cortex and parahippocampal cortex in older adults}, author = {Lok-Kin Yeung and Rosanna K Olsen and Bryan Hong and Valentina Mihajlovic and Maria C D'Angelo and Arber Kacollja and Jennifer D Ryan and Morgan D Barense}, doi = {10.1162/jocn}, year = {2019}, date = {2019-01-01}, journal = {Journal of Cognitive Neuroscience}, volume = {31}, number = {5}, pages = {711--729}, abstract = {The lateral portion of the entorhinal cortex is one of the first brain regions affected by tau pathology, an important biomarker for Alzheimer disease. Improving our understanding of this region's cognitive role may help identify better cognitive tests for early detection of Alzheimer disease. Based on its functional connections, we tested the idea that the human anterolateral entorhinal cortex (alERC) may play a role in integrating spatial information into object representations. We recently demonstrated that the volume of the alERC was related to processing the spatial relationships of the features within an object [Yeung, L. K., Olsen, R. K., Bild-Enkin, H. E. P., D'Angelo, M. C., Kacollja, A., McQuiggan, D. A., et al. Anterolateral entorhinal cortex volume predicted by altered intra-item configural processing. Journal of Neuroscience, 37, 5527–5538, 2017]. In this study, we investigated whether the human alERC might also play a role in processing the spatial relationships between an object and its environment using an eye-tracking task that assessed visual fixations to a critical object within a scene. Guided by rodent work, we measured both object-in-place memory, the association of an object with a given context [Wilson, D. I., Langston, R. F., Schlesiger, M. I., Wagner, M., Watanabe, S., & Ainge, J. A. Lateral entorhinal cortex is critical for novel object-context recognition. Hippocampus, 23, 352–366, 2013], and object-trace memory, the memory for the former location of objects [Tsao, A., Moser, M. B., & Moser, E. I. Traces of experience in the lateral entorhinal cortex. Current Biology, 23, 399–405, 2013]. In a group of older adults with varying stages of brain atrophy and cognitive decline, we found that the volume of the alERC and the volume of the parahippocampal cortex selectively predicted object-in-place memory, but not object-trace memory. These results provide support for the notion that the alERC may integrate spatial information into object representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The lateral portion of the entorhinal cortex is one of the first brain regions affected by tau pathology, an important biomarker for Alzheimer disease. Improving our understanding of this region's cognitive role may help identify better cognitive tests for early detection of Alzheimer disease. Based on its functional connections, we tested the idea that the human anterolateral entorhinal cortex (alERC) may play a role in integrating spatial information into object representations. We recently demonstrated that the volume of the alERC was related to processing the spatial relationships of the features within an object [Yeung, L. K., Olsen, R. K., Bild-Enkin, H. E.
P., D'Angelo, M. C., Kacollja, A., McQuiggan, D. A., et al. Anterolateral entorhinal cortex volume predicted by altered intra-item configural processing. Journal of Neuroscience, 37, 5527–5538, 2017]. In this study, we investigated whether the human alERC might also play a role in processing the spatial relationships between an object and its environment using an eye-tracking task that assessed visual fixations to a critical object within a scene. Guided by rodent work, we measured both object-in-place memory, the association of an object with a given context [Wilson, D. I., Langston, R. F., Schlesiger, M. I., Wagner, M., Watanabe, S., & Ainge, J. A. Lateral entorhinal cortex is critical for novel object-context recognition. Hippocampus, 23, 352–366, 2013], and object-trace memory, the memory for the former location of objects [Tsao, A., Moser, M. B., & Moser, E. I. Traces of experience in the lateral entorhinal cortex. Current Biology, 23, 399–405, 2013]. In a group of older adults with varying stages of brain atrophy and cognitive decline, we found that the volume of the alERC and the volume of the parahippocampal cortex selectively predicted object-in-place memory, but not object-trace memory. These results provide support for the notion that the alERC may integrate spatial information into object representations. |
Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita Attentional shifts by gaze direction in voluntary orienting: evidence from a microsaccade study Journal Article Experimental Brain Research, 223 (2), pp. 291–300, 2012. @article{Yokoyama2012, title = {Attentional shifts by gaze direction in voluntary orienting: evidence from a microsaccade study}, author = {Takemasa Yokoyama and Yasuki Noguchi and Shinichi Kita}, doi = {10.1007/s00221-012-3260-z}, year = {2012}, date = {2012-01-01}, journal = {Experimental Brain Research}, volume = {223}, number = {2}, pages = {291--300}, abstract = {Shifts in spatial attention can be induced by the gaze direction of another. However, it is unclear whether gaze direction influences the allocation of attention by reflexive or voluntary orienting. The present study was designed to examine which type of attentional orienting is elicited by gaze direction. We conducted two experiments to answer this question. In Experiment 1, we used a modified Posner paradigm with gaze cues and measured microsaccades to index the allocation of attention. We found that microsaccade direction followed cue direction between 200 and 400 ms after gaze cues were presented. This is consistent with the latencies observed in other microsaccade studies in which voluntary orienting is manipulated, suggesting that gaze direction elicits voluntary orienting. However, Experiment 1 did not separate voluntary and reflexive orienting directionally, so in Experiment 2, we used an anticue task in which cue direction (direction to allocate attention) was the opposite of gaze direction (direction of gaze in depicted face). The results in Experiment 2 were consistent with those from Experiment 1. Microsaccade direction followed the cue direction, not gaze direction. Taken together, these results indicate that the shift in spatial attention elicited by gaze direction is voluntary orienting.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Shifts in spatial attention can be induced by the gaze direction of another. However, it is unclear whether gaze direction influences the allocation of attention by reflexive or voluntary orienting. The present study was designed to examine which type of attentional orienting is elicited by gaze direction. We conducted two experiments to answer this question. In Experiment 1, we used a modified Posner paradigm with gaze cues and measured microsaccades to index the allocation of attention. We found that microsaccade direction followed cue direction between 200 and 400 ms after gaze cues were presented. This is consistent with the latencies observed in other microsaccade studies in which voluntary orienting is manipulated, suggesting that gaze direction elicits voluntary orienting. However, Experiment 1 did not separate voluntary and reflexive orienting directionally, so in Experiment 2, we used an anticue task in which cue direction (direction to allocate attention) was the opposite of gaze direction (direction of gaze in depicted face). The results in Experiment 2 were consistent with those from Experiment 1. Microsaccade direction followed the cue direction, not gaze direction. Taken together, these results indicate that the shift in spatial attention elicited by gaze direction is voluntary orienting. |
Keir X X Yong; Timothy J Shakespeare; Dave Cash; Susie M D Henley; Jennifer M Nicholas; Gerard R Ridgway; Hannah L Golden; Elizabeth K Warrington; Amelia M Carton; Diego Kaski; Jonathan M Schott; Jason D Warren; Sebastian J Crutch Prominent effects and neural correlates of visual crowding in a neurodegenerative disease population Journal Article Brain, 137 (12), pp. 3284–3299, 2014. @article{Yong2014, title = {Prominent effects and neural correlates of visual crowding in a neurodegenerative disease population}, author = {Keir X X Yong and Timothy J Shakespeare and Dave Cash and Susie M D Henley and Jennifer M Nicholas and Gerard R Ridgway and Hannah L Golden and Elizabeth K Warrington and Amelia M Carton and Diego Kaski and Jonathan M Schott and Jason D Warren and Sebastian J Crutch}, doi = {10.1093/brain/awu293}, year = {2014}, date = {2014-01-01}, journal = {Brain}, volume = {137}, number = {12}, pages = {3284--3299}, abstract = {Crowding is a breakdown in the ability to identify objects in clutter, and is a major constraint on object recognition. Crowding particularly impairs object perception in peripheral, amblyopic and possibly developing vision. Here we argue that crowding is also a critical factor limiting object perception in central vision of individuals with neurodegeneration of the occipital cortices. In the current study, individuals with posterior cortical atrophy (n=26), typical Alzheimer's disease (n=17) and healthy control subjects (n=14) completed centrally-presented tests of letter identification under six different flanking conditions (unflanked, and with letter, shape, number, same polarity and reverse polarity flankers) with two different target-flanker spacings (condensed, spaced). Patients with posterior cortical atrophy were significantly less accurate and slower to identify targets in the condensed than spaced condition even when the target letters were surrounded by flankers of a different category. Importantly, this spacing effect was observed for same, but not reverse, polarity flankers. The difference in accuracy between spaced and condensed stimuli was significantly associated with lower grey matter volume in the right collateral sulcus, in a region lying between the fusiform and lingual gyri. Detailed error analysis also revealed that similarity between the error response and the averaged target and flanker stimuli (but not individual target or flanker stimuli) was a significant predictor of error rate, more consistent with averaging than substitution accounts of crowding. Our findings suggest that crowding in posterior cortical atrophy can be regarded as a pre-attentive process that uses averaging to regularize the pathologically noisy representation of letter feature position in central vision. These results also help to clarify the cortical localization of feature integration components of crowding. More broadly, we suggest that posterior cortical atrophy provides a neurodegenerative disease model for exploring the basis of crowding. These data have significant implications for patients with, or who will go on to develop, dementia-related visual impairment, in whom acquired excessive crowding likely contributes to deficits in word, object, face and scene perception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Crowding is a breakdown in the ability to identify objects in clutter, and is a major constraint on object recognition. Crowding particularly impairs object perception in peripheral, amblyopic and possibly developing vision. 
Here we argue that crowding is also a critical factor limiting object perception in central vision of individuals with neurodegeneration of the occipital cortices. In the current study, individuals with posterior cortical atrophy (n=26), typical Alzheimer's disease (n=17) and healthy control subjects (n=14) completed centrally-presented tests of letter identification under six different flanking conditions (unflanked, and with letter, shape, number, same polarity and reverse polarity flankers) with two different target-flanker spacings (condensed, spaced). Patients with posterior cortical atrophy were significantly less accurate and slower to identify targets in the condensed than spaced condition even when the target letters were surrounded by flankers of a different category. Importantly, this spacing effect was observed for same, but not reverse, polarity flankers. The difference in accuracy between spaced and condensed stimuli was significantly associated with lower grey matter volume in the right collateral sulcus, in a region lying between the fusiform and lingual gyri. Detailed error analysis also revealed that similarity between the error response and the averaged target and flanker stimuli (but not individual target or flanker stimuli) was a significant predictor of error rate, more consistent with averaging than substitution accounts of crowding. Our findings suggest that crowding in posterior cortical atrophy can be regarded as a pre-attentive process that uses averaging to regularize the pathologically noisy representation of letter feature position in central vision. These results also help to clarify the cortical localization of feature integration components of crowding. More broadly, we suggest that posterior cortical atrophy provides a neurodegenerative disease model for exploring the basis of crowding. These data have significant implications for patients with, or who will go on to develop, dementia-related visual impairment, in whom acquired excessive crowding likely contributes to deficits in word, object, face and scene perception. |
Keir X X Yong; Kishan Rajdev; Timothy J Shakespeare; Alexander P Leff; Sebastian J Crutch Facilitating text reading in posterior cortical atrophy Journal Article Neurology, 85 (4), pp. 339–348, 2015. @article{Yong2015, title = {Facilitating text reading in posterior cortical atrophy}, author = {Keir X X Yong and Kishan Rajdev and Timothy J Shakespeare and Alexander P Leff and Sebastian J Crutch}, doi = {10.1212/WNL.0000000000001782}, year = {2015}, date = {2015-01-01}, journal = {Neurology}, volume = {85}, number = {4}, pages = {339--348}, abstract = {OBJECTIVE: We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. METHODS: Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). RESULTS: Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%-270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. CONCLUSIONS: These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. CLASSIFICATION OF EVIDENCE: This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy.}, keywords = {}, pubstate = {published}, tppubtype = {article} } OBJECTIVE: We report (1) the quantitative investigation of text reading in posterior cortical atrophy (PCA), and (2) the effects of 2 novel software-based reading aids that result in dramatic improvements in the reading ability of patients with PCA. METHODS: Reading performance, eye movements, and fixations were assessed in patients with PCA and typical Alzheimer disease and in healthy controls (experiment 1). Two reading aids (single- and double-word) were evaluated based on the notion that reducing the spatial and oculomotor demands of text reading might support reading in PCA (experiment 2). RESULTS: Mean reading accuracy in patients with PCA was significantly worse (57%) compared with both patients with typical Alzheimer disease (98%) and healthy controls (99%); spatial aspects of passages were the primary determinants of text reading ability in PCA. Both aids led to considerable gains in reading accuracy (PCA mean reading accuracy: single-word reading aid = 96%; individual patient improvement range: 6%-270%) and self-rated measures of reading. Data suggest a greater efficiency of fixations and eye movements under the single-word reading aid in patients with PCA. 
CONCLUSIONS: These findings demonstrate how neurologic characterization of a neurodegenerative syndrome (PCA) and detailed cognitive analysis of an important everyday skill (reading) can combine to yield aids capable of supporting important everyday functional abilities. CLASSIFICATION OF EVIDENCE: This study provides Class III evidence that for patients with PCA, 2 software-based reading aids (single-word and double-word) improve reading accuracy. |
Tehrim Yoon; Afareen Jaleel; Alaa A Ahmed; Reza Shadmehr Saccade vigor and the subjective economic value of visual stimuli Journal Article Journal of Neurophysiology, 123 (6), pp. 2161–2172, 2020. @article{Yoon2020, title = {Saccade vigor and the subjective economic value of visual stimuli}, author = {Tehrim Yoon and Afareen Jaleel and Alaa A Ahmed and Reza Shadmehr}, doi = {10.1152/jn.00700.2019}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neurophysiology}, volume = {123}, number = {6}, pages = {2161--2172}, abstract = {Decisions are made based on the subjective value that the brain assigns to options. However, subjective value is a mathematical construct that cannot be measured directly, but rather is inferred from choices. Recent results have demonstrated that reaction time, amplitude, and velocity of movements are modulated by reward, raising the possibility that there is a link between how the brain evaluates an option and how it controls movements toward that option. Here, we asked people to choose among risky options represented by abstract stimuli, some associated with gain (points in a game), and others with loss. From their choices we estimated the subjective value that they assigned to each stimulus. In probe trials, a single stimulus appeared at center, instructing subjects to make a saccade to a peripheral target. We found that the reaction time, peak velocity, and amplitude of the peripherally directed saccade varied roughly linearly with the subjective value that the participant had assigned to the central stimulus: reaction time was shorter, velocity was higher, and amplitude was larger for stimuli that the participant valued more. Naturally, participants differed in how much they valued a given stimulus. Remarkably, those who valued a stimulus more, as evidenced by their choices in decision trials, tended to move with shorter reaction time and greater velocity in response to that stimulus in probe trials. Overall, the reaction time of the saccade in response to a stimulus partly predicted the subjective value that the brain assigned to that stimulus.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Decisions are made based on the subjective value that the brain assigns to options. However, subjective value is a mathematical construct that cannot be measured directly, but rather is inferred from choices. Recent results have demonstrated that reaction time, amplitude, and velocity of movements are modulated by reward, raising the possibility that there is a link between how the brain evaluates an option and how it controls movements toward that option. Here, we asked people to choose among risky options represented by abstract stimuli, some associated with gain (points in a game), and others with loss. From their choices we estimated the subjective value that they assigned to each stimulus. In probe trials, a single stimulus appeared at center, instructing subjects to make a saccade to a peripheral target. We found that the reaction time, peak velocity, and amplitude of the peripherally directed saccade varied roughly linearly with the subjective value that the participant had assigned to the central stimulus: reaction time was shorter, velocity was higher, and amplitude was larger for stimuli that the participant valued more. Naturally, participants differed in how much they valued a given stimulus. 
Remarkably, those who valued a stimulus more, as evidenced by their choices in decision trials, tended to move with shorter reaction time and greater velocity in response to that stimulus in probe trials. Overall, the reaction time of the saccade in response to a stimulus partly predicted the subjective value that the brain assigned to that stimulus. |
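The abstract describes two analysis steps: subjective values are inferred from risky choices, and saccade reaction time is then related (roughly linearly) to those values. The sketch below illustrates one generic way to do this with a softmax choice model on simulated data; the model, optimizer, and numbers are assumptions, not the authors' procedure.

```python
# Hedged sketch: infer per-stimulus subjective values from two-alternative choices
# with a softmax (logistic) choice model, then relate them to saccade reaction time.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_stim, n_trials = 8, 2000
true_value = np.linspace(-1.0, 1.0, n_stim)

# Simulate decision trials: choose stimulus a over b with p = sigmoid(v_a - v_b).
a = rng.integers(0, n_stim, n_trials)
b = (a + rng.integers(1, n_stim, n_trials)) % n_stim
p_choose_a = 1.0 / (1.0 + np.exp(-(true_value[a] - true_value[b])))
chose_a = rng.random(n_trials) < p_choose_a

def neg_log_lik(v):
    v = v - v.mean()                      # values are only defined up to a constant
    dv = v[a] - v[b]
    p = np.clip(1.0 / (1.0 + np.exp(-dv)), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(chose_a, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, np.zeros(n_stim), method="BFGS")
v_hat = fit.x - fit.x.mean()

# Simulate probe-trial reaction times that shorten with value, then fit a line.
rt = 220.0 - 25.0 * true_value[a] + rng.normal(0, 15, n_trials)
slope, intercept = np.polyfit(v_hat[a], rt, 1)
print(f"estimated values: {np.round(v_hat, 2)}")
print(f"RT vs value slope: {slope:.1f} ms per value unit (negative = faster for higher value)")
```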
Kyoko Yoshida; Yasuhiro Go; Itaru Kushima; Atsushi Toyoda; Asao Fujiyama; Hiroo Imai; Nobuhito Saito; Atsushi Iriki; Norio Ozaki; Masaki Isoda Single-neuron and genetic correlates of autistic behavior in macaque Journal Article Science Advances, 2 , pp. e1600558, 2016. @article{Yoshida2016, title = {Single-neuron and genetic correlates of autistic behavior in macaque}, author = {Kyoko Yoshida and Yasuhiro Go and Itaru Kushima and Atsushi Toyoda and Asao Fujiyama and Hiroo Imai and Nobuhito Saito and Atsushi Iriki and Norio Ozaki and Masaki Isoda}, doi = {10.1126/sciadv.1600558}, year = {2016}, date = {2016-01-01}, journal = {Science Advances}, volume = {2}, pages = {e1600558}, abstract = {Atypical neurodevelopment in autism spectrum disorder is a mystery, defying explanation despite increasing attention. We report on a Japanese macaque that spontaneously exhibited autistic traits, namely, impaired social ability as well as restricted and repetitive behaviors, along with our single-neuron and genomic analyses. Its social ability was measured in a turn-taking task, where two monkeys monitor each other's actions for adaptive behavioral planning. In its brain, the medial frontal neurons responding to others' actions, abundant in the controls, were almost nonexistent. In its genes, whole-exome sequencing and copy number variation analyses identified rare coding variants linked to human neuropsychiatric disorders in 5-hydroxytryptamine (serotonin) receptor 2C (HTR2C) and adenosine triphosphate (ATP)–binding cassette subfamily A13 (ABCA13). This combination of systems neuroscience and cognitive genomics in macaques suggests a new, phenotype-to-genotype approach to studying mental disorders.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Atypical neurodevelopment in autism spectrum disorder is a mystery, defying explanation despite increasing attention. We report on a Japanese macaque that spontaneously exhibited autistic traits, namely, impaired social ability as well as restricted and repetitive behaviors, along with our single-neuron and genomic analyses. Its social ability was measured in a turn-taking task, where two monkeys monitor each other's actions for adaptive behavioral planning. In its brain, the medial frontal neurons responding to others' actions, abundant in the controls, were almost nonexistent. In its genes, whole-exome sequencing and copy number variation analyses identified rare coding variants linked to human neuropsychiatric disorders in 5-hydroxytryptamine (serotonin) receptor 2C (HTR2C) and adenosine triphosphate (ATP)–binding cassette subfamily A13 (ABCA13). This combination of systems neuroscience and cognitive genomics in macaques suggests a new, phenotype-to-genotype approach to studying mental disorders. |
Gongchen Yu; Mingpo Yang; Peng Yu; Michael C Dorris Time compression of visual perception around microsaccades Journal Article Journal of Neurophysiology, 118 (1), pp. 416–424, 2017. @article{Yu2017a, title = {Time compression of visual perception around microsaccades}, author = {Gongchen Yu and Mingpo Yang and Peng Yu and Michael C Dorris}, doi = {10.1152/jn.00029.2017}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neurophysiology}, volume = {118}, number = {1}, pages = {416--424}, abstract = {Even during fixation, our eyes are in constant motion. For example, microsaccades are small (typically <1°) eye movements that occur 1–3 times/second. Despite their tiny and transient nature, our percept of visual space is compressed prior to microsaccades (Hafed 2013). As visual space and time are interconnected at both the physical and physiological levels, we asked whether microsaccades also affect the temporal aspects of visual perception. Here we demonstrate that the perceived interval between transient visual stimuli was compressed if accompanied by microsaccades. This temporal compression extended approximately ±200 ms from microsaccade occurrence, and depending on their particular pattern, multiple microsaccades further enhanced or counteracted this temporal compression. The compression of time surrounding microsaccades resembles that associated with more voluntary macrosaccades (Morrone et al. 2005). Our results suggest common neural processes underlying both saccade and microsaccade misperceptions, mediated, likely, through extra-retinal mechanisms.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Even during fixation, our eyes are in constant motion. For example, microsaccades are small (typically <1°) eye movements that occur 1–3 times/second. Despite their tiny and transient nature, our percept of visual space is compressed prior to microsaccades (Hafed 2013). As visual space and time are interconnected at both the physical and physiological levels, we asked whether microsaccades also affect the temporal aspects of visual perception. Here we demonstrate that the perceived interval between transient visual stimuli was compressed if accompanied by microsaccades. This temporal compression extended approximately ±200 ms from microsaccade occurrence, and depending on their particular pattern, multiple microsaccades further enhanced or counteracted this temporal compression. The compression of time surrounding microsaccades resembles that associated with more voluntary macrosaccades (Morrone et al. 2005). Our results suggest common neural processes underlying both saccade and microsaccade misperceptions, mediated, likely, through extra-retinal mechanisms. |
Shulin Yue; Zhenlan Jin; Chenggui Fan; Qian Zhang; Ling Li Interference between smooth pursuit and color working memory Journal Article Journal of Eye Movement Research, 10 (3), pp. 1–10, 2017. @article{Yue2017, title = {Interference between smooth pursuit and color working memory}, author = {Shulin Yue and Zhenlan Jin and Chenggui Fan and Qian Zhang and Ling Li}, doi = {10.16910/jemr.10.3.6}, year = {2017}, date = {2017-01-01}, journal = {Journal of Eye Movement Research}, volume = {10}, number = {3}, pages = {1--10}, abstract = {Spatial working memory (WM) and spatial attention are closely related, but the relationship between non-spatial WM and spatial attention still remains unclear. The present study aimed to investigate the interaction between color WM and smooth pursuit eye movements. A modified delayed-match-to-sample paradigm (DMS) was applied with 2 or 4 items presented in each visual field. Subjects memorized the colors of items in the cued visual field and smoothly moved eyes towards or away from memorized items during retention interval despite that the colored items were no longer visible. The WM performance decreased with higher load in general. More importantly, the WM performance was better when subjects pursued towards rather than away from the cued visual field. Meanwhile, the pursuit gain decreased with higher load and demonstrated a higher result when pursuing away from the cued visual field. These results indicated that spatial attention, guiding attention to the memorized items, benefits color WM. Therefore, we propose that a competition for attention resources exists between color WM and smooth pursuit eye movements.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Spatial working memory (WM) and spatial attention are closely related, but the relationship between non-spatial WM and spatial attention still remains unclear. The present study aimed to investigate the interaction between color WM and smooth pursuit eye movements. A modified delayed-match-to-sample paradigm (DMS) was applied with 2 or 4 items presented in each visual field. Subjects memorized the colors of items in the cued visual field and smoothly moved eyes towards or away from memorized items during retention interval despite that the colored items were no longer visible. The WM performance decreased with higher load in general. More importantly, the WM performance was better when subjects pursued towards rather than away from the cued visual field. Meanwhile, the pursuit gain decreased with higher load and demonstrated a higher result when pursuing away from the cued visual field. These results indicated that spatial attention, guiding attention to the memorized items, benefits color WM. Therefore, we propose that a competition for attention resources exists between color WM and smooth pursuit eye movements. |
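Pursuit gain, the measure reported above, is conventionally the ratio of smooth (de-saccaded) eye velocity to target velocity. The sketch below shows a minimal version of that computation on a synthetic trace; the sampling rate, smoothing window, and saccade threshold are illustrative assumptions.

```python
# Minimal sketch of a pursuit-gain computation: smooth the eye trace, exclude
# saccadic samples with a velocity threshold, and divide mean eye velocity by
# target velocity. Sampling rate and thresholds are illustrative assumptions.
import numpy as np

def pursuit_gain(eye_pos_deg, target_vel_deg_s, fs_hz=1000.0, saccade_thresh_deg_s=30.0):
    """Horizontal pursuit gain from an eye-position trace (degrees) and a constant
    target velocity (deg/s). Samples faster than the threshold are treated as
    catch-up saccades and excluded."""
    eye_vel = np.gradient(eye_pos_deg) * fs_hz                    # deg/s
    smooth = np.convolve(eye_vel, np.ones(21) / 21, mode="same")  # ~20 ms boxcar
    pursuit_samples = np.abs(smooth) < saccade_thresh_deg_s
    if not pursuit_samples.any():
        return np.nan
    return float(np.mean(smooth[pursuit_samples]) / target_vel_deg_s)

# Example: a 10 deg/s pursuit trace with one small catch-up saccade.
t = np.arange(0, 1.0, 0.001)
trace = 9.0 * t                                   # slightly low-gain pursuit
trace[500:520] += np.linspace(0, 1.0, 20)         # 1 deg catch-up saccade
print(f"gain ≈ {pursuit_gain(trace, 10.0):.2f}")  # below 1.0, as in high-load trials
```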
Shlomit Yuval-Greenberg; Elisha P Merriam; David J Heeger Spontaneous microsaccades reflect shifts in covert attention Journal Article Journal of Neuroscience, 34 (41), pp. 13693–13700, 2014. @article{YuvalGreenberg2014, title = {Spontaneous microsaccades reflect shifts in covert attention}, author = {Shlomit Yuval-Greenberg and Elisha P Merriam and David J Heeger}, doi = {10.1523/JNEUROSCI.0582-14.2014}, year = {2014}, date = {2014-01-01}, journal = {Journal of Neuroscience}, volume = {34}, number = {41}, pages = {13693--13700}, abstract = {Microsaccade rate during fixation is modulated by the presentation of a visual stimulus. When the stimulus is an endogenous attention cue, the ensuing microsaccades tend to be directed toward the cue. This finding has been taken as evidence that microsaccades index the locus of spatial attention. But the vast majority of microsaccades that subjects make are not triggered by visual stimuli. Under natural viewing conditions, spontaneous microsaccades occur frequently (2-3 Hz), even in the absence of a stimulus or a task. While spontaneous microsaccades may depend on low-level visual demands, such as retinal fatigue, image fading, or fixation shifts, it is unknown whether their occurrence corresponds to changes in the attentional state. We developed a protocol to measure whether spontaneous microsaccades reflect shifts in spatial attention. Human subjects fixated a cross while microsaccades were detected from streaming eye-position data. Detection of a microsaccade triggered the appearance of a peripheral ring of grating patches, which were followed by an arrow (a postcue) indicating one of them as the target. The target was either congruent or incongruent (opposite) with respect to the direction of the microsaccade (which preceded the stimulus). Subjects reported the tilt of the target (clockwise or counterclockwise relative to vertical). We found that accuracy was higher for congruent than for incongruent trials. We conclude that the direction of spontaneous microsaccades is inherently linked to shifts in spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Microsaccade rate during fixation is modulated by the presentation of a visual stimulus. When the stimulus is an endogenous attention cue, the ensuing microsaccades tend to be directed toward the cue. This finding has been taken as evidence that microsaccades index the locus of spatial attention. But the vast majority of microsaccades that subjects make are not triggered by visual stimuli. Under natural viewing conditions, spontaneous microsaccades occur frequently (2-3 Hz), even in the absence of a stimulus or a task. While spontaneous microsaccades may depend on low-level visual demands, such as retinal fatigue, image fading, or fixation shifts, it is unknown whether their occurrence corresponds to changes in the attentional state. We developed a protocol to measure whether spontaneous microsaccades reflect shifts in spatial attention. Human subjects fixated a cross while microsaccades were detected from streaming eye-position data. Detection of a microsaccade triggered the appearance of a peripheral ring of grating patches, which were followed by an arrow (a postcue) indicating one of them as the target. The target was either congruent or incongruent (opposite) with respect to the direction of the microsaccade (which preceded the stimulus). Subjects reported the tilt of the target (clockwise or counterclockwise relative to vertical). 
We found that accuracy was higher for congruent than for incongruent trials. We conclude that the direction of spontaneous microsaccades is inherently linked to shifts in spatial attention. |
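The gaze-contingent protocol described above requires detecting microsaccades online from streaming eye-position data. The abstract does not state which detector was used; the sketch below assumes a velocity-threshold detector in the spirit of Engbert and Kliegl (2003), with all parameters chosen for illustration.

```python
# Hedged sketch of online microsaccade detection from streaming gaze samples.
# Thresholds, buffer length, and minimum duration are assumptions, not the
# authors' implementation.
import numpy as np
from collections import deque

class MicrosaccadeDetector:
    def __init__(self, fs_hz=500.0, lam=6.0, min_samples=3, window=200):
        self.fs = fs_hz
        self.lam = lam                  # threshold = lam * median-based velocity SD
        self.min_samples = min_samples  # minimum microsaccade duration in samples
        self.buf = deque(maxlen=window) # recent (x, y) samples in degrees
        self.run = 0                    # consecutive supra-threshold samples

    def feed(self, x_deg, y_deg):
        """Feed one gaze sample; return True once at a detected microsaccade onset."""
        self.buf.append((x_deg, y_deg))
        if len(self.buf) < 20:
            return False
        pos = np.asarray(self.buf)
        vel = np.gradient(pos, axis=0) * self.fs                  # deg/s
        sd = np.sqrt(np.maximum(np.median(vel**2, axis=0)
                                - np.median(vel, axis=0)**2, 1e-12))
        radius = self.lam * sd
        over = np.sum((vel[-1] / radius) ** 2) > 1.0              # elliptic threshold
        self.run = self.run + 1 if over else 0
        return self.run == self.min_samples                       # fire once per event

# Example: fixational drift plus a 0.4 deg rightward microsaccade around sample 300.
det = MicrosaccadeDetector()
rng = np.random.default_rng(1)
for i in range(400):
    step = 0.08 * min(max(i - 299, 0), 5)        # 0.4 deg displacement over 5 samples
    x = 0.001 * i + step + rng.normal(0, 0.01)
    if det.feed(x, rng.normal(0, 0.01)):
        print(f"microsaccade onset detected at sample {i} -> trigger stimulus")
```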
Shlomit Yuval-Greenberg; Anat Keren; Rinat Hilo; Adar Paz; Navah Ratzon Gaze control during simulator driving in adolescents with and without attention deficit hyperactivity disorder Journal Article American Journal of Occupational Therapy, 73 (3), pp. 1–8, 2019. @article{YuvalGreenberg2019, title = {Gaze control during simulator driving in adolescents with and without attention deficit hyperactivity disorder}, author = {Shlomit Yuval-Greenberg and Anat Keren and Rinat Hilo and Adar Paz and Navah Ratzon}, doi = {10.5014/ajot.2019.031500}, year = {2019}, date = {2019-04-01}, journal = {American Journal of Occupational Therapy}, volume = {73}, number = {3}, pages = {1--8}, publisher = {American Occupational Therapy Association}, abstract = {Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving. Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD. Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires. Setting: University lab. Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had a history of neurological disorders other than ADHD and normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. Participants with ADHD had been previously diagnosed by a qualified professional. Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration. Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness. Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving. What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Importance: Attention deficit hyperactivity disorder (ADHD) is associated with driving deficits. Visual standards for driving define minimum qualifications for safe driving, including acuity and field of vision, but they do not consider the ability to explore the environment efficiently by shifting the gaze, which is a critical element of safe driving. Objective: To examine visual exploration during simulated driving in adolescents with and without ADHD. Design: Adolescents with and without ADHD drove a driving simulator for approximately 10 min while their gaze was monitored. They then completed a battery of questionnaires. Setting: University lab. Participants: Participants with (n = 16) and without (n = 15) ADHD were included. Participants had a history of neurological disorders other than ADHD and normal or corrected-to-normal vision. Control participants reported not having a diagnosis of ADHD. 
Participants with ADHD had been previously diagnosed by a qualified professional. Outcomes and Measures: We compared the following measures between ADHD and non-ADHD groups: dashboard dwell times, fixation variance, entropy, and fixation duration. Results: Findings showed that participants with ADHD were more restricted in their patterns of exploration than control group participants. They spent considerably more time gazing at the dashboard and had longer periods of fixation with lower variability and randomness. Conclusions and Relevance: The results support the hypothesis that adolescents with ADHD engage in less active exploration during simulated driving. What This Article Adds: This study raises concerns regarding the driving competence of people with ADHD and opens up new directions for potential training programs that focus on exploratory gaze control. |
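The outcome measures listed above (dashboard dwell time, fixation variance, gaze entropy, fixation duration) can be computed from a fixation list. The sketch below is a generic illustration; the dashboard region, screen geometry, and spatial grid are assumptions rather than the study's definitions.

```python
# Hedged sketch of the exploration measures named in the abstract, computed from a
# fixation list. The dashboard region, screen size, and spatial grid are assumptions.
import numpy as np

def gaze_measures(fix_x, fix_y, fix_dur_ms, dashboard_rect=(0, 700, 1280, 1024),
                  screen=(1280, 1024), grid=(8, 6)):
    fix_x, fix_y, fix_dur_ms = map(np.asarray, (fix_x, fix_y, fix_dur_ms))
    x0, y0, x1, y1 = dashboard_rect
    on_dash = (fix_x >= x0) & (fix_x < x1) & (fix_y >= y0) & (fix_y < y1)

    dash_dwell_frac = fix_dur_ms[on_dash].sum() / fix_dur_ms.sum()
    fixation_variance = fix_x.var() + fix_y.var()          # spatial dispersion (px^2)
    mean_fix_dur = fix_dur_ms.mean()

    # Gaze entropy: Shannon entropy of the distribution of fixations over a grid.
    gx = np.clip((fix_x / screen[0] * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((fix_y / screen[1] * grid[1]).astype(int), 0, grid[1] - 1)
    counts = np.bincount(gy * grid[0] + gx, minlength=grid[0] * grid[1])
    p = counts[counts > 0] / counts.sum()
    entropy_bits = float(-(p * np.log2(p)).sum())

    return dict(dashboard_dwell_fraction=float(dash_dwell_frac),
                fixation_variance=float(fixation_variance),
                mean_fixation_duration_ms=float(mean_fix_dur),
                gaze_entropy_bits=entropy_bits)

# Example with a handful of fixations (pixels, milliseconds).
print(gaze_measures([600, 640, 200, 640, 660], [400, 800, 300, 820, 810],
                    [250, 400, 220, 600, 500]))
```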
C Yu-Wai-Man; K Petheram; A W Davidson; T Williams; P G Griffiths A supranuclear disorder of ocular motility as a rare initial presentation of motor neurone disease Journal Article Neuro-Ophthalmology, 35 (1), pp. 38–39, 2011. @article{YuWaiMan2011, title = {A supranuclear disorder of ocular motility as a rare initial presentation of motor neurone disease}, author = {C Yu-Wai-Man and K Petheram and A W Davidson and T Williams and P G Griffiths}, doi = {10.3109/01658107.2010.518333}, year = {2011}, date = {2011-01-01}, journal = {Neuro-Ophthalmology}, volume = {35}, number = {1}, pages = {38--39}, abstract = {A case is described of motor neurone disease presenting with an ocular motor disorder characterised by saccadic intrusions, impaired horizontal and vertical saccades, and apraxia of eyelid opening. The occurrence of eye movement abnormalities in motor neurone disease is discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A case is described of motor neurone disease presenting with an ocular motor disorder characterised by saccadic intrusions, impaired horizontal and vertical saccades, and apraxia of eyelid opening. The occurrence of eye movement abnormalities in motor neurone disease is discussed. |
Theodoros P Zanos; Patrick J Mineault; Daniel Guitton; Christopher C Pack Mechanisms of saccadic suppression in primate cortical area V4 Journal Article Journal of Neuroscience, 36 (35), pp. 9227–9239, 2016. @article{Zanos2016, title = {Mechanisms of saccadic suppression in primate cortical area V4}, author = {Theodoros P Zanos and Patrick J Mineault and Daniel Guitton and Christopher C Pack}, doi = {10.1523/JNEUROSCI.1015-16.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neuroscience}, volume = {36}, number = {35}, pages = {9227--9239}, abstract = {Psychophysical studies have shown that subjects are often unaware of visual stimuli presented around the time of an eye movement. This saccadic suppression is thought to be a mechanism for maintaining perceptual stability. The brain might accomplish saccadic suppression by reducing the gain of visual responses to specific stimuli or by simply suppressing firing uniformly for all stimuli. Moreover, the suppression might be identical across the visual field or concentrated at specific points. To evaluate these possibilities, we recorded from individual neurons in cortical area V4 of nonhuman primates trained to execute saccadic eye movements. We found that both modes of suppression were evident in the visual responses of these neurons and that the two modes showed different spatial and temporal profiles: while gain changes started earlier and were more widely distributed across visual space, nonspecific suppression was found more often in the peripheral visual field, after the completion of the saccade. Peripheral suppression was also associated with increased noise correlations and stronger local field potential oscillations in the $alpha$ frequency band. This pattern of results suggests that saccadic suppression shares some of the circuitry responsible for allocating voluntary attention. SIGNIFICANCE STATEMENT We explore our surroundings by looking at things, but each eye movement that we make causes an abrupt shift of the visual input. Why doesn't the world look like a film recorded on a shaky camera? The answer in part is a brain mechanism called saccadic suppression, which reduces the responses of visual neurons around the time of each eye movement. Here we reveal several new properties of the underlying mechanisms. First, the suppression operates differently in the central and peripheral visual fields. Second, it appears to be controlled by oscillations in the local field potentials at frequencies traditionally associated with attention. These results suggest that saccadic suppression shares the brain circuits responsible for actively ignoring irrelevant stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Psychophysical studies have shown that subjects are often unaware of visual stimuli presented around the time of an eye movement. This saccadic suppression is thought to be a mechanism for maintaining perceptual stability. The brain might accomplish saccadic suppression by reducing the gain of visual responses to specific stimuli or by simply suppressing firing uniformly for all stimuli. Moreover, the suppression might be identical across the visual field or concentrated at specific points. To evaluate these possibilities, we recorded from individual neurons in cortical area V4 of nonhuman primates trained to execute saccadic eye movements. 
We found that both modes of suppression were evident in the visual responses of these neurons and that the two modes showed different spatial and temporal profiles: while gain changes started earlier and were more widely distributed across visual space, nonspecific suppression was found more often in the peripheral visual field, after the completion of the saccade. Peripheral suppression was also associated with increased noise correlations and stronger local field potential oscillations in the α frequency band. This pattern of results suggests that saccadic suppression shares some of the circuitry responsible for allocating voluntary attention. SIGNIFICANCE STATEMENT We explore our surroundings by looking at things, but each eye movement that we make causes an abrupt shift of the visual input. Why doesn't the world look like a film recorded on a shaky camera? The answer in part is a brain mechanism called saccadic suppression, which reduces the responses of visual neurons around the time of each eye movement. Here we reveal several new properties of the underlying mechanisms. First, the suppression operates differently in the central and peripheral visual fields. Second, it appears to be controlled by oscillations in the local field potentials at frequencies traditionally associated with attention. These results suggest that saccadic suppression shares the brain circuits responsible for actively ignoring irrelevant stimuli. |
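The noise-correlation result mentioned above is typically computed as the trial-by-trial correlation of spike counts after removing stimulus-driven variability. The sketch below implements that standard measure on simulated data; the abstract does not specify the exact formula, so treat this as an assumption about common practice.

```python
# Hedged sketch of a standard spike-count noise-correlation measure: z-score each
# neuron's counts within every stimulus condition, then correlate the residuals
# across trials.
import numpy as np

def noise_correlation(counts_a, counts_b, condition):
    """counts_a, counts_b: spike counts per trial for two neurons.
    condition: condition label per trial. Returns Pearson r of within-condition
    z-scored counts (the trial-to-trial 'noise' shared by the pair)."""
    counts_a, counts_b, condition = map(np.asarray, (counts_a, counts_b, condition))
    za = np.empty(counts_a.shape, dtype=float)
    zb = np.empty(counts_b.shape, dtype=float)
    for c in np.unique(condition):
        m = condition == c
        za[m] = (counts_a[m] - counts_a[m].mean()) / (counts_a[m].std() + 1e-12)
        zb[m] = (counts_b[m] - counts_b[m].mean()) / (counts_b[m].std() + 1e-12)
    return float(np.corrcoef(za, zb)[0, 1])

# Example: two neurons with shared trial-to-trial fluctuations on top of tuning.
rng = np.random.default_rng(2)
cond = np.repeat([0, 1, 2], 100)
shared = rng.normal(0, 1, cond.size)
a = rng.poisson(np.maximum(5 + 3 * cond + 2 * shared, 0.1))
b = rng.poisson(np.maximum(4 + 2 * cond + 2 * shared, 0.1))
print(f"noise correlation ≈ {noise_correlation(a, b, cond):.2f}")
```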
Alexandre Zénon; Yann Duclos; Romain Carron; Tatiana Witjas; Christelle Baunez; Jean Régis; Jean Philippe Azulay; Peter Brown; Alexandre Eusebio The human subthalamic nucleus encodes the subjective value of reward and the cost of effort during decision-making Journal Article Brain, 139 (6), pp. 1830–1843, 2016. @article{Zenon2016a, title = {The human subthalamic nucleus encodes the subjective value of reward and the cost of effort during decision-making}, author = {Alexandre Zénon and Yann Duclos and Romain Carron and Tatiana Witjas and Christelle Baunez and Jean Régis and Jean Philippe Azulay and Peter Brown and Alexandre Eusebio}, doi = {10.1093/brain/aww075}, year = {2016}, date = {2016-01-01}, journal = {Brain}, volume = {139}, number = {6}, pages = {1830--1843}, abstract = {Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinsons disease. Indirect evidence points to the involvement of the subthalamic nucleus--the most common target for deep brain stimulation in Parkinsons disease--in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinsons disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinsons disease (mean age 63.8 years +/- 6.8; mean disease duration 9.4 years +/- 2.5) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1-10 Hz range. In addition these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participants decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinsons disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinsons disease. 
Indirect evidence points to the involvement of the subthalamic nucleus--the most common target for deep brain stimulation in Parkinson's disease--in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinson's disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinson's disease (mean age 63.8 years +/- 6.8; mean disease duration 9.4 years +/- 2.5) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1-10 Hz range. In addition, these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participants' decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinson's disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost. |
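The abstract names cluster-based permutation methods as the correction for multiple comparisons across time. The sketch below shows a minimal one-dimensional, sign-flip version of such a test on simulated subject-level effect time courses; the threshold, permutation scheme, and data structure are generic assumptions, not the authors' pipeline.

```python
# Minimal sketch of a one-dimensional cluster-based permutation test (sign-flip
# across subjects), illustrating the multiple-comparison correction named above.
import numpy as np
from scipy import stats

def cluster_perm_test(effects, n_perm=2000, alpha_cluster=0.05, seed=0):
    """effects: (n_subjects, n_timepoints) subject-level effect time courses
    (e.g., regression betas). Returns (bounds, mass, p-value) per observed cluster."""
    rng = np.random.default_rng(seed)
    n_sub, n_time = effects.shape
    t_thresh = stats.t.ppf(1 - alpha_cluster / 2, df=n_sub - 1)

    def cluster_masses(data):
        tvals = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_sub))
        above = np.abs(tvals) > t_thresh
        masses, bounds, start = [], [], None
        for i, flag in enumerate(np.append(above, False)):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                masses.append(np.abs(tvals[start:i]).sum())
                bounds.append((start, i))
                start = None
        return masses, bounds

    obs_masses, obs_bounds = cluster_masses(effects)
    null_max = np.zeros(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        masses, _ = cluster_masses(effects * flips)
        null_max[p] = max(masses, default=0.0)
    pvals = [(null_max >= m).mean() for m in obs_masses]
    return list(zip(obs_bounds, obs_masses, pvals))

# Example: a genuine effect between time points 40 and 60 in 12 subjects.
rng = np.random.default_rng(3)
data = rng.normal(0, 1, (12, 100))
data[:, 40:60] += 1.0
for (lo, hi), mass, p in cluster_perm_test(data):
    print(f"cluster {lo}-{hi}: mass={mass:.1f}, p={p:.3f}")
```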
Alexandre Zénon Time-domain analysis for extracting fast-paced pupil responses Journal Article Scientific Reports, 7 , pp. 41484, 2017. @article{Zenon2017, title = {Time-domain analysis for extracting fast-paced pupil responses}, author = {Alexandre Zénon}, doi = {10.1038/srep41484}, year = {2017}, date = {2017-01-01}, journal = {Scientific Reports}, volume = {7}, pages = {41484}, publisher = {Nature Publishing Group}, abstract = {The eye pupil reacts to cognitive processes, but its analysis is challenging when luminance varies or when stimulation is fast-paced. Current approaches relying on deconvolution techniques do not account for the strong low-frequency spontaneous changes in pupil size or the large interindividual variability in the shape of the responses. Here a system identification framework is proposed in which the pupil responses to different parameters are extracted by means of an autoregressive model with exogenous inputs. In an example application of this technique, pupil size was shown to respond to the luminance and arousal scores of affective pictures presented in rapid succession. This result was significant in each subject (N = 5), but the pupil response varied between individuals both in amplitude and latency, highlighting the need for determining impulse responses subjectwise. The same method was also used to account for pupil size variations caused by respiration, illustrating the possibility to model the relation between pupil size and other continuous signals. In conclusion, this new framework for the analysis of pupil size data allows us to dissociate the response of the eye pupil from intermingled sources of influence and can be used to study the relation between pupil size and other physiological signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The eye pupil reacts to cognitive processes, but its analysis is challenging when luminance varies or when stimulation is fast-paced. Current approaches relying on deconvolution techniques do not account for the strong low-frequency spontaneous changes in pupil size or the large interindividual variability in the shape of the responses. Here a system identification framework is proposed in which the pupil responses to different parameters are extracted by means of an autoregressive model with exogenous inputs. In an example application of this technique, pupil size was shown to respond to the luminance and arousal scores of affective pictures presented in rapid succession. This result was significant in each subject (N = 5), but the pupil response varied between individuals both in amplitude and latency, highlighting the need for determining impulse responses subjectwise. The same method was also used to account for pupil size variations caused by respiration, illustrating the possibility to model the relation between pupil size and other continuous signals. In conclusion, this new framework for the analysis of pupil size data allows us to dissociate the response of the eye pupil from intermingled sources of influence and can be used to study the relation between pupil size and other physiological signals. |
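The core of the framework described above is an autoregressive model with exogenous inputs (ARX) fit to the pupil trace, from which input-specific impulse responses are recovered. The sketch below shows a minimal ordinary-least-squares ARX fit and impulse-response simulation on synthetic data; the model orders, lags, and simulated signals are assumptions, not the paper's settings.

```python
# Hedged sketch of an ARX analysis of pupil size: fit by ordinary least squares,
# then recover each input's impulse response by simulating the fitted model.
import numpy as np

def fit_arx(y, U, na=2, nb=30):
    """y: pupil trace (n_samples,). U: exogenous inputs (n_samples, n_inputs).
    Returns AR coefficients a (na,) and input coefficients B (n_inputs, nb)."""
    n, k = U.shape
    start = max(na, nb)
    cols = []
    for i in range(1, na + 1):                 # past pupil samples
        cols.append(y[start - i:n - i])
    for j in range(k):                         # lagged exogenous inputs (lag 0..nb-1)
        for i in range(nb):
            cols.append(U[start - i:n - i, j])
    X = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(X, y[start:], rcond=None)
    return theta[:na], theta[na:].reshape(k, nb)

def impulse_response(a, b, length=200):
    """Simulate the response to a unit impulse on one input (AR feedback included)."""
    h = np.zeros(length)
    for t in range(length):
        drive = b[t] if t < len(b) else 0.0
        ar = sum(a[i] * h[t - i - 1] for i in range(len(a)) if t - i - 1 >= 0)
        h[t] = drive + ar
    return h

# Synthetic example: pupil = slow AR dynamics + responses to luminance and arousal.
rng = np.random.default_rng(4)
n = 5000
U = rng.random((n, 2))                      # column 0: luminance, column 1: arousal
y = np.zeros(n)
for t in range(8, n):
    y[t] = (1.6 * y[t - 1] - 0.65 * y[t - 2]
            - 0.3 * U[t - 5, 0] + 0.2 * U[t - 8, 1] + rng.normal(0, 0.05))
a, B = fit_arx(y, U)
print("AR coefficients:", np.round(a, 2))
print("luminance impulse-response trough:", round(impulse_response(a, B[0]).min(), 2))
```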
Paul Zerr; Katharine N Thakkar; Siarhei Uzunbajakau; Stefan Van der Stigchel Error compensation in random vector double step saccades with and without global adaptation Journal Article Vision Research, 127 , pp. 141–151, 2016. @article{Zerr2016, title = {Error compensation in random vector double step saccades with and without global adaptation}, author = {Paul Zerr and Katharine N Thakkar and Siarhei Uzunbajakau and Stefan {Van der Stigchel}}, doi = {10.1016/j.visres.2016.06.014}, year = {2016}, date = {2016-01-01}, journal = {Vision Research}, volume = {127}, pages = {141--151}, publisher = {Elsevier Ltd}, abstract = {In saccade sequences without visual feedback endpoint errors pose a problem for subsequent saccades. Accurate error compensation has previously been demonstrated in double step saccades (DSS) and is thought to rely on a copy of the saccade motor vector. However, these studies typically use fixed target vectors on each trial, calling into question the generalizability of the findings due to the high stimulus predictability. We present a random walk DSS paradigm (random target vector amplitudes and directions) to provide a more complete, realistic and generalizable description of error compensation in saccade sequences. We regressed the vector between the endpoint of the second saccade and the endpoint of a hypothetical second saccade that does not take first saccade error into account on the ideal compensation vector. This provides a direct and complete estimation of error compensation in DSS. We observed error compensation with varying stimulus displays that was comparable to previous findings. We also employed this paradigm to extend experiments that showed accurate compensation for systematic undershoots after specific-vector saccade adaptation. Utilizing the random walk paradigm for saccade adaptation by Rolfs et al. (2010) together with our random walk DSS paradigm we now also demonstrate transfer of adaptation from reactive to memory guided saccades for global saccade adaptation. We developed a new, generalizable DSS paradigm with unpredictable stimuli and successfully employed it to verify, replicate and extend previous findings, demonstrating that endpoint errors are compensated for saccades in all directions and variable amplitudes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In saccade sequences without visual feedback endpoint errors pose a problem for subsequent saccades. Accurate error compensation has previously been demonstrated in double step saccades (DSS) and is thought to rely on a copy of the saccade motor vector. However, these studies typically use fixed target vectors on each trial, calling into question the generalizability of the findings due to the high stimulus predictability. We present a random walk DSS paradigm (random target vector amplitudes and directions) to provide a more complete, realistic and generalizable description of error compensation in saccade sequences. We regressed the vector between the endpoint of the second saccade and the endpoint of a hypothetical second saccade that does not take first saccade error into account on the ideal compensation vector. This provides a direct and complete estimation of error compensation in DSS. We observed error compensation with varying stimulus displays that was comparable to previous findings. We also employed this paradigm to extend experiments that showed accurate compensation for systematic undershoots after specific-vector saccade adaptation. 
Utilizing the random walk paradigm for saccade adaptation by Rolfs et al. (2010) together with our random walk DSS paradigm, we now also demonstrate transfer of adaptation from reactive to memory-guided saccades for global saccade adaptation. We developed a new, generalizable DSS paradigm with unpredictable stimuli and successfully employed it to verify, replicate and extend previous findings, demonstrating that endpoint errors are compensated for saccades in all directions and variable amplitudes. |
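The central regression in this study can be written out explicitly: the observed compensation (the actual second-saccade endpoint minus the endpoint of a hypothetical second saccade that ignores first-saccade error) is regressed on the ideal compensation (the negative of the first-saccade landing error), with a slope near 1 indicating full compensation. The sketch below reproduces that logic on simulated data; all numbers and variable names are illustrative.

```python
# Hedged sketch of the double-step compensation regression described above.
# Data are simulated; a slope near 1 would indicate full error compensation.
import numpy as np

rng = np.random.default_rng(5)
n = 400
T1 = rng.uniform(-8, 8, (n, 2))                   # first target positions (deg)
T2 = T1 + rng.uniform(-8, 8, (n, 2))              # second target positions (deg)

E1 = T1 + rng.normal(0, 1.0, (n, 2))              # first-saccade endpoints (with error)
compensation_rate = 0.8                           # simulated partial compensation
E2 = E1 + (T2 - T1) + compensation_rate * (T1 - E1) + rng.normal(0, 0.5, (n, 2))

hypothetical_E2 = E1 + (T2 - T1)                  # second saccade ignoring E1 error
observed_comp = E2 - hypothetical_E2              # what the system actually corrected
ideal_comp = T1 - E1                              # full correction of first-saccade error

# Component-wise regression through the origin; slope = fraction of error compensated.
x = ideal_comp.ravel()
y = observed_comp.ravel()
slope = float(x @ y / (x @ x))
print(f"estimated compensation slope ≈ {slope:.2f} (1.0 = full compensation)")
```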
Paul Zerr; José Pablo Ossandón; Idris Shareef; Stefan Van der Stigchel; Ramesh Kekunnaya; Brigitte Röder Successful visually guided eye movements following sight restoration after congenital cataracts Journal Article Journal of Vision, 20 (7), pp. 1–24, 2020. @article{Zerr2020, title = {Successful visually guided eye movements following sight restoration after congenital cataracts}, author = {Paul Zerr and José Pablo Ossandón and Idris Shareef and Stefan {Van der Stigchel} and Ramesh Kekunnaya and Brigitte Röder}, doi = {10.1167/JOV.20.7.3}, year = {2020}, date = {2020-01-01}, journal = {Journal of Vision}, volume = {20}, number = {7}, pages = {1--24}, abstract = {Sensitive periods have previously been identified for several human visual system functions. Yet, it is unknown to what degree the development of visually guided oculomotor control depends on early visual experience-for example, whether and to what degree humans whose sight was restored after a transient period of congenital visual deprivation are able to conduct visually guided eye movements. In the present study, we developed new calibration and analysis techniques for eye tracking data contaminated with pervasive nystagmus, which is typical for this population. We investigated visually guided eye movements in sight recovery individuals with long periods of visual pattern deprivation (3-36 years) following birth due to congenital, dense, total, bilateral cataracts. As controls we assessed (1) individuals with nystagmus due to causes other than cataracts, (2) individuals with developmental cataracts after cataract removal, and (3) individuals with normal vision. Congenital cataract reversal individuals were able to perform visually guided gaze shifts, even when their blindness had lasted for decades. The typical extensive nystagmus of this group distorted eye movement trajectories, but measures of latency and accuracy were as expected from their prevailing nystagmus-that is, not worse than in the nystagmus control group. To the best of our knowledge, the present quantitative study is the first to investigate the characteristics of oculomotor control in congenital cataract reversal individuals, and it indicates a remarkable effectiveness of visually guided eye movements despite long-lasting periods of visual deprivation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Sensitive periods have previously been identified for several human visual system functions. Yet, it is unknown to what degree the development of visually guided oculomotor control depends on early visual experience-for example, whether and to what degree humans whose sight was restored after a transient period of congenital visual deprivation are able to conduct visually guided eye movements. In the present study, we developed new calibration and analysis techniques for eye tracking data contaminated with pervasive nystagmus, which is typical for this population. We investigated visually guided eye movements in sight recovery individuals with long periods of visual pattern deprivation (3-36 years) following birth due to congenital, dense, total, bilateral cataracts. As controls we assessed (1) individuals with nystagmus due to causes other than cataracts, (2) individuals with developmental cataracts after cataract removal, and (3) individuals with normal vision. Congenital cataract reversal individuals were able to perform visually guided gaze shifts, even when their blindness had lasted for decades. 
The typical extensive nystagmus of this group distorted eye movement trajectories, but measures of latency and accuracy were as expected from their prevailing nystagmus-that is, not worse than in the nystagmus control group. To the best of our knowledge, the present quantitative study is the first to investigate the characteristics of oculomotor control in congenital cataract reversal individuals, and it indicates a remarkable effectiveness of visually guided eye movements despite long-lasting periods of visual deprivation. |
Hao Zhang; Hong Mei Yan; Keith M Kendrick; Chao Yi Li Both lexical and non-lexical characters are processed during saccadic eye movements Journal Article PLoS ONE, 7 (9), pp. e46383, 2012. @article{Zhang2012b, title = {Both lexical and non-lexical characters are processed during saccadic eye movements}, author = {Hao Zhang and Hong Mei Yan and Keith M Kendrick and Chao Yi Li}, doi = {10.1371/journal.pone.0046383}, year = {2012}, date = {2012-01-01}, journal = {PLoS ONE}, volume = {7}, number = {9}, pages = {e46383}, abstract = {On average our eyes make 3-5 saccadic movements per second when we read, although their neural mechanism is still unclear. It is generally thought that saccades help redirect the retinal fovea to specific characters and words but that actual discrimination of information only occurs during periods of fixation. Indeed, it has been proposed that there is active and selective suppression of information processing during saccades to avoid experience of blurring due to the high-speed movement. Here, using a paradigm where a string of either lexical (Chinese) or non-lexical (alphabetic) characters are triggered by saccadic eye movements, we show that subjects can discriminate both while making saccadic eye movement. Moreover, discrimination accuracy is significantly better for characters scanned during the saccadic movement to a fixation point than those not scanned beyond it. Our results showed that character information can be processed during the saccade, therefore saccades during reading not only function to redirect the fovea to fixate the next character or word but allow pre-processing of information from the ones adjacent to the fixation locations to help target the next most salient one. In this way saccades can not only promote continuity in reading words but also actively facilitate reading comprehension.}, keywords = {}, pubstate = {published}, tppubtype = {article} } On average our eyes make 3-5 saccadic movements per second when we read, although their neural mechanism is still unclear. It is generally thought that saccades help redirect the retinal fovea to specific characters and words but that actual discrimination of information only occurs during periods of fixation. Indeed, it has been proposed that there is active and selective suppression of information processing during saccades to avoid experience of blurring due to the high-speed movement. Here, using a paradigm where a string of either lexical (Chinese) or non-lexical (alphabetic) characters are triggered by saccadic eye movements, we show that subjects can discriminate both while making saccadic eye movement. Moreover, discrimination accuracy is significantly better for characters scanned during the saccadic movement to a fixation point than those not scanned beyond it. Our results showed that character information can be processed during the saccade, therefore saccades during reading not only function to redirect the fovea to fixate the next character or word but allow pre-processing of information from the ones adjacent to the fixation locations to help target the next most salient one. In this way saccades can not only promote continuity in reading words but also actively facilitate reading comprehension. |
Jiedong Zhang; Jia Liu; Yaoda Xu Neural decoding reveals impaired face configural processing in the right fusiform face area of individuals with developmental prosopagnosia Journal Article Journal of Neuroscience, 35 (4), pp. 1539–1548, 2015. @article{Zhang2015, title = {Neural decoding reveals impaired face configural processing in the right fusiform face area of individuals with developmental prosopagnosia}, author = {Jiedong Zhang and Jia Liu and Yaoda Xu}, doi = {10.1523/JNEUROSCI.2646-14.2015}, year = {2015}, date = {2015-01-01}, journal = {Journal of Neuroscience}, volume = {35}, number = {4}, pages = {1539--1548}, abstract = {Most of human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [this is also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its decoding of the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Most of human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [this is also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its decoding of the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. 
This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP. |
Xiaoli Zhang; Julie D Golomb Target localization after saccades and at fixation: Nontargets both facilitate and bias responses Journal Article Visual Cognition, 26 (9), pp. 734–752, 2018. @article{Zhang2018cb, title = {Target localization after saccades and at fixation: Nontargets both facilitate and bias responses}, author = {Xiaoli Zhang and Julie D Golomb}, doi = {10.1080/13506285.2018.1553810}, year = {2018}, date = {2018-01-01}, journal = {Visual Cognition}, volume = {26}, number = {9}, pages = {734--752}, publisher = {Taylor & Francis}, abstract = {The image on our retina changes every time we make an eye movement. To maintain visual stability after saccades, specifically to locate visual targets, we may use nontarget objects as “landmarks”. In the current study, we compared how the presence of nontargets affects target localization after saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials), or displaced to trigger a saccade (saccade trials). After the target disappeared, participants reported the most recent target location with a mouse click. We found that the presence of nontargets decreased response error magnitude and variability. However, this nontarget facilitation effect was not larger for saccade trials than sustained-fixation trials, indicating that nontarget facilitation might be a general effect for target localization, rather than of particular importance to post-saccadic stability. Additionally, participants' responses were biased towards the nontarget locations, particularly when the nontarget-target relationships were preserved in relative coordinates across the saccade. This nontarget bias interacted with biases from other spatial references, e.g., eye movement paths, possibly in a way that emphasized non-redundant information. In summary, the presence of nontargets is one of several sources of reference that combine to influence (both facilitate and bias) target localization.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The image on our retina changes every time we make an eye movement. To maintain visual stability after saccades, specifically to locate visual targets, we may use nontarget objects as “landmarks”. In the current study, we compared how the presence of nontargets affects target localization after saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials), or displaced to trigger a saccade (saccade trials). After the target disappeared, participants reported the most recent target location with a mouse click. We found that the presence of nontargets decreased response error magnitude and variability. However, this nontarget facilitation effect was not larger for saccade trials than sustained-fixation trials, indicating that nontarget facilitation might be a general effect for target localization, rather than of particular importance to post-saccadic stability. Additionally, participants' responses were biased towards the nontarget locations, particularly when the nontarget-target relationships were preserved in relative coordinates across the saccade. This nontarget bias interacted with biases from other spatial references, e.g., eye movement paths, possibly in a way that emphasized non-redundant information. 
In summary, the presence of nontargets is one of several sources of reference that combine to influence (both facilitate and bias) target localization. |
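Note: the bias measure described in this abstract (a shift of localization responses toward the nontarget location) can be made concrete with a short, hypothetical computation. The sketch below is not the authors' analysis code; the coordinate layout and the projection-based bias definition are assumptions made purely for illustration.

```python
import numpy as np

def localization_metrics(target, response, nontarget):
    """Illustrative error measures for a single localization trial.

    target, response, nontarget: (x, y) screen coordinates in degrees.
    Returns the absolute error magnitude and a signed bias component,
    where positive values indicate a shift of the response toward the
    nontarget location (a hypothetical operationalization).
    """
    target, response, nontarget = map(np.asarray, (target, response, nontarget))
    error_vec = response - target
    error_magnitude = float(np.linalg.norm(error_vec))

    # Unit vector pointing from the target toward the nontarget.
    toward_nontarget = nontarget - target
    toward_nontarget = toward_nontarget / np.linalg.norm(toward_nontarget)

    # Signed projection of the response error onto that direction.
    bias_toward_nontarget = float(error_vec @ toward_nontarget)
    return error_magnitude, bias_toward_nontarget

# Example trial: the response lands slightly off the target, displaced toward the nontarget.
err, bias = localization_metrics(target=(0.0, 0.0), response=(0.4, 0.1), nontarget=(5.0, 0.0))
print(f"error = {err:.2f} deg, bias toward nontarget = {bias:.2f} deg")
```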
Yu Zhang; Aijuan Yan; Bingyu Liu; Ying Wan; Yuchen Zhao; Ying Liu; Jiangxiu Tan; Lu Song; Yong Gu; Zhenguo Liu Oculomotor performances are associated with motor and non-motor symptoms in Parkinson's disease Journal Article Frontiers in Neurology, 9 , pp. 1–8, 2018. @article{Zhang2018e, title = {Oculomotor performances are associated with motor and non-motor symptoms in Parkinson's disease}, author = {Yu Zhang and Aijuan Yan and Bingyu Liu and Ying Wan and Yuchen Zhao and Ying Liu and Jiangxiu Tan and Lu Song and Yong Gu and Zhenguo Liu}, doi = {10.3389/fneur.2018.00960}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Neurology}, volume = {9}, pages = {1--8}, abstract = {Background: Parkinson's disease (PD) patients exhibit deficits in oculomotor behavior, yet the results are inconsistent across studies. In addition, how these results are associated with clinical symptoms is unclear, especially in China. Methods: We designed a case-control study in China including 37 PD patients and 39 controls. Clinical manifestations in PD patients were recorded. Oculomotor performance was measured by a video-based eye tracker system. Results: We found that six oculomotor parameters, including fixation stability, saccadic latency, smooth pursuit gain, saccade frequency, viewing range, and saccade frequency during free-viewing context, were significantly different in PD patients and control group. Combining application of these six parameters could improve diagnostic accuracy to over 90%. Moreover, pursuit gain was significantly associated with PD duration, UPDRS III, in PD patients. Saccade latency was significantly associated with PD duration, Berg balance score, RBD score, and Total LEDD in PD patients. Conclusions: PD patients commonly exhibit oculomotor deficits in multiple behavioral contexts, which are associated with both motor and non-motor symptoms. Oculomotor test may provide a valuable tool for the clinical assessment of PD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Parkinson's disease (PD) patients exhibit deficits in oculomotor behavior, yet the results are inconsistent across studies. In addition, how these results are associated with clinical symptoms is unclear, especially in China. Methods: We designed a case-control study in China including 37 PD patients and 39 controls. Clinical manifestations in PD patients were recorded. Oculomotor performance was measured by a video-based eye tracker system. Results: We found that six oculomotor parameters, including fixation stability, saccadic latency, smooth pursuit gain, saccade frequency, viewing range, and saccade frequency during free-viewing context, were significantly different in PD patients and control group. Combining application of these six parameters could improve diagnostic accuracy to over 90%. Moreover, pursuit gain was significantly associated with PD duration, UPDRS III, in PD patients. Saccade latency was significantly associated with PD duration, Berg balance score, RBD score, and Total LEDD in PD patients. Conclusions: PD patients commonly exhibit oculomotor deficits in multiple behavioral contexts, which are associated with both motor and non-motor symptoms. Oculomotor test may provide a valuable tool for the clinical assessment of PD. |
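Note: the abstract reports that combining the six oculomotor parameters raises diagnostic accuracy above 90%. A minimal sketch of one way such a combination could be implemented (cross-validated logistic regression) is given below; the synthetic data, feature layout, and classifier choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: rows are participants, columns are the six oculomotor
# parameters named in the abstract (fixation stability, saccadic latency,
# pursuit gain, saccade frequency, viewing range, free-viewing saccade frequency).
X = rng.normal(size=(76, 6))          # 37 patients + 39 controls, as in the study
y = np.array([1] * 37 + [0] * 39)     # 1 = PD, 0 = control (labels are illustrative)

# With random features the accuracy will hover around chance; in the study the
# real measurements are what separate the two groups.
model = make_pipeline(StandardScaler(), LogisticRegression())
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {accuracy.mean():.2f}")
```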
Chen Zhang; Angelina Paolozza; Po He Tseng; James N Reynolds; Douglas P Munoz; Laurent Itti Detection of children/youth with fetal alcohol spectrum disorder through eye movement, psychometric, and neuroimaging data Journal Article Frontiers in Neurology, 10 (FEB), pp. 1–15, 2019. @article{Zhang2019a, title = {Detection of children/youth with fetal alcohol spectrum disorder through eye movement, psychometric, and neuroimaging data}, author = {Chen Zhang and Angelina Paolozza and Po He Tseng and James N Reynolds and Douglas P Munoz and Laurent Itti}, doi = {10.3389/fneur.2019.00080}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Neurology}, volume = {10}, number = {FEB}, pages = {1--15}, abstract = {Background: Fetal alcohol spectrum disorders (FASD) is one of the most common causes of developmental disabilities and neurobehavioral deficits. Despite the high-prevalence of FASD, the current diagnostic process is challenging and time- and money-consuming, with underreported profiles of the neurocognitive and neurobehavioral impairments because of limited clinical capacity. We assessed children/youth with FASD from a multimodal perspective and developed a high-performing, low-cost screening protocol using a machine learning framework. Methods and Findings: Participants with FASD and age-matched typically developing controls completed up to six assessments, including saccadic eye movement tasks (prosaccade, antisaccade, and memory-guided saccade), free viewing of videos, psychometric tests, and neuroimaging of the corpus callosum. We comparatively investigated new machine learning methods applied to these data, toward the acquisition of a quantitative signature of the neurodevelopmental deficits, and the development of an objective, high-throughput screening tool to identify children/youth with FASD. Our method provides a comprehensive profile of distinct measures in domains including sensorimotor and visuospatial control, visual perception, attention, inhibition, working memory, academic functions, and brain structure. We also showed that a combination of four to six assessments yields the best FASD vs. control classification accuracy; however, this protocol is expensive and time consuming. We conducted a cost/benefit analysis of the six assessments and developed a high-performing, low-cost screening protocol based on a subset of eye movement and psychometric tests that approached the best result under a range of constraints (time, cost, participant age, required administration, and access to neuroimaging facility). Using insights from the theory of value of information, we proposed an optimal annual screening procedure for children at risk of FASD. Conclusions: We developed a high-capacity, low-cost screening procedure under constrains, with high expected monetary benefit, substantial impact of the referral and diagnostic process, and expected maximized long-term benefits to the tested individuals and to society. This annual screening procedure for children/youth at risk of FASD can be easily and widely deployed for early identification, potentially leading to earlier intervention and treatment. This is crucial for neurodevelopmental disorders, to mitigate the severity of the disorder and/or frequency of secondary comorbidities.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Fetal alcohol spectrum disorders (FASD) is one of the most common causes of developmental disabilities and neurobehavioral deficits. 
Despite the high prevalence of FASD, the current diagnostic process is challenging and time- and money-consuming, with underreported profiles of the neurocognitive and neurobehavioral impairments because of limited clinical capacity. We assessed children/youth with FASD from a multimodal perspective and developed a high-performing, low-cost screening protocol using a machine learning framework. Methods and Findings: Participants with FASD and age-matched typically developing controls completed up to six assessments, including saccadic eye movement tasks (prosaccade, antisaccade, and memory-guided saccade), free viewing of videos, psychometric tests, and neuroimaging of the corpus callosum. We comparatively investigated new machine learning methods applied to these data, toward the acquisition of a quantitative signature of the neurodevelopmental deficits, and the development of an objective, high-throughput screening tool to identify children/youth with FASD. Our method provides a comprehensive profile of distinct measures in domains including sensorimotor and visuospatial control, visual perception, attention, inhibition, working memory, academic functions, and brain structure. We also showed that a combination of four to six assessments yields the best FASD vs. control classification accuracy; however, this protocol is expensive and time-consuming. We conducted a cost/benefit analysis of the six assessments and developed a high-performing, low-cost screening protocol based on a subset of eye movement and psychometric tests that approached the best result under a range of constraints (time, cost, participant age, required administration, and access to neuroimaging facility). Using insights from the theory of value of information, we proposed an optimal annual screening procedure for children at risk of FASD. Conclusions: We developed a high-capacity, low-cost screening procedure under constraints, with high expected monetary benefit, substantial impact on the referral and diagnostic process, and expected maximized long-term benefits to the tested individuals and to society. This annual screening procedure for children/youth at risk of FASD can be easily and widely deployed for early identification, potentially leading to earlier intervention and treatment. This is crucial for neurodevelopmental disorders, to mitigate the severity of the disorder and/or frequency of secondary comorbidities. |
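Note: the cost/benefit selection of an assessment subset described in this abstract can be illustrated with a small, hypothetical search under a cost budget. The accuracy and cost figures below are invented, and the accuracy-combination rule is a crude stand-in for retraining a classifier on each subset; none of this comes from the study itself.

```python
from itertools import combinations

# Hypothetical (single-assessment accuracy, relative cost) figures; in the
# study these values come from classifiers trained on each modality.
assessments = {
    "prosaccade":     (0.70, 1.0),
    "antisaccade":    (0.74, 1.0),
    "memory_saccade": (0.72, 1.0),
    "free_viewing":   (0.78, 2.0),
    "psychometric":   (0.80, 3.0),
    "neuroimaging":   (0.82, 10.0),
}

def combined_accuracy(subset):
    # Stand-in rule: best single-assessment accuracy plus a small bonus per
    # extra assessment, capped at 1.0. Purely illustrative.
    best = max(assessments[name][0] for name in subset)
    return min(1.0, best + 0.02 * (len(subset) - 1))

def best_protocol(budget):
    """Return the assessment subset with the highest illustrative accuracy
    whose summed cost stays within the budget."""
    candidates = []
    for k in range(1, len(assessments) + 1):
        for subset in combinations(assessments, k):
            cost = sum(assessments[name][1] for name in subset)
            if cost <= budget:
                candidates.append((combined_accuracy(subset), -cost, subset))
    return max(candidates) if candidates else None

acc, neg_cost, subset = best_protocol(budget=7.0)
print(f"best protocol under budget: {subset}, accuracy ~ {acc:.2f}, cost {-neg_cost}")
```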
Manman Zhang; Simon P Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading Journal Article Journal of Experimental Psychology: Human Perception and Performance, 45 (6), pp. 812–825, 2019. @article{Zhang2019e, title = {The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading}, author = {Manman Zhang and Simon P Liversedge and Xuejun Bai and Guoli Yan and Chuanli Zang}, doi = {10.1037/xhp0000644}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Human Perception and Performance}, volume = {45}, number = {6}, pages = {812--825}, abstract = {Whether increased foveal load causes a reduction of parafoveal processing remains equivocal. The present study examined foveal load effects on parafoveal processing in natural Chinese reading. Parafoveal preview of a single-character parafoveal target word was manipulated by using the boundary paradigm (Rayner, 1975; pseudocharacter or identity previews) under high foveal load (low-frequency pretarget word) compared with low foveal load (high-frequency pretarget word) conditions. Despite an effective manipulation of foveal processing load, we obtained no evidence of any modulatory influence on parafoveal processing in first-pass reading times. However, our results clearly showed that saccadic targeting, in relation to forward saccade length from the pretarget word and in relation to target word skipping, was influenced by foveal load and this influence occurred independent of parafoveal preview. Given the optimal experimental conditions, these results provide very strong evidence that preview benefit is not modulated by foveal lexical load during Chinese reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Whether increased foveal load causes a reduction of parafoveal processing remains equivocal. The present study examined foveal load effects on parafoveal processing in natural Chinese reading. Parafoveal preview of a single-character parafoveal target word was manipulated by using the boundary paradigm (Rayner, 1975; pseudocharacter or identity previews) under high foveal load (low-frequency pretarget word) compared with low foveal load (high-frequency pretarget word) conditions. Despite an effective manipulation of foveal processing load, we obtained no evidence of any modulatory influence on parafoveal processing in first-pass reading times. However, our results clearly showed that saccadic targeting, in relation to forward saccade length from the pretarget word and in relation to target word skipping, was influenced by foveal load and this influence occurred independent of parafoveal preview. Given the optimal experimental conditions, these results provide very strong evidence that preview benefit is not modulated by foveal lexical load during Chinese reading. |
Xiaoxian Zhang; Wanlu Fu; Licheng Xue; Jing Zhao; Zhiguo Wang Children with mathematical learning difficulties are sluggish in disengaging attention Journal Article Frontiers in Psychology, 10 , pp. 1–9, 2019. @article{Zhang2019f, title = {Children with mathematical learning difficulties are sluggish in disengaging attention}, author = {Xiaoxian Zhang and Wanlu Fu and Licheng Xue and Jing Zhao and Zhiguo Wang}, doi = {10.3389/fpsyg.2019.00932}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--9}, abstract = {Mathematical learning difficulties (MLD) refer to a variety of deficits in math skills, typically pertaining to the domains of arithmetic and problem solving. The present study examined the time course of attentional orienting in MLD children with a spatial cueing task, by parametrically manipulating the cue-target onset asynchrony (CTOA). The results of Experiment 1 revealed that, in contrast to typical developing children, the inhibitory aftereffect of attentional orienting-frequently referred to as inhibition of return (IOR)-was not observed in the MLD children, even at the longest CTOA tested (800 ms). However, robust early facilitation effects were observed in the MLD children, suggesting that they have difficulties in attentional disengagement rather than attentional engagement. In a second experiment, a secondary cue was introduced to the cueing task to encourage attentional disengagement and IOR effects were observed in the MLD children. Taken together, the present experiments indicate that MLD children are sluggish in disengaging spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Mathematical learning difficulties (MLD) refer to a variety of deficits in math skills, typically pertaining to the domains of arithmetic and problem solving. The present study examined the time course of attentional orienting in MLD children with a spatial cueing task, by parametrically manipulating the cue-target onset asynchrony (CTOA). The results of Experiment 1 revealed that, in contrast to typical developing children, the inhibitory aftereffect of attentional orienting-frequently referred to as inhibition of return (IOR)-was not observed in the MLD children, even at the longest CTOA tested (800 ms). However, robust early facilitation effects were observed in the MLD children, suggesting that they have difficulties in attentional disengagement rather than attentional engagement. In a second experiment, a secondary cue was introduced to the cueing task to encourage attentional disengagement and IOR effects were observed in the MLD children. Taken together, the present experiments indicate that MLD children are sluggish in disengaging spatial attention. |
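Note: the facilitation and IOR effects discussed in this abstract are conventionally computed as the invalid-minus-valid reaction time difference at each cue-target onset asynchrony. The sketch below uses made-up trial data purely to show that computation; it is not the authors' analysis code.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (ctoa_ms, cue_validity, reaction_time_ms).
trials = [
    (100, "valid", 310), (100, "invalid", 345),
    (400, "valid", 330), (400, "invalid", 332),
    (800, "valid", 355), (800, "invalid", 340),
]

rts = defaultdict(list)
for ctoa, validity, rt in trials:
    rts[(ctoa, validity)].append(rt)

for ctoa in sorted({c for c, _ in rts}):
    # Cueing effect = mean invalid RT minus mean valid RT at this CTOA.
    effect = mean(rts[(ctoa, "invalid")]) - mean(rts[(ctoa, "valid")])
    label = "facilitation" if effect > 0 else "inhibition of return (IOR)"
    print(f"CTOA {ctoa} ms: cueing effect {effect:+.0f} ms ({label})")
```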
Li Zhang; Guoli Yan; Li Zhou; Zebo Lan; Valerie Benson The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm Journal Article Journal of Autism and Developmental Disorders, 50 , pp. 500–512, 2020. @article{Zhang2020e, title = {The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm}, author = {Li Zhang and Guoli Yan and Li Zhou and Zebo Lan and Valerie Benson}, doi = {10.1007/s10803-019-04271-y}, year = {2020}, date = {2020-01-01}, journal = {Journal of Autism and Developmental Disorders}, volume = {50}, pages = {500--512}, publisher = {Springer US}, abstract = {The current study examined eye movement control in autistic (ASD) children. Simple targets were presented in isolation, or with central, parafoveal, or peripheral distractors synchronously. Sixteen children with ASD (47–81 months) and nineteen age and IQ matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The current study examined eye movement control in autistic (ASD) children. Simple targets were presented in isolation, or with central, parafoveal, or peripheral distractors synchronously. Sixteen children with ASD (47–81 months) and nineteen age and IQ matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children. |
Y Zhang; Q Yuan Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model Journal Article Indian Journal of Pharmaceutical Sciences, 82 , pp. 32–40, 2020. @article{Zhang2020f, title = {Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model}, author = {Y Zhang and Q Yuan}, doi = {10.36468/pharmaceutical-sciences.spl.78}, year = {2020}, date = {2020-01-01}, journal = {Indian Journal of Pharmaceutical Sciences}, volume = {82}, pages = {32--40}, abstract = {This study intended to take a special group of trauma patients as research subjects to propose a method analysing the effect of the combination of biofeedback and sequential psychotherapy based on the fusion of the set theory model on the cognitive function of these patients with trauma. The occurrence and development of post-traumatic stress disorder and the cognitive function are investigated. The set theory model is used in this study to carry out a survey on the effect of the combination of biofeedback and sequential psychotherapy on patients with post-traumatic stress disorder to describe the occurrence, development, change trajectory and time course characteristics of post-traumatic stress disorder. The set theory model was employed to investigate the cognitive development characteristics of these trauma patients. In addition, through the set theory model, the psychological behavior mechanism for the occurrence and development of post-traumatic stress disorder is revealed. The study of the combination of biofeedback and sequential psychotherapy is adopted to investigate the effect of the post-traumatic stress disorder on the cognitive function of the trauma patients. The results of this study could be used to provide scientific advice for the placement and psychological assistance of trauma patients in future, to provide a scientific basis for a targeted psychological intervention and overall planning of the intervention, and to provide scientific and objective indicators and methods for the diagnosis and assessment of intervention of traumatic psychology in patients with trauma in the future.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study intended to take a special group of trauma patients as research subjects to propose a method analysing the effect of the combination of biofeedback and sequential psychotherapy based on the fusion of the set theory model on the cognitive function of these patients with trauma. The occurrence and development of post-traumatic stress disorder and the cognitive function are investigated. The set theory model is used in this study to carry out a survey on the effect of the combination of biofeedback and sequential psychotherapy on patients with post-traumatic stress disorder to describe the occurrence, development, change trajectory and time course characteristics of post-traumatic stress disorder. The set theory model was employed to investigate the cognitive development characteristics of these trauma patients. In addition, through the set theory model, the psychological behavior mechanism for the occurrence and development of post-traumatic stress disorder is revealed. The study of the combination of biofeedback and sequential psychotherapy is adopted to investigate the effect of the post-traumatic stress disorder on the cognitive function of the trauma patients. 
The results of this study could be used to provide scientific advice for the placement and psychological assistance of trauma patients in future, to provide a scientific basis for a targeted psychological intervention and overall planning of the intervention, and to provide scientific and objective indicators and methods for the diagnosis and assessment of intervention of traumatic psychology in patients with trauma in the future. |
Sijia Zhao; Nga Wai Yum; Lucas Benjamin; Elia Benhamou; Makoto Yoneya; Shigeto Furukawa; Frederic Dick; Malcolm Slaney; Maria Chait Rapid ocular responses are modulated by bottom-up-driven auditory salience Journal Article Journal of Neuroscience, 39 (39), pp. 7703–7714, 2019. @article{Zhao2019c, title = {Rapid ocular responses are modulated by bottom-up-driven auditory salience}, author = {Sijia Zhao and Nga Wai Yum and Lucas Benjamin and Elia Benhamou and Makoto Yoneya and Shigeto Furukawa and Frederic Dick and Malcolm Slaney and Maria Chait}, doi = {10.1523/JNEUROSCI.0776-19.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {39}, pages = {7703--7714}, abstract = {Despite the prevalent use of alerting sounds in alarms and human–machine interface systems and the long-hypothesized role of the auditory system as the brain's “early warning system,” we have only a rudimentary understanding of what determines auditory salience — the automatic attraction of attention by sound — and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the prevalent use of alerting sounds in alarms and human–machine interface systems and the long-hypothesized role of the auditory system as the brain's “early warning system,” we have only a rudimentary understanding of what determines auditory salience — the automatic attraction of attention by sound — and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. 
These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. |
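Note: microsaccadic inhibition is typically quantified from the microsaccade rate time series around stimulus onset. The following sketch shows one hypothetical way to compute MSI magnitude and latency; the bin settings, analysis window, and sample data are assumptions, not the procedure used in the paper.

```python
import numpy as np

def microsaccade_rate(onsets_by_trial, window=(-0.5, 1.0), bin_width=0.05):
    """Illustrative microsaccade rate (per second) around sound onset.

    onsets_by_trial: list of arrays of microsaccade onset times (s),
    each aligned so that 0 = sound onset.
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for onsets in onsets_by_trial:
        counts += np.histogram(onsets, bins=edges)[0]
    rate = counts / (len(onsets_by_trial) * bin_width)
    centers = edges[:-1] + bin_width / 2
    return centers, rate

# Two made-up trials, just to exercise the function.
trials = [np.array([-0.30, 0.35, 0.80]), np.array([-0.10, 0.40])]
t, rate = microsaccade_rate(trials)

post = (t > 0) & (t < 0.5)                    # window where MSI is expected
baseline = rate[t < 0].mean()
msi_magnitude = baseline - rate[post].min()   # drop in rate below baseline
msi_latency = t[post][np.argmin(rate[post])]  # time of the deepest inhibition
print(f"MSI magnitude ~ {msi_magnitude:.2f} sacc/s at ~{msi_latency * 1000:.0f} ms")
```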
Jing Zhou; Adam Reeves; Scott N J Watamaniuk; Stephen J Heinen Shared attention for smooth pursuit and saccades Journal Article Journal of Vision, 13 (4), pp. 1–12, 2013. @article{Zhou2013, title = {Shared attention for smooth pursuit and saccades}, author = {Jing Zhou and Adam Reeves and Scott N J Watamaniuk and Stephen J Heinen}, doi = {10.1167/13.4.7}, year = {2013}, date = {2013-01-01}, journal = {Journal of Vision}, volume = {13}, number = {4}, pages = {1--12}, abstract = {Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared. |
Yang Zhou; Gongchen Yu; Xuefei Yu; Si Wu; Mingsha Zhang Asymmetric representations of upper and lower visual fields in egocentric and allocentric references Journal Article Journal of Vision, 17 (1), pp. 1–11, 2017. @article{Zhou2017b, title = {Asymmetric representations of upper and lower visual fields in egocentric and allocentric references}, author = {Yang Zhou and Gongchen Yu and Xuefei Yu and Si Wu and Mingsha Zhang}, doi = {10.1167/17.1.9.doi}, year = {2017}, date = {2017-01-01}, journal = {Journal of Vision}, volume = {17}, number = {1}, pages = {1--11}, abstract = {Two spatial reference systems, i.e., the observer- centered (egocentric) and object-centered (allocentric) references, are most commonly used to locate the position of the external objects in space. Although we sense the world as a unified entity, visual processing is asymmetric between upper and lower visual fields (VFs). For example, the goal-directed reaching responses are more efficient in the lower VF. Such asymmetry suggests that the visual space might be composed of different realms regarding perception and action. Since the peripersonal realm includes the space that one can reach, mostly in the lower VF, it is highly likely that the peripersonal realm might mainly be represented in the egocentric reference for visuomotor operation. In contrast, the extrapersonal realm takes place away from the observer and is mostly observed in the upper VF, which is presumably represented in the allocentric reference for orientation in topographically defined space. This theory, however, has not been thoroughly tested experimentally. In the present study, we assessed the contributions of the egocentric and allocentric reference systems on visual discrimination in the upper and lower VFs through measuring the manual reaction times (RTs) of human subjects. We found that: (a) the influence of a target's egocentric location on visual discrimination was stronger in the lower VF; and (b) the influence of a target's allocentric location on visual discrimination was stronger in the upper VF. These results support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Two spatial reference systems, i.e., the observer- centered (egocentric) and object-centered (allocentric) references, are most commonly used to locate the position of the external objects in space. Although we sense the world as a unified entity, visual processing is asymmetric between upper and lower visual fields (VFs). For example, the goal-directed reaching responses are more efficient in the lower VF. Such asymmetry suggests that the visual space might be composed of different realms regarding perception and action. Since the peripersonal realm includes the space that one can reach, mostly in the lower VF, it is highly likely that the peripersonal realm might mainly be represented in the egocentric reference for visuomotor operation. In contrast, the extrapersonal realm takes place away from the observer and is mostly observed in the upper VF, which is presumably represented in the allocentric reference for orientation in topographically defined space. This theory, however, has not been thoroughly tested experimentally. 
In the present study, we assessed the contributions of the egocentric and allocentric reference systems on visual discrimination in the upper and lower VFs through measuring the manual reaction times (RTs) of human subjects. We found that: (a) the influence of a target's egocentric location on visual discrimination was stronger in the lower VF; and (b) the influence of a target's allocentric location on visual discrimination was stronger in the upper VF. These results support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively. |
Ying Zhou; Bing Li; Gang Wang; Mingsha Zhang; Yujun Pan Leftward deviation and asymmetric speed of egocentric judgment between left and right visual fields Journal Article Frontiers in Neuroscience, 11 , pp. 1–10, 2017. @article{Zhou2017d, title = {Leftward deviation and asymmetric speed of egocentric judgment between left and right visual fields}, author = {Ying Zhou and Bing Li and Gang Wang and Mingsha Zhang and Yujun Pan}, doi = {10.3389/fnins.2017.00364}, year = {2017}, date = {2017-01-01}, journal = {Frontiers in Neuroscience}, volume = {11}, pages = {1--10}, abstract = {The egocentric reference frame is essential for body orientation and spatial localization of external objects. Recent neuroimaging and lesion studies have revealed that the right hemisphere of humans may play a more dominant role in processing egocentric information than the left hemisphere. However, previous studies of egocentric discrimination mainly focused on assessing the accuracy of egocentric judgment, leaving its timing unexplored. In addition, most previous studies never monitored the subjects' eye position during the experiments, so the influence of eye position on egocentric judgment could not be excluded. In the present study, we systematically assessed the processing of egocentric information in healthy human subjects by measuring the location of their visual subjective straight ahead (SSA) and their manual reaction time (RT) during fixation (monitored by eye tracker). In an egocentric discrimination task, subjects were required to judge the position of a visual cue relative to the subjective mid-sagittal plane and respond as quickly as possible. We found that the SSA of all subjects deviated to the left side of the body mid-sagittal plane. In addition, all subjects but one showed the longest RT at the location closest to the SSA; and in population, the RTs in the left visual field (VF) were longer than that in the right VF. These results might be due to the right hemisphere's dominant role in processing egocentric information, and its more prominent representation of the ipsilateral VF than that of the left hemisphere.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The egocentric reference frame is essential for body orientation and spatial localization of external objects. Recent neuroimaging and lesion studies have revealed that the right hemisphere of humans may play a more dominant role in processing egocentric information than the left hemisphere. However, previous studies of egocentric discrimination mainly focused on assessing the accuracy of egocentric judgment, leaving its timing unexplored. In addition, most previous studies never monitored the subjects' eye position during the experiments, so the influence of eye position on egocentric judgment could not be excluded. In the present study, we systematically assessed the processing of egocentric information in healthy human subjects by measuring the location of their visual subjective straight ahead (SSA) and their manual reaction time (RT) during fixation (monitored by eye tracker). In an egocentric discrimination task, subjects were required to judge the position of a visual cue relative to the subjective mid-sagittal plane and respond as quickly as possible. We found that the SSA of all subjects deviated to the left side of the body mid-sagittal plane. 
In addition, all subjects but one showed the longest RT at the location closest to the SSA, and, at the population level, the RTs in the left visual field (VF) were longer than those in the right VF. These results might be due to the right hemisphere's dominant role in processing egocentric information and its more prominent representation of the ipsilateral VF compared with that of the left hemisphere. |
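Note: the subjective straight ahead (SSA) could, for example, be estimated by fitting a psychometric function to left/right judgments across cue azimuths and taking its 50% point. The sketch below is purely illustrative, with fabricated data, and is not necessarily the estimation procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, sigma):
    """Probability of judging a cue as 'right of straight ahead'."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

# Hypothetical data: cue azimuths (deg, negative = left) and the proportion
# of 'right' responses observed at each azimuth.
azimuth = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)
p_right = np.array([0.02, 0.10, 0.30, 0.62, 0.88, 0.97, 1.00])

(mu, sigma), _ = curve_fit(logistic, azimuth, p_right, p0=(0.0, 1.0))
# mu is the point of subjective straight ahead; a negative value would mean
# the SSA deviates to the left of the body mid-sagittal plane.
print(f"estimated SSA: {mu:+.2f} deg ({'left' if mu < 0 else 'right'} of midline)")
```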
Peng Zhou; Weiyi Ma; Likan Zhan; Huimin Ma Using the visual world paradigm to study sentence comprehension in Mandarin-speaking children with autism Journal Article Journal of Visualized Experiments, (140), pp. 1–8, 2018. @article{Zhou2018g, title = {Using the visual world paradigm to study sentence comprehension in Mandarin-speaking children with autism}, author = {Peng Zhou and Weiyi Ma and Likan Zhan and Huimin Ma}, doi = {10.3791/58452}, year = {2018}, date = {2018-01-01}, journal = {Journal of Visualized Experiments}, number = {140}, pages = {1--8}, abstract = {Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying sentence comprehension remains largely unclear. The present study presents a protocol to examine the sentence comprehension abilities of preschool children with autism. More specifically, a visual world paradigm of eye-tracking is used to explore the moment-to-moment sentence comprehension in the children. The paradigm has multiple advantages. First, it is sensitive to the time course of sentence comprehension and thus can provide rich information about how sentence comprehension unfolds over time. Second, it requires minimal task and communication demands, so it is ideal for testing children with autism. To further minimize the computational burden of children, the present study measures eye movements that arise as automatic responses to linguistic input rather than measuring eye movements that accompany conscious responses to spoken instructions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying sentence comprehension remains largely unclear. The present study presents a protocol to examine the sentence comprehension abilities of preschool children with autism. More specifically, a visual world paradigm of eye-tracking is used to explore the moment-to-moment sentence comprehension in the children. The paradigm has multiple advantages. First, it is sensitive to the time course of sentence comprehension and thus can provide rich information about how sentence comprehension unfolds over time. Second, it requires minimal task and communication demands, so it is ideal for testing children with autism. To further minimize the computational burden of children, the present study measures eye movements that arise as automatic responses to linguistic input rather than measuring eye movements that accompany conscious responses to spoken instructions. |
Peng Zhou; Likan Zhan; Huimin Ma Predictive language processing in preschool children with autism spectrum disorder: An eye-tracking study Journal Article Journal of Psycholinguistic Research, 48 (2), pp. 431–452, 2019. @article{Zhou2019, title = {Predictive language processing in preschool children with autism spectrum disorder: An eye-tracking study}, author = {Peng Zhou and Likan Zhan and Huimin Ma}, doi = {10.1007/s10936-018-9612-5}, year = {2019}, date = {2019-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {48}, number = {2}, pages = {431--452}, publisher = {Springer US}, abstract = {Sentence comprehension relies on the abilities to rapidly integrate different types of linguistic and non-linguistic information. The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) are able to use verb information predictively to anticipate the upcoming linguistic input during real-time sentence comprehension. 26 five-year-olds with ASD, 25 typically developing (TD) five-year-olds and 24 TD four-year-olds were tested using the visual world eye-tracking paradigm. The results showed that the 5-year-olds with ASD, like their TD peers, exhibited verb-based anticipatory eye movements during real-time sentence comprehension. No difference was observed between the ASD and TD groups in the time course of their eye gaze patterns, indicating that Mandarin-speaking preschool children with ASD are able to use verb information as effectively and rapidly as TD peers to predict the upcoming linguistic input.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Sentence comprehension relies on the abilities to rapidly integrate different types of linguistic and non-linguistic information. The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) are able to use verb information predictively to anticipate the upcoming linguistic input during real-time sentence comprehension. 26 five-year-olds with ASD, 25 typically developing (TD) five-year-olds and 24 TD four-year-olds were tested using the visual world eye-tracking paradigm. The results showed that the 5-year-olds with ASD, like their TD peers, exhibited verb-based anticipatory eye movements during real-time sentence comprehension. No difference was observed between the ASD and TD groups in the time course of their eye gaze patterns, indicating that Mandarin-speaking preschool children with ASD are able to use verb information as effectively and rapidly as TD peers to predict the upcoming linguistic input. |
Peng Zhou; Likan Zhan; Huimin Ma Understanding others' minds: Social inference in preschool children with autism spectrum disorder Journal Article Journal of Autism and Developmental Disorders, pp. 1–12, 2019. @article{Zhou2019a, title = {Understanding others' minds: Social inference in preschool children with autism spectrum disorder}, author = {Peng Zhou and Likan Zhan and Huimin Ma}, doi = {10.1007/s10803-019-04167-x}, year = {2019}, date = {2019-08-01}, journal = {Journal of Autism and Developmental Disorders}, pages = {1--12}, publisher = {Springer Science and Business Media LLC}, abstract = {The study used an eye-tracking task to investigate whether preschool children with autism spectrum disorder (ASD) are able to make inferences about others' behavior in terms of their mental states in a social setting. Fifty typically developing (TD) 4- and 5-year-olds and 22 5-year-olds with ASD participated in the study, where their eye-movements were recorded as automatic responses to given situations. The results show that unlike their TD peers, children with ASD failed to exhibit eye gaze patterns that reflect their ability to infer about others' behavior by spontaneously encoding socially relevant information and attributing mental states to others. Implications of the findings were discussed in relation to the proposal that implicit/spontaneous Theory of Mind is persistently impaired in ASD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The study used an eye-tracking task to investigate whether preschool children with autism spectrum disorder (ASD) are able to make inferences about others' behavior in terms of their mental states in a social setting. Fifty typically developing (TD) 4- and 5-year-olds and 22 5-year-olds with ASD participated in the study, where their eye-movements were recorded as automatic responses to given situations. The results show that unlike their TD peers, children with ASD failed to exhibit eye gaze patterns that reflect their ability to infer about others' behavior by spontaneously encoding socially relevant information and attributing mental states to others. Implications of the findings were discussed in relation to the proposal that implicit/spontaneous Theory of Mind is persistently impaired in ASD. |
Peng Zhou; Weiyi Ma; Likan Zhan A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study Journal Article First Language, 40 (1), pp. 41–63, 2020. @article{Zhou2020, title = {A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study}, author = {Peng Zhou and Weiyi Ma and Likan Zhan}, doi = {10.1177/0142723719885270}, year = {2020}, date = {2020-01-01}, journal = {First Language}, volume = {40}, number = {1}, pages = {41--63}, abstract = {The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD. |
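Note: visual world data of the kind used in the preceding studies are commonly summarized as the proportion of looks to the target picture in successive time bins relative to word onset. The sketch below illustrates that computation on fabricated gaze samples; the data layout and bin size are assumptions, and this is not the authors' analysis code.

```python
import numpy as np

def proportion_to_target(samples, bin_ms=50):
    """Illustrative proportion-of-looks curve for visual world data.

    samples: array of shape (n_trials, n_ms) with 1 where gaze is on the
    target picture and 0 elsewhere, time-locked to the spoken-word onset.
    """
    n_trials, n_ms = samples.shape
    n_bins = n_ms // bin_ms
    trimmed = samples[:, : n_bins * bin_ms].reshape(n_trials, n_bins, bin_ms)
    return trimmed.mean(axis=(0, 2))  # mean over trials and samples within each bin

# Fake data: 20 trials, 600 ms of samples, looks to the target ramp up after ~200 ms.
rng = np.random.default_rng(1)
ramp = np.clip((np.arange(600) - 200) / 300, 0.1, 0.9)
samples = rng.random((20, 600)) < ramp
curve = proportion_to_target(samples.astype(int))
print(np.round(curve, 2))  # one value per 50 ms bin
```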
Xiao Lin Zhu; Shu Ping Tan; Fu De Yang; Wei Sun; Chong Sheng Song; Jie Feng Cui; Yan Li Zhao; Feng Mei Fan; Ya Jun Li; Yun Long Tan; Yi Zhuang Zou Visual scanning of emotional faces in schizophrenia Journal Article Neuroscience Letters, 552 , pp. 46–51, 2013. @article{Zhu2013a, title = {Visual scanning of emotional faces in schizophrenia}, author = {Xiao Lin Zhu and Shu Ping Tan and Fu {De Yang} and Wei Sun and Chong Sheng Song and Jie Feng Cui and Yan Li Zhao and Feng Mei Fan and Ya Jun Li and Yun Long Tan and Yi Zhuang Zou}, doi = {10.1016/j.neulet.2013.07.046}, year = {2013}, date = {2013-01-01}, journal = {Neuroscience Letters}, volume = {552}, pages = {46--51}, publisher = {Elsevier Ireland Ltd}, abstract = {This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they showed significantly more direct fixations that lasted longer to interest areas (IAs), such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces. These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they showed significantly more direct fixations that lasted longer to interest areas (IAs), such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces. 
These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters. |
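The scanning measures discussed in this entry (total fixation number, mean and total fixation duration, fixations per second to interest areas, and the percentage of fixations landing in interest areas) are simple to derive from a fixation list. Below is a minimal Python sketch assuming a hypothetical data layout in which each fixation carries a position, a duration, and an optional interest-area label; the field names and the trial_duration argument are illustrative, not the study's actual data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    x: float              # horizontal gaze position (pixels)
    y: float              # vertical gaze position (pixels)
    duration: float       # fixation duration in seconds
    ia: Optional[str]     # interest-area label ("eyes", "nose", ...), None if outside all IAs

def scanning_metrics(fixations: List[Fixation], trial_duration: float) -> dict:
    """Compute the visual-scanning measures named in the abstract.

    trial_duration is the stimulus presentation time in seconds (assumed known).
    """
    total_fix_number = len(fixations)
    total_fix_duration = sum(f.duration for f in fixations)
    mean_fix_duration = total_fix_duration / total_fix_number if total_fix_number else 0.0
    ia_fix_number = sum(1 for f in fixations if f.ia is not None)
    return {
        "total_fixation_number": total_fix_number,
        "mean_fixation_duration": mean_fix_duration,
        "total_fixation_duration": total_fix_duration,
        # fixations per second directed to interest areas ("IA fixation number/s")
        "ia_fixation_number_per_s": ia_fix_number / trial_duration if trial_duration else 0.0,
        # proportion of all fixations landing in an interest area ("IA fixation number %")
        "ia_fixation_number_pct": 100.0 * ia_fix_number / total_fix_number if total_fix_number else 0.0,
    }
```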
Hongzhi Zhu; Septimiu Salcudean; Robert Rohling The Neyman Pearson detection of microsaccades with maximum likelihood estimation of parameters Journal Article Journal of Vision, 19 (13), pp. 1–17, 2019. @article{Zhu2019b, title = {The Neyman Pearson detection of microsaccades with maximum likelihood estimation of parameters}, author = {Hongzhi Zhu and Septimiu Salcudean and Robert Rohling}, year = {2019}, date = {2019-01-01}, journal = {Journal of Vision}, volume = {19}, number = {13}, pages = {1--17}, abstract = {Despite the fact that the velocity threshold method is widely applied, the detection of microsaccades continues to be a challenging problem, due to gaze-tracking inaccuracy and the transient nature of microsaccades. Important parameters associated with a saccadic event, e.g., saccade duration, amplitude, and maximum velocity, are sometimes imprecisely estimated, which may lead to biases in inferring the roles of microsaccades in perception and cognition. To overcome the biases and have a better detection algorithm for microsaccades, we propose a novel statistical model for the tracked gaze positions during eye fixations. In this model, we incorporate a parametrization that has been previously applied to model saccades, which allows us to veridically capture the velocity profile of saccadic eye movements. Based on our model, we derive the Neyman Pearson Detector (NPD) for saccadic events. Implemented in conjunction with the maximum likelihood estimation method, our NPD can detect a saccadic event and estimate all parameters simultaneously. Because of its adaptive nature and its statistical optimality, our NPD method was able to better detect microsaccades in some datasets when compared with a recently proposed state-of-the-art method based on convolutional neural networks. NPD also yielded comparable performance with a recently developed Bayesian algorithm, with the added benefit of modeling a more biologically veridical velocity profile of the saccade. As opposed to these algorithms, NPD can lend itself better to online saccade detection, and thus has potential for human-computer interaction applications. Our algorithm is publicly available at https://github.com/hz-zhu/NPD-micro-saccade-detection.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the fact that the velocity threshold method is widely applied, the detection of microsaccades continues to be a challenging problem, due to gaze-tracking inaccuracy and the transient nature of microsaccades. Important parameters associated with a saccadic event, e.g., saccade duration, amplitude, and maximum velocity, are sometimes imprecisely estimated, which may lead to biases in inferring the roles of microsaccades in perception and cognition. To overcome the biases and have a better detection algorithm for microsaccades, we propose a novel statistical model for the tracked gaze positions during eye fixations. In this model, we incorporate a parametrization that has been previously applied to model saccades, which allows us to veridically capture the velocity profile of saccadic eye movements. Based on our model, we derive the Neyman Pearson Detector (NPD) for saccadic events. Implemented in conjunction with the maximum likelihood estimation method, our NPD can detect a saccadic event and estimate all parameters simultaneously. 
Because of its adaptive nature and its statistical optimality, our NPD method was able to better detect microsaccades in some datasets when compared with a recently proposed state-of-the-art method based on convolutional neural networks. NPD also yielded comparable performance with a recently developed Bayesian algorithm, with the added benefit of modeling a more biologically veridical velocity profile of the saccade. As opposed to these algorithms, NPD can lend itself better to online saccade detection, and thus has potential for human-computer interaction applications. Our algorithm is publicly available at https://github.com/hz-zhu/NPD-micro-saccade-detection. |
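At its core, a Neyman-Pearson detector is a likelihood-ratio test whose threshold is set from a target false-alarm rate. The sketch below illustrates that idea for a single window of gaze-velocity samples under strongly simplified assumptions (a known saccade velocity template and white Gaussian fixation noise), in which case the test reduces to a matched filter. It is not the authors' implementation, which fits a richer statistical model with maximum likelihood estimation of the saccade parameters (see their linked repository).

```python
import numpy as np
from scipy.stats import norm

def gaussian_velocity_template(n_samples: int, peak_velocity: float, width: float = 0.35) -> np.ndarray:
    """A smooth, bell-shaped saccade velocity profile (an illustrative
    parametrization, not the one derived in the paper)."""
    t = np.linspace(-1.0, 1.0, n_samples)
    return peak_velocity * np.exp(-0.5 * (t / width) ** 2)

def neyman_pearson_detect(velocity_window: np.ndarray, template: np.ndarray,
                          noise_sigma: float, false_alarm_rate: float = 1e-3) -> bool:
    """Test H1: window = template + white Gaussian noise against H0: window = noise.

    With the template and noise variance known, the Neyman-Pearson likelihood-ratio
    test reduces to a matched filter: correlate the data with the template and compare
    against a threshold chosen so that P(detection | H0) equals false_alarm_rate.
    """
    statistic = float(template @ velocity_window)
    # Under H0 the statistic is zero-mean Gaussian with this standard deviation.
    statistic_sd = noise_sigma * float(np.linalg.norm(template))
    threshold = norm.ppf(1.0 - false_alarm_rate) * statistic_sd
    return statistic > threshold

# Toy usage: a noise-only window vs. a window containing an embedded microsaccade.
rng = np.random.default_rng(0)
template = gaussian_velocity_template(40, peak_velocity=30.0)
noise_only = rng.normal(0.0, 5.0, 40)
with_saccade = noise_only + template
print(neyman_pearson_detect(noise_only, template, noise_sigma=5.0))    # typically False
print(neyman_pearson_detect(with_saccade, template, noise_sigma=5.0))  # typically True
```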
Jing Zhu; Ying Wang; Rong La; Jiawei Zhan; Junhong Niu; Shuai Zeng; Xiping Hu Multimodal mild depression recognition based on EEG-EM synchronization acquisition network Journal Article IEEE Access, 7 , pp. 28196–28210, 2019. @article{Zhu2019c, title = {Multimodal mild depression recognition based on EEG-EM synchronization acquisition network}, author = {Jing Zhu and Ying Wang and Rong La and Jiawei Zhan and Junhong Niu and Shuai Zeng and Xiping Hu}, doi = {10.1109/ACCESS.2019.2901950}, year = {2019}, date = {2019-01-01}, journal = {IEEE Access}, volume = {7}, pages = {28196--28210}, publisher = {IEEE}, abstract = {In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EEG and EM data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities. Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy by 1.88%, while the hidden layer fusion method significantly improved the classification rate by up to 7.36%. In particular, the highest classification accuracy achieved in this paper was 83.42%. These results indicate that the multimodal deep learning approaches with input data using a combination of EEG and EM signals are promising in achieving real-time monitoring and identification of mild depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EEG and EM data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities. 
Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy by 1.88%, while the hidden layer fusion method significantly improved the classification rate by up to 7.36%. In particular, the highest classification accuracy achieved in this paper was 83.42%. These results indicate that the multimodal deep learning approaches with input data using a combination of EEG and EM signals are promising in achieving real-time monitoring and identification of mild depression. |
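The two fusion strategies compared in this entry can be sketched in a few lines of Python (here with PyTorch): feature-level fusion concatenates the raw EEG and EM feature vectors before a shared encoder, whereas hidden-layer fusion encodes each modality separately and concatenates the learned representations before classification. The layer sizes, ReLU encoders, and two-class linear head are illustrative assumptions, not the authors' denoising-autoencoder architecture.

```python
import torch
import torch.nn as nn

class FeatureFusionNet(nn.Module):
    """Feature-level fusion: concatenate raw EEG and EM feature vectors,
    then pass the joint vector through a single encoder and classifier."""
    def __init__(self, eeg_dim: int, em_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(eeg_dim + em_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, 2)   # mild depression vs. control

    def forward(self, eeg: torch.Tensor, em: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encoder(torch.cat([eeg, em], dim=1)))

class HiddenFusionNet(nn.Module):
    """Hidden-layer fusion: encode each modality separately and concatenate
    the learned hidden representations before classification."""
    def __init__(self, eeg_dim: int, em_dim: int, hidden: int = 64):
        super().__init__()
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.em_encoder = nn.Sequential(nn.Linear(em_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, eeg: torch.Tensor, em: torch.Tensor) -> torch.Tensor:
        h = torch.cat([self.eeg_encoder(eeg), self.em_encoder(em)], dim=1)
        return self.classifier(h)
```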
Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang An improved classification model for depression detection using EEG and eye tracking data Journal Article IEEE Transactions on Nanobioscience, 19 (3), pp. 527–537, 2020. @article{Zhu2020a, title = {An improved classification model for depression detection using EEG and eye tracking data}, author = {Jing Zhu and Zihan Wang and Tao Gong and Shuai Zeng and Xiaowei Li and Bin Hu and Jianxiu Li and Shuting Sun and Lan Zhang}, doi = {10.1109/TNB.2020.2990690}, year = {2020}, date = {2020-01-01}, journal = {IEEE Transactions on Nanobioscience}, volume = {19}, number = {3}, pages = {527--537}, abstract = {At present, depression has become a major health burden worldwide. However, there are many problems with the diagnosis of depression, such as low patient cooperation, subjective bias and low accuracy. Therefore, a reliable and objective evaluation method is needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movement (EM) data have been widely used for depression detection due to their advantages of easy recording and non-invasiveness. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic CBEM are discussed. In the proposed model, the EEG or EM dataset was divided into subsets by the context of the experiments, and then a majority vote strategy was used to determine the subjects' label. The method was validated on two datasets, one with free-viewing eye tracking and one with resting-state EEG, comprising 36 and 34 subjects, respectively. For these two datasets, CBEM achieves accuracies of 82.5% and 92.65%, respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective approach for improving the accuracy of depression identification, which in the future could be used for the auxiliary diagnosis of depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } At present, depression has become a major health burden worldwide. However, there are many problems with the diagnosis of depression, such as low patient cooperation, subjective bias and low accuracy. Therefore, a reliable and objective evaluation method is needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movement (EM) data have been widely used for depression detection due to their advantages of easy recording and non-invasiveness. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic CBEM are discussed. In the proposed model, the EEG or EM dataset was divided into subsets by the context of the experiments, and then a majority vote strategy was used to determine the subjects' label. The method was validated on two datasets, one with free-viewing eye tracking and one with resting-state EEG, comprising 36 and 34 subjects, respectively. For these two datasets, CBEM achieves accuracies of 82.5% and 92.65%, respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective approach for improving the accuracy of depression identification, which in the future could be used for the auxiliary diagnosis of depression. |
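The content-based ensemble idea itself (one base classifier per experimental context, with each subject's final label decided by majority vote over the per-context predictions) can be sketched as follows. The (context, features, label, subject) data layout and the logistic-regression base learner are assumptions made for illustration, not the classifiers used in the paper.

```python
from collections import Counter, defaultdict
from sklearn.linear_model import LogisticRegression

def train_cbem(samples):
    """samples: iterable of (context_id, feature_vector, label, subject_id) tuples.
    Train one base classifier per experimental context."""
    by_context = defaultdict(lambda: ([], []))
    for context, x, y, _subject in samples:
        by_context[context][0].append(x)
        by_context[context][1].append(y)
    return {context: LogisticRegression(max_iter=1000).fit(X, y)
            for context, (X, y) in by_context.items()}

def predict_subject_labels(models, samples):
    """Majority vote across each subject's per-context predictions."""
    votes = defaultdict(list)
    for context, x, _y, subject in samples:
        votes[subject].append(int(models[context].predict([x])[0]))
    return {subject: Counter(v).most_common(1)[0][0] for subject, v in votes.items()}
```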
Eckart Zimmermann; Markus Lappe Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades Journal Article Journal of Neuroscience, 29 (35), pp. 11055–11064, 2009. @article{Zimmermann2009, title = {Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades}, author = {Eckart Zimmermann and Markus Lappe}, doi = {10.1523/JNEUROSCI.1604-09.2009}, year = {2009}, date = {2009-01-01}, journal = {Journal of Neuroscience}, volume = {29}, number = {35}, pages = {11055--11064}, abstract = {When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. 
After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain. |
Eckart Zimmermann; Markus Lappe Motor signals in visual localization Journal Article Journal of Vision, 10 (6), pp. 1–11, 2010. @article{Zimmermann2010, title = {Motor signals in visual localization}, author = {Eckart Zimmermann and Markus Lappe}, doi = {10.1167/10.6.2}, year = {2010}, date = {2010-01-01}, journal = {Journal of Vision}, volume = {10}, number = {6}, pages = {1--11}, abstract = {We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input. |
Eckart Zimmermann; David C Burr; Concetta M Morrone Spatiotopic visual maps revealed by saccadic adaptation in humans Journal Article Current Biology, 21 (16), pp. 1380–1384, 2011. @article{Zimmermann2011, title = {Spatiotopic visual maps revealed by saccadic adaptation in humans}, author = {Eckart Zimmermann and David C Burr and Concetta M Morrone}, doi = {10.1016/j.cub.2011.06.014}, year = {2011}, date = {2011-01-01}, journal = {Current Biology}, volume = {21}, number = {16}, pages = {1380--1384}, publisher = {Elsevier Ltd}, abstract = {Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals. |
Eckart Zimmermann The reference frames in saccade adaptation Journal Article Journal of Neurophysiology, 109 , pp. 1815, 2013. @article{Zimmermann2013a, title = {The reference frames in saccade adaptation}, author = {Eckart Zimmermann}, doi = {10.1152/jn.00743.2012}, year = {2013}, date = {2013-01-01}, journal = {Journal of Neurophysiology}, volume = {109}, pages = {1815}, abstract = {Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level. |
Eckart Zimmermann; S Born; Gereon R Fink; P Cavanagh Masking produces compression of space and time in the absence of eye movements Journal Article Journal of Neurophysiology, 112 (12), pp. 3066–3076, 2014. @article{Zimmermann2014a, title = {Masking produces compression of space and time in the absence of eye movements}, author = {Eckart Zimmermann and S Born and Gereon R Fink and P Cavanagh}, doi = {10.1152/jn.00156.2014}, year = {2014}, date = {2014-01-01}, journal = {Journal of Neurophysiology}, volume = {112}, number = {12}, pages = {3066--3076}, abstract = {Whenever the visual stream is abruptly disturbed by eye movements, blinks, masks, or flashes of light, the visual system needs to retrieve the new locations of current targets and to reconstruct the timing of events to straddle the interruption. This process may introduce position and timing errors. We here report that very similar errors are seen in human subjects across three different paradigms when disturbances are caused by either eye movements, as is well known, or, as we now show, masking. We suggest that the characteristic effects of eye movements on position and time, spatial and temporal compression and saccadic suppression of displacement, are consequences of the interruption and the subsequent reconnection and are seen also when visual input is masked without any eye movements. Our data show that compression and suppression effects are not solely a product of ocular motor activity but instead can be properties of a correspondence process that links the targets of interest across interruptions in visual input, no matter what their source.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Whenever the visual stream is abruptly disturbed by eye movements, blinks, masks, or flashes of light, the visual system needs to retrieve the new locations of current targets and to reconstruct the timing of events to straddle the interruption. This process may introduce position and timing errors. We here report that very similar errors are seen in human subjects across three different paradigms when disturbances are caused by either eye movements, as is well known, or, as we now show, masking. We suggest that the characteristic effects of eye movements on position and time, spatial and temporal compression and saccadic suppression of displacement, are consequences of the interruption and the subsequent reconnection and are seen also when visual input is masked without any eye movements. Our data show that compression and suppression effects are not solely a product of ocular motor activity but instead can be properties of a correspondence process that links the targets of interest across interruptions in visual input, no matter what their source. |
Eckart Zimmermann; Concetta M Morrone; David C Burr The visual component to saccadic compression Journal Article Journal of Vision, 14 (12), pp. 13–, 2014. @article{Zimmermann2014b, title = {The visual component to saccadic compression}, author = {Eckart Zimmermann and Concetta M Morrone and David C Burr}, doi = {10.1167/14.12.13}, year = {2014}, date = {2014-01-01}, journal = {Journal of Vision}, volume = {14}, number = {12}, pages = {13--}, abstract = {Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after. |
Eckart Zimmermann Visual mislocalization during double-step saccades Journal Article Frontiers in Systems Neuroscience, 9 (132), pp. 1–9, 2015. @article{Zimmermann2015, title = {Visual mislocalization during double-step saccades}, author = {Eckart Zimmermann}, doi = {10.3389/fnsys.2015.00132}, year = {2015}, date = {2015-01-01}, journal = {Frontiers in Systems Neuroscience}, volume = {9}, number = {132}, pages = {1--9}, abstract = {Visual objects presented briefly at the time of saccade onset appear compressed toward the saccade target. Compression strength depends on the presentation of a visual saccade target signal and is strongly reduced during the second saccade of a double-step saccade sequence (Zimmermann et al., 2014b). Here, I tested whether perisaccadic compression is linked to saccade planning by contrasting two double-step paradigms. In the same-direction double-step paradigm, subjects were required to perform two rightward 10° saccades successively. At various times around execution of the saccade sequence a probe dot was briefly flashed. Subjects had to localize the position of the probe dot after they had completed both saccades. I found compression of visual space only at the time of the first but not at the time of the second saccade. In the reverse-direction paradigm, subjects performed first a rightward 10° saccade followed by a leftward 10° saccade back to initial fixation. In this paradigm compression was found in similar magnitude during both saccades. Analysis of the saccade parameters did not reveal indications of saccade sequence preplanning in this paradigm. I therefore conclude that saccade planning, rather than saccade execution factors, is involved in perisaccadic compression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual objects presented briefly at the time of saccade onset appear compressed toward the saccade target. Compression strength depends on the presentation of a visual saccade target signal and is strongly reduced during the second saccade of a double-step saccade sequence (Zimmermann et al., 2014b). Here, I tested whether perisaccadic compression is linked to saccade planning by contrasting two double-step paradigms. In the same-direction double-step paradigm, subjects were required to perform two rightward 10° saccades successively. At various times around execution of the saccade sequence a probe dot was briefly flashed. Subjects had to localize the position of the probe dot after they had completed both saccades. I found compression of visual space only at the time of the first but not at the time of the second saccade. In the reverse-direction paradigm, subjects performed first a rightward 10° saccade followed by a leftward 10° saccade back to initial fixation. In this paradigm compression was found in similar magnitude during both saccades. Analysis of the saccade parameters did not reveal indications of saccade sequence preplanning in this paradigm. I therefore conclude that saccade planning, rather than saccade execution factors, is involved in perisaccadic compression. |
Eckart Zimmermann; Florian Ostendorf; C J Ploner; Markus Lappe Impairment of saccade adaptation in a patient with a focal thalamic lesion Journal Article Journal of Neurophysiology, 113 (7), pp. 2351–2359, 2015. @article{Zimmermann2015b, title = {Impairment of saccade adaptation in a patient with a focal thalamic lesion}, author = {Eckart Zimmermann and Florian Ostendorf and C J Ploner and Markus Lappe}, doi = {10.1152/jn.00744.2014}, year = {2015}, date = {2015-01-01}, journal = {Journal of Neurophysiology}, volume = {113}, number = {7}, pages = {2351--2359}, abstract = {The frequent jumps of the eyeballs, called saccades, imply the need for a constant correction of motor errors. If systematic errors are detected in saccade landing, the saccade amplitude adapts to compensate for the error. In the laboratory, saccade adaptation can be studied by displacing the saccade target. Functional selectivity of adaptation for different saccade types suggests that adaptation occurs at multiple sites in the oculomotor system. Saccade motor learning might be the result of a comparison between a prediction of the saccade landing position and its actual postsaccadic location. To investigate whether a thalamic feedback pathway might carry such a prediction signal, we studied a patient with a lesion in the posterior ventrolateral thalamic nucleus. Saccade adaptation was tested for reactive saccades, which are performed to suddenly appearing targets, and for scanning saccades, which are performed to stationary targets. For reactive saccades, we found a clear impairment in adaptation retention ipsilateral to the lesioned side and a larger-than-normal adaptation on the contralesional side. For scanning saccades, adaptation was intact on both sides and not different from the control group. Our results provide the first lesion evidence that adaptation of reactive and scanning saccades relies on distinct feedback pathways from cerebellum to cortex. They further demonstrate that saccade adaptation in humans is not restricted to the cerebellum but also involves cortical areas. The paradoxically strong adaptation for outward target steps can be explained by stronger reliance on visual targeting errors when prediction error signaling is impaired.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The frequent jumps of the eyeballs, called saccades, imply the need for a constant correction of motor errors. If systematic errors are detected in saccade landing, the saccade amplitude adapts to compensate for the error. In the laboratory, saccade adaptation can be studied by displacing the saccade target. Functional selectivity of adaptation for different saccade types suggests that adaptation occurs at multiple sites in the oculomotor system. Saccade motor learning might be the result of a comparison between a prediction of the saccade landing position and its actual postsaccadic location. To investigate whether a thalamic feedback pathway might carry such a prediction signal, we studied a patient with a lesion in the posterior ventrolateral thalamic nucleus. Saccade adaptation was tested for reactive saccades, which are performed to suddenly appearing targets, and for scanning saccades, which are performed to stationary targets. For reactive saccades, we found a clear impairment in adaptation retention ipsilateral to the lesioned side and a larger-than-normal adaptation on the contralesional side. For scanning saccades, adaptation was intact on both sides and not different from the control group. 
Our results provide the first lesion evidence that adaptation of reactive and scanning saccades relies on distinct feedback pathways from cerebellum to cortex. They further demonstrate that saccade adaptation in humans is not restricted to the cerebellum but also involves cortical areas. The paradoxically strong adaptation for outward target steps can be explained by stronger reliance on visual targeting errors when prediction error signaling is impaired. |
Eckart Zimmermann Spatiotopic buildup of saccade target representation depends on target size Journal Article Journal of Vision, 16 (15), pp. 11, 2016. @article{Zimmermann2016, title = {Spatiotopic buildup of saccade target representation depends on target size}, author = {Eckart Zimmermann}, doi = {10.1167/16.15.11}, year = {2016}, date = {2016-01-01}, journal = {Journal of Vision}, volume = {16}, number = {15}, pages = {11}, abstract = {How we maintain spatial stability across saccade eye movements is an open question in visual neuroscience. A phenomenon that has received much attention in the field is our seemingly poor ability to discriminate the direction of transsaccadic target displacements. We have recently shown that discrimination performance increases the longer the saccade target has been previewed before saccade execution (Zimmermann, Morrone, & Burr, 2013). We have argued that the spatial representation of briefly presented stimuli is weak but that a strong representation is needed for transsaccadic, i.e., spatiotopic localization. Another factor that modulates the representation of saccade targets is stimulus size. The representation of spatially extended targets is more noisy than that of point-like targets. Here, I show that the increase in transsaccadic displacement discrimination as a function of saccade target preview duration depends on target size. This effect was found for spatially extended targets—thus replicating the results of Zimmermann et al. (2013)— but not for point-like targets. An analysis of saccade parameters revealed that the constant error for reaching the saccade target was bigger for spatially extended than for point-like targets, consistent with weaker representation of bigger targets. These results show that transsaccadic displacement discrimination becomes accurate when saccade targets are spatially extended and presented longer, thus resembling closer stimuli in real-world environments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How we maintain spatial stability across saccade eye movements is an open question in visual neuroscience. A phenomenon that has received much attention in the field is our seemingly poor ability to discriminate the direction of transsaccadic target displacements. We have recently shown that discrimination performance increases the longer the saccade target has been previewed before saccade execution (Zimmermann, Morrone, & Burr, 2013). We have argued that the spatial representation of briefly presented stimuli is weak but that a strong representation is needed for transsaccadic, i.e., spatiotopic localization. Another factor that modulates the representation of saccade targets is stimulus size. The representation of spatially extended targets is more noisy than that of point-like targets. Here, I show that the increase in transsaccadic displacement discrimination as a function of saccade target preview duration depends on target size. This effect was found for spatially extended targets—thus replicating the results of Zimmermann et al. (2013)— but not for point-like targets. An analysis of saccade parameters revealed that the constant error for reaching the saccade target was bigger for spatially extended than for point-like targets, consistent with weaker representation of bigger targets. These results show that transsaccadic displacement discrimination becomes accurate when saccade targets are spatially extended and presented longer, thus resembling closer stimuli in real-world environments. |
Eckart Zimmermann; Concetta M Morrone; David C Burr Adaptation to size affects saccades with long but not short latencies Journal Article Journal of Vision, 16 (7), pp. 2, 2016. @article{Zimmermann2016a, title = {Adaptation to size affects saccades with long but not short latencies}, author = {Eckart Zimmermann and Concetta M Morrone and David C Burr}, doi = {10.1167/16.7.2}, year = {2016}, date = {2016-01-01}, journal = {Journal of Vision}, volume = {16}, number = {7}, pages = {2}, abstract = {Maintained exposure to a specific stimulus property— such as size, color, or motion—induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Maintained exposure to a specific stimulus property— such as size, color, or motion—induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location. |
Eckart Zimmermann; Ralph Weidner; R O Abdollahi; Gereon R Fink Spatiotopic adaptation in visual areas Journal Article Journal of Neuroscience, 36 (37), pp. 9526–9534, 2016. @article{Zimmermann2016b, title = {Spatiotopic adaptation in visual areas}, author = {Eckart Zimmermann and Ralph Weidner and R O Abdollahi and Gereon R Fink}, doi = {10.1523/JNEUROSCI.0052-16.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neuroscience}, volume = {36}, number = {37}, pages = {9526--9534}, abstract = {The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. We here combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by a prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed dissociating adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation in both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. We here combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by a prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed dissociating adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation in both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas. |
Eckart Zimmermann; Concetta M Morrone; P Binda Perception during double-step saccades Journal Article Scientific Reports, 8 , pp. 320, 2018. @article{Zimmermann2018, title = {Perception during double-step saccades}, author = {Eckart Zimmermann and Concetta M Morrone and P Binda}, doi = {10.1038/s41598-017-18554-w}, year = {2018}, date = {2018-01-01}, journal = {Scientific Reports}, volume = {8}, pages = {320}, publisher = {Springer US}, abstract = {How the visual system achieves perceptual stability across saccadic eye movements is a long-standing question in neuroscience. It has been proposed that an efference copy informs vision about upcoming saccades, and this might lead to shifting spatial coordinates and suppressing image motion. Here we ask whether these two aspects of visual stability are interdependent or may be dissociated under special conditions. We study a memory-guided double-step saccade task, where two saccades are executed in quick succession. Previous studies have led to the hypothesis that in this paradigm the two saccades are planned in parallel, with a single efference copy signal generated at the start of the double-step sequence, i.e. before the first saccade. In line with this hypothesis, we find that visual stability is impaired during the second saccade, which is consistent with (accurate) efference copy information being unavailable during the second saccade. However, we find that saccadic suppression is normal during the second saccade. Thus, the second saccade of a double-step sequence instantiates a dissociation between visual stability and saccadic suppression: stability is impaired even though suppression is strong.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How the visual system achieves perceptual stability across saccadic eye movements is a long-standing question in neuroscience. It has been proposed that an efference copy informs vision about upcoming saccades, and this might lead to shifting spatial coordinates and suppressing image motion. Here we ask whether these two aspects of visual stability are interdependent or may be dissociated under special conditions. We study a memory-guided double-step saccade task, where two saccades are executed in quick succession. Previous studies have led to the hypothesis that in this paradigm the two saccades are planned in parallel, with a single efference copy signal generated at the start of the double-step sequence, i.e. before the first saccade. In line with this hypothesis, we find that visual stability is impaired during the second saccade, which is consistent with (accurate) efference copy information being unavailable during the second saccade. However, we find that saccadic suppression is normal during the second saccade. Thus, the second saccade of a double-step sequence instantiates a dissociation between visual stability and saccadic suppression: stability is impaired even though suppression is strong. |
Eckart Zimmermann Saccade suppression depends on context Journal Article eLife, 9 , pp. 1–16, 2020. @article{Zimmermann2020, title = {Saccade suppression depends on context}, author = {Eckart Zimmermann}, doi = {10.7554/eLife.49700}, year = {2020}, date = {2020-01-01}, journal = {eLife}, volume = {9}, pages = {1--16}, abstract = {Although our eyes are in constant movement, we remain unaware of the high-speed stimulation produced by the retinal displacement. Vision is drastically reduced at the time of saccades. Here, I investigated whether the reduction of the unwanted disturbance could be established through a saccade-contingent habituation to intra-saccadic displacements. In more than 100 context trials, participants were exposed either to an intra-saccadic or to a post-saccadic disturbance or to no disturbance at all. After induction of a specific context, I measured peri-saccadic suppression. Displacement discrimination thresholds of observers were high after participants were exposed to an intra-saccadic disturbance. However, after exposure to a post-saccadic disturbance or a context without any intra-saccadic stimulation, displacement discrimination improved such that observers were able to see shifts as during fixation. Saccade-contingent habituation might explain why we do not perceive trans-saccadic retinal stimulation during saccades.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Although our eyes are in constant movement, we remain unaware of the high-speed stimulation produced by the retinal displacement. Vision is drastically reduced at the time of saccades. Here, I investigated whether the reduction of the unwanted disturbance could be established through a saccade-contingent habituation to intra-saccadic displacements. In more than 100 context trials, participants were exposed either to an intra-saccadic or to a post-saccadic disturbance or to no disturbance at all. After induction of a specific context, I measured peri-saccadic suppression. Displacement discrimination thresholds of observers were high after participants were exposed to an intra-saccadic disturbance. However, after exposure to a post-saccadic disturbance or a context without any intra-saccadic stimulation, displacement discrimination improved such that observers were able to see shifts as during fixation. Saccade-contingent habituation might explain why we do not perceive trans-saccadic retinal stimulation during saccades. |
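Displacement discrimination thresholds of the kind measured here are commonly estimated by fitting a psychometric function to the proportion of trials on which the intra-saccadic shift was reported. The sketch below fits a cumulative Gaussian with SciPy to hypothetical response data; it is a generic illustration of threshold estimation, not the analysis pipeline of this study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(displacement, threshold, slope):
    """Cumulative-Gaussian psychometric function: probability of reporting
    the displacement as a function of its size (deg)."""
    return norm.cdf(displacement, loc=threshold, scale=slope)

# Hypothetical data: displacement sizes (deg) and proportion of "seen" responses.
sizes = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
p_seen = np.array([0.05, 0.15, 0.45, 0.70, 0.90, 0.98])

(threshold, slope), _ = curve_fit(psychometric, sizes, p_seen, p0=[1.0, 0.5])
print(f"displacement discrimination threshold ~ {threshold:.2f} deg")
```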
Eckart Zimmermann; Marta Ghio; Giulio Pergola; Benno Koch; Michael Schwarz; Christian Bellebaum Separate and overlapping functional roles for efference copies in the human thalamus Journal Article Neuropsychologia, 147 , pp. 1–9, 2020. @article{Zimmermann2020a, title = {Separate and overlapping functional roles for efference copies in the human thalamus}, author = {Eckart Zimmermann and Marta Ghio and Giulio Pergola and Benno Koch and Michael Schwarz and Christian Bellebaum}, doi = {10.1016/j.neuropsychologia.2020.107558}, year = {2020}, date = {2020-01-01}, journal = {Neuropsychologia}, volume = {147}, pages = {1--9}, publisher = {Elsevier Ltd}, abstract = {How the perception of space is generated from the multiple maps in the brain is still an unsolved mystery in neuroscience. A neural pathway ascending from the superior colliculus through the medio-dorsal (MD) nucleus of thalamus to the frontal eye field has been identified in monkeys that conveys efference copy information about the metrics of upcoming eye movements. Information sent through this pathway stabilizes vision across saccades. We investigated whether this motor plan information might also shape spatial perception even when no saccades are performed. We studied patients with medial or lateral thalamic lesions (likely involving either the MD or the ventrolateral (VL) nuclei). Patients performed a double-step task testing motor updating, a trans-saccadic localization task testing visual updating, and a localization task during fixation testing a general role of motor signals for visual space in the absence of eye movements. Single patients with medial or lateral thalamic lesions showed deficits in the double-step task, reflecting insufficient transfer of efference copy. However, only a patient with a medial lesion showed impaired performance in the trans-saccadic localization task, suggesting that different types of efference copies contribute to motor and visual updating. During fixation, the MD patient localized stationary stimuli more accurately than healthy controls, suggesting that patients compensate the deficit in visual prediction of saccades - induced by the thalamic lesion - by relying on stationary visual references. We conclude that partially separable efference copy signals contribute to motor and visual stability in company of purely visual signals that are equally effective in supporting trans-saccadic perception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How the perception of space is generated from the multiple maps in the brain is still an unsolved mystery in neuroscience. A neural pathway ascending from the superior colliculus through the medio-dorsal (MD) nucleus of thalamus to the frontal eye field has been identified in monkeys that conveys efference copy information about the metrics of upcoming eye movements. Information sent through this pathway stabilizes vision across saccades. We investigated whether this motor plan information might also shape spatial perception even when no saccades are performed. We studied patients with medial or lateral thalamic lesions (likely involving either the MD or the ventrolateral (VL) nuclei). Patients performed a double-step task testing motor updating, a trans-saccadic localization task testing visual updating, and a localization task during fixation testing a general role of motor signals for visual space in the absence of eye movements. 
Single patients with medial or lateral thalamic lesions showed deficits in the double-step task, reflecting insufficient transfer of efference copy. However, only a patient with a medial lesion showed impaired performance in the trans-saccadic localization task, suggesting that different types of efference copies contribute to motor and visual updating. During fixation, the MD patient localized stationary stimuli more accurately than healthy controls, suggesting that patients compensate the deficit in visual prediction of saccades - induced by the thalamic lesion - by relying on stationary visual references. We conclude that partially separable efference copy signals contribute to motor and visual stability in company of purely visual signals that are equally effective in supporting trans-saccadic perception. |
Marc Zirnsak; R G K Gerhards; Roozbeh Kiani; Markus Lappe; Fred H Hamker Anticipatory saccade target processing and the presaccadic transfer of visual features Journal Article Journal of Neuroscience, 31 (49), pp. 17887–17891, 2011. @article{Zirnsak2011, title = {Anticipatory saccade target processing and the presaccadic transfer of visual features}, author = {Marc Zirnsak and R G K Gerhards and Roozbeh Kiani and Markus Lappe and Fred H Hamker}, doi = {10.1523/JNEUROSCI.2465-11.2011}, year = {2011}, date = {2011-01-01}, journal = {Journal of Neuroscience}, volume = {31}, number = {49}, pages = {17887--17891}, abstract = {As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007).}, keywords = {}, pubstate = {published}, tppubtype = {article} } As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. 
While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007). |
Wieske van Zoest; Mieke Donk; Jan Theeuwes The role of stimulus-driven and goal-driven control in saccadic visual selection Journal Article Journal of Experimental Psychology: Human Perception and Performance, 30 (4), pp. 746–759, 2004. @article{Zoest2004, title = {The role of stimulus-driven and goal-driven control in saccadic visual selection}, author = {Wieske van Zoest and Mieke Donk and Jan Theeuwes}, doi = {10.1037/0096-1523.30.4.746}, year = {2004}, date = {2004-01-01}, journal = {Journal of Experimental Psychology: Human Perception and Performance}, volume = {30}, number = {4}, pages = {746--759}, abstract = {Four experiments were conducted to investigate the role of stimulus-driven and goal-driven control in saccadic eye movements. Participants were required to make a speeded saccade toward a predefined target presented concurrently with multiple nontargets and possibly 1 distractor. Target and distractor were either equally salient (Experiments 1 and 2) or not (Experiments 3 and 4). The results uniformly demonstrated that fast eye movements were completely stimulus driven, whereas slower eye movements were goal driven. These results are in line with neither a bottom-up account nor a top-down notion of visual selection. Instead, they indicate that visual selection is the outcome of 2 independent processes, one stimulus driven and the other goal driven, operating in different time windows.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Four experiments were conducted to investigate the role of stimulus-driven and goal-driven control in saccadic eye movements. Participants were required to make a speeded saccade toward a predefined target presented concurrently with multiple nontargets and possibly 1 distractor. Target and distractor were either equally salient (Experiments 1 and 2) or not (Experiments 3 and 4). The results uniformly demonstrated that fast eye movements were completely stimulus driven, whereas slower eye movements were goal driven. These results are in line with neither a bottom-up account nor a top-down notion of visual selection. Instead, they indicate that visual selection is the outcome of 2 independent processes, one stimulus driven and the other goal driven, operating in different time windows. |
Wieske van Zoest; Mieke Donk Saccadic target selection as a function of time Journal Article Spatial Vision, 19 (1), pp. 61–76, 2006. @article{Zoest2006, title = {Saccadic target selection as a function of time}, author = {Wieske van Zoest and Mieke Donk}, doi = {10.1007/s10530-005-5106-0}, year = {2006}, date = {2006-01-01}, journal = {Spatial Vision}, volume = {19}, number = {1}, pages = {61--76}, abstract = {Recent evidence indicates that stimulus-driven and goal-directed control of visual selection operate independently and in different time windows (van Zoest et al., 2004). The present study further investigates how eye movements are affected by stimulus-driven and goal-directed control. Observers were presented with search displays consisting of one target, multiple non-targets and one distractor element. The task of observers was to make a fast eye movement to a target immediately following the offset of a central fixation point, an event that either co-occurred with or soon followed the presentation of the search display. Distractor saliency and target-distractor similarity were independently manipulated. The results demonstrated that the effect of distractor saliency was transient and only present for the fastest eye movements, whereas the effect of target-distractor similarity was sustained and present in all but the fastest eye movements. The results support an independent timing account of visual selection.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent evidence indicates that stimulus-driven and goal-directed control of visual selection operate independently and in different time windows (van Zoest et al., 2004). The present study further investigates how eye movements are affected by stimulus-driven and goal-directed control. Observers were presented with search displays consisting of one target, multiple non-targets and one distractor element. The task of observers was to make a fast eye movement to a target immediately following the offset of a central fixation point, an event that either co-occurred with or soon followed the presentation of the search display. Distractor saliency and target-distractor similarity were independently manipulated. The results demonstrated that the effect of distractor saliency was transient and only present for the fastest eye movements, whereas the effect of target-distractor similarity was sustained and present in all but the fastest eye movements. The results support an independent timing account of visual selection. |
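Both van Zoest and Donk studies rest on a latency-binned analysis: saccades are sorted by onset latency and selection performance is computed per bin, so a transient stimulus-driven effect appears only in the fastest bins while a goal-driven effect builds up in slower ones. The sketch below illustrates that kind of analysis on fabricated data; the column names, bin edges, and capture rates are assumptions, not values from the papers.

```python
# Illustrative sketch with fabricated data (not the authors' code): bin saccades
# by latency and compute, per bin, how often the first saccade went to the
# salient distractor instead of the instructed target.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
latency_ms = rng.uniform(120, 400, n)
# Fabricated pattern: oculomotor capture is frequent only for fast saccades.
p_capture = np.where(latency_ms < 200, 0.6, 0.2)
saccades = pd.DataFrame({
    "latency_ms": latency_ms,
    "to_distractor": rng.random(n) < p_capture,
})

bins = [120, 180, 240, 300, 400]  # hypothetical latency bin edges (ms)
saccades["latency_bin"] = pd.cut(saccades["latency_ms"], bins)

# A transient, stimulus-driven effect shows up as high capture rates that are
# confined to the earliest latency bins.
capture_by_bin = saccades.groupby("latency_bin", observed=True)["to_distractor"].mean()
print(capture_by_bin)
```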
Wieske van Zoest; Mieke Donk Awareness of the saccade goal in oculomotor selection: Your eyes go before you know Journal Article Consciousness and Cognition, 19 (4), pp. 861–871, 2010. @article{Zoest2010, title = {Awareness of the saccade goal in oculomotor selection: Your eyes go before you know}, author = {Wieske van Zoest and Mieke Donk}, doi = {10.1016/j.concog.2010.04.001}, year = {2010}, date = {2010-01-01}, journal = {Consciousness and Cognition}, volume = {19}, number = {4}, pages = {861--871}, publisher = {Elsevier Inc.}, abstract = {The aim of the present study was to investigate how saccadic selection relates to people's awareness of the saliency and identity of a saccade goal. Observers were instructed to make an eye movement to either the most salient line segment (Experiment 1) or the only right-tilted element (Experiment 2) in a visual search display. The display was masked contingent on the first eye movement and after each trial observers indicated whether or not they had correctly selected the target. Whereas people's awareness concerning the saliency of the saccade goal was generally low, their awareness concerning the identity was high. Observers' awareness of the saccade goal was not related to saccadic performance. Whereas saccadic selection consistently varied as a function of saccade latency, people's awareness concerning the saliency or identity of the saccade goal did not. The results suggest that saccadic selection is primarily driven by subconscious processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of the present study was to investigate how saccadic selection relates to people's awareness of the saliency and identity of a saccade goal. Observers were instructed to make an eye movement to either the most salient line segment (Experiment 1) or the only right-tilted element (Experiment 2) in a visual search display. The display was masked contingent on the first eye movement and after each trial observers indicated whether or not they had correctly selected the target. Whereas people's awareness concerning the saliency of the saccade goal was generally low, their awareness concerning the identity was high. Observers' awareness of the saccade goal was not related to saccadic performance. Whereas saccadic selection consistently varied as a function of saccade latency, people's awareness concerning the saliency or identity of the saccade goal did not. The results suggest that saccadic selection is primarily driven by subconscious processes. |
Wieske van Zoest; Benedetta Heimler; Francesco Pavani The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories Journal Article Experimental Brain Research, 235 , pp. 181–191, 2017. @article{Zoest2017, title = {The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories}, author = {Wieske van Zoest and Benedetta Heimler and Francesco Pavani}, doi = {10.1007/s00221-016-4779-1}, year = {2017}, date = {2017-01-01}, journal = {Experimental Brain Research}, volume = {235}, pages = {181--191}, publisher = {Springer Berlin Heidelberg}, abstract = {The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small and these specific effects developed over time such that there was a clear differentiation between saccadic deviation based on apparent motion for long-latency saccades, but not short-latency saccades. The present results suggest that the influence on performance of apparent motion stimuli is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuous moving stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small and these specific effects developed over time such that there was a clear differentiation between saccadic deviation based on apparent motion for long-latency saccades, but not short-latency saccades. The present results suggest that the influence on performance of apparent motion stimuli is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuous moving stimuli. |
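The dependent measure in this entry, saccade trajectory deviation, is usually computed as the signed perpendicular distance of the gaze trajectory from the straight line joining saccade onset and offset. The sketch below shows one common way to compute it on an invented trajectory; the function name and sample data are hypothetical and not taken from the paper.

```python
# Illustrative sketch (invented sample data): quantify saccade curvature as the
# maximum signed perpendicular deviation of the gaze trajectory from the
# straight line connecting saccade onset and offset.
import numpy as np

def max_signed_deviation(xy):
    """xy: (n_samples, 2) array of gaze positions from saccade onset to offset.
    Returns the signed perpendicular deviation (same units as xy) of the sample
    farthest from the onset-offset line; positive values mean the sample lies to
    the right of the movement direction (x-right, y-up coordinates)."""
    start, end = xy[0], xy[-1]
    direction = (end - start) / np.linalg.norm(end - start)
    rel = xy - start
    # 2-D cross product of each sample with the unit direction gives its signed
    # perpendicular distance to the onset-offset line.
    signed_dist = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    return signed_dist[np.argmax(np.abs(signed_dist))]

# Hypothetical upward saccade whose trajectory bows slightly to the right.
t = np.linspace(0.0, 1.0, 50)
trajectory = np.column_stack([0.3 * np.sin(np.pi * t), 8.0 * t])  # x, y in degrees
print(f"curvature = {max_signed_deviation(trajectory):.2f} deg")
```

Relating the sign to the distractor (deviation toward versus away) additionally requires knowing on which side of the target axis the distractor appeared on each trial.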