Eckart Zimmermann: Visual mislocalization during double-step saccades. Journal Article. Frontiers in Systems Neuroscience, 9(132), pp. 1–9, 2015. doi: 10.3389/fnsys.2015.00132.
Visual objects presented briefly at the time of saccade onset appear compressed toward the saccade target. Compression strength depends on the presentation of a visual saccade target signal and is strongly reduced during the second saccade of a double-step saccade sequence (Zimmermann et al., 2014b). Here, I tested whether perisaccadic compression is linked to saccade planning by contrasting two double-step paradigms. In the same-direction double-step paradigm, subjects were required to perform two rightward 10° saccades successively. At various times around execution of the saccade sequence a probe dot was briefly flashed. Subjects had to localize the position of the probe dot after they had completed both saccades. I found compression of visual space only at the time of the first but not at the time of the second saccade. In the reverse-direction paradigm, subjects performed first a rightward 10° saccade followed by a leftward 10° saccade back to initial fixation. In this paradigm compression was found in similar magnitude during both saccades. Analysis of the saccade parameters did not reveal indications of saccade sequence preplanning in this paradigm. I therefore conclude that saccade planning, rather than saccade execution factors, is involved in perisaccadic compression.
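The compression measure this abstract relies on can be illustrated with a minimal sketch: reported probe positions are pulled toward the saccade target, and the shrinkage of their spread relative to the veridical spread yields a compression index. All variable names and numbers below are hypothetical assumptions for illustration, not the paper's analysis code.

```python
# Minimal sketch: quantifying perisaccadic compression from localization
# reports. Hypothetical data and parameters; not the study's actual pipeline.
import numpy as np

# Veridical probe positions (deg) around a 10 deg saccade target.
probe_positions = np.array([-2.0, 2.0, 6.0, 14.0, 18.0])
target_position = 10.0

def compression_index(reported, veridical):
    """1 = full compression (all reports collapse onto one point),
    0 = veridical spread. Uses the spread of mean reports per probe."""
    spread_reported = np.ptp([np.mean(r) for r in reported])
    spread_veridical = np.ptp(veridical)
    return 1.0 - spread_reported / spread_veridical

# Simulate reports at saccade onset: each probe is pulled toward the target.
rng = np.random.default_rng(0)
alpha = 0.6  # assumed compression strength
reported = [v + alpha * (target_position - v) + rng.normal(0, 0.5, size=20)
            for v in probe_positions]

print(f"compression index = {compression_index(reported, probe_positions):.2f}")
```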
Eckart Zimmermann; Concetta M Morrone; David C Burr: Visual mislocalization during saccade sequences. Journal Article. Experimental Brain Research, 233(2), pp. 577–585, 2015. doi: 10.1007/s00221-014-4138-z.
Visual objects briefly presented around the time of saccadic eye movements are perceived compressed towards the saccade target. Here, we investigated perisaccadic mislocalization with a double-step saccade paradigm, measuring localization of small probe dots briefly flashed at various times around the sequence of the two saccades. At onset of the first saccade, probe dots were mislocalized towards the first and, to a lesser extent, also towards the second saccade target. However, there was very little mislocalization at the onset of the second saccade. When we increased the presentation duration of the saccade targets prior to onset of the saccade sequence, perisaccadic mislocalization did occur at the onset of the second saccade.
Eckart Zimmermann; Florian Ostendorf; C J Ploner; Markus Lappe: Impairment of saccade adaptation in a patient with a focal thalamic lesion. Journal Article. Journal of Neurophysiology, 113(7), pp. 2351–2359, 2015. doi: 10.1152/jn.00744.2014.
The frequent jumps of the eyeballs, called saccades, imply the need for a constant correction of motor errors. If systematic errors are detected in saccade landing, the saccade amplitude adapts to compensate for the error. In the laboratory, saccade adaptation can be studied by displacing the saccade target. Functional selectivity of adaptation for different saccade types suggests that adaptation occurs at multiple sites in the oculomotor system. Saccade motor learning might be the result of a comparison between a prediction of the saccade landing position and its actual postsaccadic location. To investigate whether a thalamic feedback pathway might carry such a prediction signal, we studied a patient with a lesion in the posterior ventrolateral thalamic nucleus. Saccade adaptation was tested for reactive saccades, which are performed to suddenly appearing targets, and for scanning saccades, which are performed to stationary targets. For reactive saccades, we found a clear impairment in adaptation retention ipsilateral to the lesioned side and a larger-than-normal adaptation on the contralesional side. For scanning saccades, adaptation was intact on both sides and not different from the control group. Our results provide the first lesion evidence that adaptation of reactive and scanning saccades relies on distinct feedback pathways from cerebellum to cortex. They further demonstrate that saccade adaptation in humans is not restricted to the cerebellum but also involves cortical areas. The paradoxically strong adaptation for outward target steps can be explained by stronger reliance on visual targeting errors when prediction error signaling is impaired.
Eckart Zimmermann: Spatiotopic buildup of saccade target representation depends on target size. Journal Article. Journal of Vision, 16(15), pp. 11, 2016. doi: 10.1167/16.15.11.
How we maintain spatial stability across saccade eye movements is an open question in visual neuroscience. A phenomenon that has received much attention in the field is our seemingly poor ability to discriminate the direction of transsaccadic target displacements. We have recently shown that discrimination performance increases the longer the saccade target has been previewed before saccade execution (Zimmermann, Morrone, & Burr, 2013). We have argued that the spatial representation of briefly presented stimuli is weak but that a strong representation is needed for transsaccadic, i.e., spatiotopic localization. Another factor that modulates the representation of saccade targets is stimulus size. The representation of spatially extended targets is more noisy than that of point-like targets. Here, I show that the increase in transsaccadic displacement discrimination as a function of saccade target preview duration depends on target size. This effect was found for spatially extended targets—thus replicating the results of Zimmermann et al. (2013)—but not for point-like targets. An analysis of saccade parameters revealed that the constant error for reaching the saccade target was bigger for spatially extended than for point-like targets, consistent with weaker representation of bigger targets. These results show that transsaccadic displacement discrimination becomes accurate when saccade targets are spatially extended and presented longer, thus more closely resembling stimuli in real-world environments.
Eckart Zimmermann; Concetta M Morrone; David C Burr: Adaptation to size affects saccades with long but not short latencies. Journal Article. Journal of Vision, 16(7), pp. 2, 2016. doi: 10.1167/16.7.2.
Maintained exposure to a specific stimulus property—such as size, color, or motion—induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location.
Eckart Zimmermann; Ralph Weidner; R O Abdollahi; Gereon R Fink: Spatiotopic adaptation in visual areas. Journal Article. Journal of Neuroscience, 36(37), pp. 9526–9534, 2016. doi: 10.1523/JNEUROSCI.0052-16.2016.
The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. We here combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by a prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed dissociating adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation in both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas.
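The retinotopic-versus-spatiotopic dissociation described above rests on comparing response reductions at two probe positions. Below is a generic sketch of such an fMRI-adaptation contrast with made-up ROI betas; it illustrates the logic only and is not the study's GLM pipeline.

```python
# Sketch: a generic fMRI-adaptation index of the kind such designs afford.
# All numbers are hypothetical; ROI labels follow the abstract above.
import numpy as np

def adaptation_index(beta_nonadapted, beta_adapted):
    """Positive values indicate a reduced response to the adapted stimulus."""
    return (beta_nonadapted - beta_adapted) / (beta_nonadapted + beta_adapted)

# Mean GLM betas per ROI (arbitrary units) for probes at the retinotopic
# vs. the spatiotopic position: (retino_non, retino_ad, spatio_non, spatio_ad).
rois = {"V3": (1.20, 0.95, 1.18, 1.02),
        "V4": (1.35, 1.00, 1.30, 1.05)}

for roi, (rn, ra, sn, sa) in rois.items():
    print(roi,
          f"retinotopic AI = {adaptation_index(rn, ra):.2f},",
          f"spatiotopic AI = {adaptation_index(sn, sa):.2f}")
```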
Eckart Zimmermann; Ralph Weidner; Gereon R Fink: Spatiotopic updating of visual feature information. Journal Article. Journal of Vision, 17(12), pp. 6, 2017. doi: 10.1167/17.12.6.
Saccades shift the retina with high-speed motion. In order to compensate for the sudden displacement, the visuomotor system needs to combine saccade-related information and visual metrics. Many neurons in oculomotor but also in visual areas shift their receptive field shortly before the execution of a saccade (Duhamel, Colby, & Goldberg, 1992; Nakamura & Colby, 2002). These shifts supposedly enable the binding of information from before and after the saccade. It is a matter of current debate whether these shifts are merely location based (i.e., involve remapping of abstract spatial coordinates) or also comprise information about visual features. We have recently presented fMRI evidence for a feature-based remapping mechanism in visual areas V3, V4, and VO (Zimmermann, Weidner, Abdollahi, & Fink, 2016). In particular, we found fMRI adaptation in cortical regions representing a stimulus' retinotopic as well as its spatiotopic position. Here, we asked whether spatiotopic adaptation exists independently from retinotopic adaptation and which type of information is behaviorally more relevant after saccade execution. We first adapted at the saccade target location only and found a spatiotopic tilt aftereffect. Then, we simultaneously adapted both the fixation and the saccade target location but with opposite tilt orientations. As a result, adaptation from the fixation location was carried retinotopically to the saccade target position. The opposite tilt orientation at the retinotopic location altered the effects induced by spatiotopic adaptation. More precisely, it cancelled out spatiotopic adaptation at the saccade target location. We conclude that retinotopic and spatiotopic visual adaptation are independent effects.
Eckart Zimmermann; Concetta M Morrone; P Binda: Perception during double-step saccades. Journal Article. Scientific Reports, 8, pp. 320, 2018. doi: 10.1038/s41598-017-18554-w.
How the visual system achieves perceptual stability across saccadic eye movements is a long-standing question in neuroscience. It has been proposed that an efference copy informs vision about upcoming saccades, and this might lead to shifting spatial coordinates and suppressing image motion. Here we ask whether these two aspects of visual stability are interdependent or may be dissociated under special conditions. We study a memory-guided double-step saccade task, where two saccades are executed in quick succession. Previous studies have led to the hypothesis that in this paradigm the two saccades are planned in parallel, with a single efference copy signal generated at the start of the double-step sequence, i.e. before the first saccade. In line with this hypothesis, we find that visual stability is impaired during the second saccade, which is consistent with (accurate) efference copy information being unavailable during the second saccade. However, we find that saccadic suppression is normal during the second saccade. Thus, the second saccade of a double-step sequence instantiates a dissociation between visual stability and saccadic suppression: stability is impaired even though suppression is strong.
Eckart Zimmermann: Saccade suppression depends on context. Journal Article. eLife, 9, pp. 1–16, 2020. doi: 10.7554/eLife.49700.
Although our eyes are in constant movement, we remain unaware of the high-speed stimulation produced by the retinal displacement. Vision is drastically reduced at the time of saccades. Here, I investigated whether the reduction of the unwanted disturbance could be established through a saccade-contingent habituation to intra-saccadic displacements. In more than 100 context trials, participants were exposed either to an intra-saccadic or to a post-saccadic disturbance or to no disturbance at all. After induction of a specific context, I measured peri-saccadic suppression. Displacement discrimination thresholds of observers were high after participants were exposed to an intra-saccadic disturbance. However, after exposure to a post-saccadic disturbance or a context without any intra-saccadic stimulation, displacement discrimination improved such that observers were able to see shifts as during fixation. Saccade-contingent habituation might explain why we do not perceive trans-saccadic retinal stimulation during saccades.
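Displacement-discrimination thresholds of the kind reported in this entry are commonly estimated by fitting a cumulative Gaussian to the proportion of "shifted right" responses across displacement sizes. A minimal sketch with simulated data follows; the threshold definition and all numbers are assumptions, not the study's dataset.

```python
# Sketch: estimating a displacement-discrimination threshold by fitting a
# cumulative Gaussian psychometric function. Simulated proportions below.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Probability of reporting "shifted right" for displacement x (deg).
    return norm.cdf(x, loc=mu, scale=sigma)

displacements = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])  # deg
p_right = np.array([0.03, 0.10, 0.25, 0.50, 0.75, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, displacements, p_right, p0=[0.0, 1.0])
# One common threshold definition: the displacement at 84% "right" responses,
# i.e., one sigma of the fitted cumulative Gaussian.
print(f"bias mu = {mu:.2f} deg, threshold (sigma) = {sigma:.2f} deg")
```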
Eckart Zimmermann; Marta Ghio; Giulio Pergola; Benno Koch; Michael Schwarz; Christian Bellebaum: Separate and overlapping functional roles for efference copies in the human thalamus. Journal Article. Neuropsychologia, 147, pp. 1–9, 2020. doi: 10.1016/j.neuropsychologia.2020.107558.
How the perception of space is generated from the multiple maps in the brain is still an unsolved mystery in neuroscience. A neural pathway ascending from the superior colliculus through the medio-dorsal (MD) nucleus of the thalamus to the frontal eye field has been identified in monkeys that conveys efference copy information about the metrics of upcoming eye movements. Information sent through this pathway stabilizes vision across saccades. We investigated whether this motor plan information might also shape spatial perception even when no saccades are performed. We studied patients with medial or lateral thalamic lesions (likely involving either the MD or the ventrolateral (VL) nuclei). Patients performed a double-step task testing motor updating, a trans-saccadic localization task testing visual updating, and a localization task during fixation testing a general role of motor signals for visual space in the absence of eye movements. Single patients with medial or lateral thalamic lesions showed deficits in the double-step task, reflecting insufficient transfer of efference copy. However, only a patient with a medial lesion showed impaired performance in the trans-saccadic localization task, suggesting that different types of efference copies contribute to motor and visual updating. During fixation, the MD patient localized stationary stimuli more accurately than healthy controls, suggesting that patients compensate for the deficit in visual prediction of saccades, induced by the thalamic lesion, by relying on stationary visual references. We conclude that partially separable efference copy signals contribute to motor and visual stability, alongside purely visual signals that are equally effective in supporting trans-saccadic perception.
Josua Zimmermann; Dominik R Bach: Impact of a reminder/extinction procedure on threat-conditioned pupil size and skin conductance responses. Journal Article. Learning & Memory, 27(4), pp. 164–172, 2020. doi: 10.1101/lm.050211.119.
A reminder can render consolidated memory labile and susceptible to amnesic agents during a reconsolidation window. For the case of threat memory (also termed fear memory), it has been suggested that extinction training during this reconsolidation window has the same disruptive impact. This procedure could provide a powerful therapeutic principle for the treatment of unwanted aversive memories. However, human research has yielded contradictory results. Notably, all published positive replications quantified threat memory by conditioned skin conductance responses (SCR). Yet, other studies measuring SCR and/or fear-potentiated startle failed to observe an effect of a reminder/extinction procedure on the return of fear. Here we sought to shed light on this discrepancy by using a different autonomic response, namely conditioned pupil dilation, in addition to SCR, in a replication of the original human study. N = 71 humans underwent a three-day threat conditioning, reminder/extinction, and reinstatement procedure with two CS+, one of which was reminded. Participants successfully learned the threat association on day 1, extinguished conditioned responding on day 2, and showed reinstatement on day 3. However, there was no difference in conditioned responding between the reminded and the nonreminded CS, neither in pupil size nor SCR. Thus, we found no evidence that a reminder trial before extinction prevents the return of threat-conditioned responding.
Artyom Zinchenko; Markus Conci; Johannes Hauser; Hermann J Müller; Thomas Geyer: Distributed attention beats the down-side of statistical context learning in visual search. Journal Article. Journal of Vision, 20(7), pp. 1–14, 2020. doi: 10.1167/JOV.20.7.4.
Learnt target-distractor contexts guide visual search. However, updating a previously acquired target-distractor memory subsequent to a change of the target location has been found to be rather inefficient and slow. These results show that the imperviousness of contextual memory to incorporating relocated targets is particularly pronounced when observers adopt a narrow focus of attention to perform a rather difficult form-conjunction search task. By contrast, when they adopt a broad attentional distribution, context-based memories can be updated more readily because this mode promotes the acquisition of more global contextual representations that continue to provide effective cues even after target relocation.
Artyom Zinchenko; Markus Conci; Thomas Töllner; Hermann J Müller; Thomas Geyer: Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal. Journal Article. Psychological Science, 31(12), pp. 1–13, 2020. doi: 10.1177/0956797620954815.
Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation.
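Lateralized components such as the N1pc are conventionally derived as contralateral-minus-ipsilateral voltage at posterior electrode pairs, defined relative to the target hemifield. The following synthetic sketch illustrates that derivation; electrode names, the time window, and all data are assumptions, not the study's recordings.

```python
# Sketch: deriving a lateralized ERP amplitude (e.g., the N1pc) as the
# contralateral-minus-ipsilateral difference wave. Synthetic data below.
import numpy as np

fs = 500                         # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)  # time relative to display onset (s)

# Simulated PO7/PO8 averages (µV) for left-target trials: with a left-field
# target, PO8 (right hemisphere) is contralateral, PO7 ipsilateral.
rng = np.random.default_rng(1)
po8 = rng.normal(0, 0.2, t.size) - 1.0 * np.exp(-((t - 0.13) / 0.03) ** 2)
po7 = rng.normal(0, 0.2, t.size)

contra_minus_ipsi = po8 - po7
window = (t >= 0.08) & (t <= 0.18)  # 80-180 ms, the N1pc range reported above
print(f"mean N1pc amplitude: {contra_minus_ipsi[window].mean():.2f} µV")
# A full analysis would average the contra-ipsi difference across left- and
# right-target trials to cancel hemisphere-specific voltage offsets.
```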
Marc Zirnsak; R G K Gerhards; Roozbeh Kiani; Markus Lappe; Fred H Hamker: Anticipatory saccade target processing and the presaccadic transfer of visual features. Journal Article. Journal of Neuroscience, 31(49), pp. 17887–17891, 2011. doi: 10.1523/JNEUROSCI.2465-11.2011.
As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007).
Hamed Zivari Adab; Rufin Vogels: Practicing coarse orientation discrimination improves orientation signals in macaque cortical area V4. Journal Article. Current Biology, 21(19), pp. 1661–1666, 2011. doi: 10.1016/j.cub.2011.08.037.
Practice improves the performance in visual tasks, but mechanisms underlying this adult brain plasticity are unclear. Single-cell studies reported no [1], weak [2], or moderate [3, 4] perceptual learning-related changes in macaque visual areas V1 and V4, whereas none were found in middle temporal (MT) [5]. These conflicting results and modeling of human (e.g., [6, 7]) and monkey data [8] suggested that changes in the readout of visual cortical signals underlie perceptual learning, rather than changes in these signals. In the V4 learning studies, monkeys discriminated small differences in orientation, whereas in the MT study, the animals discriminated opponent motion directions. Analogous to the latter study, we trained monkeys to discriminate static orthogonal orientations masked by noise. V4 neurons showed robust increases in their capacity to discriminate the trained orientations during the course of the training. This effect was observed during discrimination and passive fixation but specifically for the trained orientations. The improvement in neural discrimination was due to decreased response variability and an increase of the difference between the mean responses for the two trained orientations. These findings demonstrate that perceptual learning in a coarse discrimination task indeed can change the response properties of a cortical sensory area.
D Zoccolan: Multiple object response normalization in monkey inferotemporal cortex. Journal Article. Journal of Neuroscience, 25(36), pp. 8150–8164, 2005. doi: 10.1523/JNEUROSCI.2058-05.2005.
The highest stages of the visual ventral pathway are commonly assumed to provide robust representation of object identity by disregarding confounding factors such as object position, size, illumination, and the presence of other objects (clutter). However, whereas neuronal responses in monkey inferotemporal cortex (IT) can show robust tolerance to position and size changes, previous work shows that responses to preferred objects are usually reduced by the presence of nonpreferred objects. More broadly, we do not yet understand multiple object representation in IT. In this study, we systematically examined IT responses to pairs and triplets of objects in three passively viewing monkeys across a broad range of object effectiveness. We found that, at least under these limited clutter conditions, a large fraction of the response of each IT neuron to multiple objects is reliably predicted as the average of its responses to the constituent objects in isolation. That is, multiple object responses depend primarily on the relative effectiveness of the constituent objects, regardless of object identity. This average effect becomes virtually perfect when populations of IT neurons are pooled. Furthermore, the average effect cannot simply be explained by attentional shifts but behaves as a primarily feedforward response property. Together, our observations are most consistent with mechanistic models in which IT neuronal outputs are normalized by summed synaptic drive into IT or spiking activity within IT and suggest that normalization mechanisms previously revealed at earlier visual areas are operating throughout the ventral visual stream.
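The averaging result reported in this entry has a compact form: the response to a pair is predicted by the mean of the isolated responses, which is exactly what a divisive-normalization model yields when the summed drive is divided by the number of stimuli. A toy illustration with hypothetical firing rates:

```python
# Sketch of the averaging rule reported for IT responses to object pairs:
# R(A,B) ~ (R(A) + R(B)) / 2. Firing rates below are made up for illustration.
r_isolated = {"A": 40.0, "B": 10.0, "C": 25.0}  # spikes/s to single objects

def predicted_pair_response(obj1, obj2):
    """Clutter response predicted from the isolated responses alone."""
    return (r_isolated[obj1] + r_isolated[obj2]) / 2.0

for pair in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(pair, f"predicted = {predicted_pair_response(*pair):.1f} spikes/s")
```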
Wieske van Zoest; Mieke Donk; Jan Theeuwes: The role of stimulus-driven and goal-driven control in saccadic visual selection. Journal Article. Journal of Experimental Psychology: Human Perception and Performance, 30(4), pp. 746–759, 2004. doi: 10.1037/0096-1523.30.4.746.
Four experiments were conducted to investigate the role of stimulus-driven and goal-driven control in saccadic eye movements. Participants were required to make a speeded saccade toward a predefined target presented concurrently with multiple nontargets and possibly 1 distractor. Target and distractor were either equally salient (Experiments 1 and 2) or not (Experiments 3 and 4). The results uniformly demonstrated that fast eye movements were completely stimulus driven, whereas slower eye movements were goal driven. These results are in line with neither a bottom-up account nor a top-down notion of visual selection. Instead, they indicate that visual selection is the outcome of 2 independent processes, one stimulus driven and the other goal driven, operating in different time windows.
Wieske van Zoest; Mieke Donk: Saccadic target selection as a function of time. Journal Article. Spatial Vision, 19(1), pp. 61–76, 2006.
Recent evidence indicates that stimulus-driven and goal-directed control of visual selection operate independently and in different time windows (van Zoest et al., 2004). The present study further investigates how eye movements are affected by stimulus-driven and goal-directed control. Observers were presented with search displays consisting of one target, multiple non-targets and one distractor element. The task of observers was to make a fast eye movement to a target immediately following the offset of a central fixation point, an event that either co-occurred with or soon followed the presentation of the search display. Distractor saliency and target-distractor similarity were independently manipulated. The results demonstrated that the effect of distractor saliency was transient and only present for the fastest eye movements, whereas the effect of target-distractor similarity was sustained and present in all but the fastest eye movements. The results support an independent timing account of visual selection.
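The time-course claims in this and the preceding entry rest on a latency-binning analysis: sort saccades by latency and track, per bin, where each saccade lands. A simulated sketch of that logic follows; the logistic transition and all numbers are assumptions, not the studies' data.

```python
# Sketch: latency-binning analysis behind the time-course claims. Fast
# saccades are simulated to go to the salient item, slow ones to the target.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
latency = rng.gamma(shape=9, scale=22, size=n)         # ms, roughly saccade-like
p_to_target = 1 / (1 + np.exp(-(latency - 180) / 25))  # assumed logistic shift
to_target = rng.random(n) < p_to_target                # True = landed on target

order = np.argsort(latency)
for i, idx in enumerate(np.array_split(order, 5)):     # 5 latency quintiles
    print(f"bin {i + 1}: mean latency {latency[idx].mean():5.0f} ms, "
          f"proportion to target {to_target[idx].mean():.2f}")
```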
Wieske van Zoest; Mieke Donk Awareness of the saccade goal in oculomotor selection: Your eyes go before you know Journal Article Consciousness and Cognition, 19 (4), pp. 861–871, 2010. @article{Zoest2010, title = {Awareness of the saccade goal in oculomotor selection: Your eyes go before you know}, author = {Wieske van Zoest and Mieke Donk}, doi = {10.1016/j.concog.2010.04.001}, year = {2010}, date = {2010-01-01}, journal = {Consciousness and Cognition}, volume = {19}, number = {4}, pages = {861--871}, publisher = {Elsevier Inc.}, abstract = {The aim of the present study was to investigate how saccadic selection relates to people's awareness of the saliency and identity of a saccade goal. Observers were instructed to make an eye movement to either the most salient line segment (Experiment 1) or the only right-tilted element (Experiment 2) in a visual search display. The display was masked contingent on the first eye movement and after each trial observers indicated whether or not they had correctly selected the target. Whereas people's awareness concerning the saliency of the saccade goal was generally low, their awareness concerning the identity was high. Observers' awareness of the saccade goal was not related to saccadic performance. Whereas saccadic selection consistently varied as a function of saccade latency, people's awareness concerning the saliency or identity of the saccade goal did not. The results suggest that saccadic selection is primarily driven by subconscious processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of the present study was to investigate how saccadic selection relates to people's awareness of the saliency and identity of a saccade goal. Observers were instructed to make an eye movement to either the most salient line segment (Experiment 1) or the only right-tilted element (Experiment 2) in a visual search display. The display was masked contingent on the first eye movement and after each trial observers indicated whether or not they had correctly selected the target. Whereas people's awareness concerning the saliency of the saccade goal was generally low, their awareness concerning the identity was high. Observers' awareness of the saccade goal was not related to saccadic performance. Whereas saccadic selection consistently varied as a function of saccade latency, people's awareness concerning the saliency or identity of the saccade goal did not. The results suggest that saccadic selection is primarily driven by subconscious processes. |
Wieske van Zoest; Amelia R Hunt Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time Journal Article Vision Research, 51 (1), pp. 111–119, 2011. @article{Zoest2011, title = {Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time}, author = {Wieske van Zoest and Amelia R Hunt}, doi = {10.1016/j.visres.2010.10.013}, year = {2011}, date = {2011-01-01}, journal = {Vision Research}, volume = {51}, number = {1}, pages = {111--119}, publisher = {Elsevier Ltd}, abstract = {Although there is evidence to suggest visual illusions affect perceptual judgments more than actions, many studies have failed to detect task-dependent dissociations. In two experiments we attempt to resolve the contradiction by exploring the time-course of visual illusion effects on both saccadic eye movements and perceptual judgments, using the Judd illusion. The results showed that, regardless of whether a saccadic response or a perceptual judgment was made, the illusory bias was larger when responses were based on less information, that is, when saccadic latencies were short, or display duration was brief. The time-course of the effect was similar for both the saccadic responses and perceptual judgments, suggesting that both modes may be driven by a shared visual representation. Changes in the strength of the illusion over time also highlight the importance of controlling for the latency of different response systems when evaluating possible dissociations between them.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Although there is evidence to suggest visual illusions affect perceptual judgments more than actions, many studies have failed to detect task-dependent dissociations. In two experiments we attempt to resolve the contradiction by exploring the time-course of visual illusion effects on both saccadic eye movements and perceptual judgments, using the Judd illusion. The results showed that, regardless of whether a saccadic response or a perceptual judgment was made, the illusory bias was larger when responses were based on less information, that is, when saccadic latencies were short, or display duration was brief. The time-course of the effect was similar for both the saccadic responses and perceptual judgments, suggesting that both modes may be driven by a shared visual representation. Changes in the strength of the illusion over time also highlight the importance of controlling for the latency of different response systems when evaluating possible dissociations between them. |
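The latency-binning analysis in the van Zoest and Hunt (2011) entry above can be sketched compactly: split trials into saccade-latency quantile bins and average the illusory bias per bin, so the time-course of the illusion can be read off the bin means. The Python sketch below is a minimal illustration of that analysis; the function name, the bias measure, and the choice of five bins are assumptions, not the authors' code.

    import numpy as np

    def illusion_bias_by_latency(latencies, biases, n_bins=5):
        """Average the illusory bias within saccade-latency quantile bins.

        latencies : per-trial saccadic latencies (ms); biases : per-trial
        illusion effects (illustrative units). Returns mean latency and
        mean bias per bin, for plotting the effect's time-course.
        """
        latencies = np.asarray(latencies, float)
        biases = np.asarray(biases, float)
        # Quantile edges give roughly equal trial counts per bin.
        edges = np.quantile(latencies, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, latencies, side="right") - 1,
                      0, n_bins - 1)
        mean_lat = np.array([latencies[idx == b].mean() for b in range(n_bins)])
        mean_bias = np.array([biases[idx == b].mean() for b in range(n_bins)])
        return mean_lat, mean_bias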
Wieske van Zoest; Mieke Donk; Stefan van der Stigchel Stimulus-salience and the time-course of saccade trajectory deviations Journal Article Journal of Vision, 12 (8), pp. 1–16, 2012. @article{Zoest2012, title = {Stimulus-salience and the time-course of saccade trajectory deviations}, author = {Wieske van Zoest and Mieke Donk and Stefan van der Stigchel}, doi = {10.1167/12.8.16}, year = {2012}, date = {2012-01-01}, journal = {Journal of Vision}, volume = {12}, number = {8}, pages = {1--16}, abstract = {The deviation of a saccade trajectory is a measure of the oculomotor competition evoked by a distractor. The aim of the present study was to investigate the impact of stimulus-salience on the time-course of saccade trajectory deviations to get a better insight into how stimulus-salience influences oculomotor competition over time. Two experiments were performed in which participants were required to make a vertical saccade to a target presented in an array of nontarget line elements and one additional distractor. The distractor varied in salience, where salience was defined by an orientation contrast relative to the surrounding nontargets. In Experiment 2, target-distractor similarity was additionally manipulated. In both Experiments 1 and 2, the results revealed that the eyes deviated towards the irrelevant distractor and did so more when the distractor was salient compared to when it was not salient. Critically, salience influenced performance only when people were fast to elicit an eye movement and had no effect when saccade latencies were long. Target-distractor similarity did not influence this pattern. These results show that the impact of salience in the visual system is transient.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The deviation of a saccade trajectory is a measure of the oculomotor competition evoked by a distractor. The aim of the present study was to investigate the impact of stimulus-salience on the time-course of saccade trajectory deviations to get a better insight into how stimulus-salience influences oculomotor competition over time. Two experiments were performed in which participants were required to make a vertical saccade to a target presented in an array of nontarget line elements and one additional distractor. The distractor varied in salience, where salience was defined by an orientation contrast relative to the surrounding nontargets. In Experiment 2, target-distractor similarity was additionally manipulated. In both Experiments 1 and 2, the results revealed that the eyes deviated towards the irrelevant distractor and did so more when the distractor was salient compared to when it was not salient. Critically, salience influenced performance only when people were fast to elicit an eye movement and had no effect when saccade latencies were long. Target-distractor similarity did not influence this pattern. These results show that the impact of salience in the visual system is transient. |
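Saccade trajectory deviation, the dependent measure in the van Zoest, Donk, and van der Stigchel (2012) entry above, is commonly quantified as the maximum perpendicular deviation of the gaze samples from the straight line joining saccade onset and offset, signed so that deviations toward one side are positive. The sketch below implements this common metric; it is a plausible reconstruction, not necessarily the paper's exact computation.

    import numpy as np

    def max_trajectory_deviation(xy):
        """Signed maximum perpendicular deviation of one saccade's samples
        from the start-to-end straight line (same units as xy).

        xy : (n_samples, 2) array of gaze positions; assumes the start
        and end points differ. The sign follows the 2-D cross product
        (positive = counterclockwise side of the line).
        """
        xy = np.asarray(xy, float)
        start, end = xy[0], xy[-1]
        line = end - start
        norm = np.hypot(line[0], line[1])
        # Perpendicular distance of every sample via the 2-D cross product.
        d = (line[0] * (xy[:, 1] - start[1])
             - line[1] * (xy[:, 0] - start[0])) / norm
        return d[np.argmax(np.abs(d))]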
Wieske van Zoest; Dirk Kerzel The effects of saliency on manual reach trajectories and reach target selection Journal Article Vision Research, 113 , pp. 179–187, 2015. @article{Zoest2015, title = {The effects of saliency on manual reach trajectories and reach target selection}, author = {Wieske van Zoest and Dirk Kerzel}, doi = {10.1016/j.visres.2014.11.015}, year = {2015}, date = {2015-01-01}, journal = {Vision Research}, volume = {113}, pages = {179--187}, publisher = {Elsevier Ltd}, abstract = {Reaching trajectories curve toward salient distractors, reflecting the competing activation of reach plans toward target and distractor stimuli. We investigated whether the relative saliency of target and distractor influenced the curvature of the movement and the selection of the final endpoint of the reach. Participants were asked to reach a bar tilted to the right in a context of gray vertical bars. A bar tilted to the left served as distractor. Relative stimulus saliency was varied via color: either the distractor was red and the target was gray, or vice versa. Throughout, we observed that reach trajectories deviated toward the distractor. Surprisingly, relative saliency had no effect on the curvature of reach trajectories. Moreover, when we increased time pressure in separate experiments and analyzed the curvature as a function of reaction time, no influence of relative stimulus saliency was found, not even for the fastest reaction times. If anything, curvature decreased with strong time pressure. In contrast, reach target selection under strong time pressure was influenced by relative saliency: reaches with short reaction times were likely to go to the red distractor. The time course of reach target selection was comparable to saccadic target selection. Implications for the neural basis of trajectory deviations and target selection in manual and eye movements are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Reaching trajectories curve toward salient distractors, reflecting the competing activation of reach plans toward target and distractor stimuli. We investigated whether the relative saliency of target and distractor influenced the curvature of the movement and the selection of the final endpoint of the reach. Participants were asked to reach a bar tilted to the right in a context of gray vertical bars. A bar tilted to the left served as distractor. Relative stimulus saliency was varied via color: either the distractor was red and the target was gray, or vice versa. Throughout, we observed that reach trajectories deviated toward the distractor. Surprisingly, relative saliency had no effect on the curvature of reach trajectories. Moreover, when we increased time pressure in separate experiments and analyzed the curvature as a function of reaction time, no influence of relative stimulus saliency was found, not even for the fastest reaction times. If anything, curvature decreased with strong time pressure. In contrast, reach target selection under strong time pressure was influenced by relative saliency: reaches with short reaction times were likely to go to the red distractor. The time course of reach target selection was comparable to saccadic target selection. Implications for the neural basis of trajectory deviations and target selection in manual and eye movements are discussed. |
Wieske van Zoest; Benedetta Heimler; Francesco Pavani The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories Journal Article Experimental Brain Research, 235 , pp. 181–191, 2017. @article{Zoest2017, title = {The oculomotor salience of flicker, apparent motion and continuous motion in saccade trajectories}, author = {Wieske van Zoest and Benedetta Heimler and Francesco Pavani}, doi = {10.1007/s00221-016-4779-1}, year = {2017}, date = {2017-01-01}, journal = {Experimental Brain Research}, volume = {235}, pages = {181--191}, publisher = {Springer Berlin Heidelberg}, abstract = {The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small, and these specific effects developed over time such that there was a clear differentiation in saccadic deviation based on apparent motion for long-latency, but not short-latency, saccades. The present results suggest that the influence of apparent motion stimuli on performance is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuously moving stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small, and these specific effects developed over time such that there was a clear differentiation in saccadic deviation based on apparent motion for long-latency, but not short-latency, saccades. The present results suggest that the influence of apparent motion stimuli on performance is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuously moving stimuli. |
Nahid Zokaei; Alexander G Board; Sanjay G Manohar; Anna C Nobre Modulation of the pupillary response by the content of visual working memory Journal Article Proceedings of the National Academy of Sciences, 116 (45), pp. 22802–22810, 2019. @article{Zokaei2019, title = {Modulation of the pupillary response by the content of visual working memory}, author = {Nahid Zokaei and Alexander G Board and Sanjay G Manohar and Anna C Nobre}, doi = {10.1073/pnas.1909959116}, year = {2019}, date = {2019-10-01}, journal = {Proceedings of the National Academy of Sciences}, volume = {116}, number = {45}, pages = {22802--22810}, abstract = {Studies of selective attention during perception have revealed modulation of the pupillary response according to the brightness of task-relevant (attended) vs. -irrelevant (unattended) stimuli within a visual display. As a strong test of top-down modulation of the pupil response by selective attention, we asked whether changes in pupil diameter follow internal shifts of attention to memoranda of visual stimuli of different brightness maintained in working memory, in the absence of any visual stimulation. Across 3 studies, we reveal dilation of the pupil when participants orient attention to the memorandum of a dark grating relative to that of a bright grating. The effect occurs even when the attention-orienting cue is independent of stimulus brightness, and even when stimulus brightness is merely incidental and not required for the working-memory task of judging stimulus orientation. Furthermore, relative dilation and constriction of the pupil occurred dynamically and followed the changing temporal expectation that 1 or the other stimulus would be probed across the retention delay. The results provide surprising and consistent evidence that pupil responses are under top-down control by cognitive factors, even when there is no direct adaptive gain for such modulation, since no visual stimuli were presented or anticipated. The results also strengthen the view of sensory recruitment during working memory, suggesting even activation of sensory receptors. The thought-provoking corollary to our findings is that the pupils provide a reliable measure of what is in the focus of mind, thus giving a different meaning to old proverbs about the eyes being a window to the mind.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Studies of selective attention during perception have revealed modulation of the pupillary response according to the brightness of task-relevant (attended) vs. -irrelevant (unattended) stimuli within a visual display. As a strong test of top-down modulation of the pupil response by selective attention, we asked whether changes in pupil diameter follow internal shifts of attention to memoranda of visual stimuli of different brightness maintained in working memory, in the absence of any visual stimulation. Across 3 studies, we reveal dilation of the pupil when participants orient attention to the memorandum of a dark grating relative to that of a bright grating. The effect occurs even when the attention-orienting cue is independent of stimulus brightness, and even when stimulus brightness is merely incidental and not required for the working-memory task of judging stimulus orientation. Furthermore, relative dilation and constriction of the pupil occurred dynamically and followed the changing temporal expectation that 1 or the other stimulus would be probed across the retention delay. 
The results provide surprising and consistent evidence that pupil responses are under top-down control by cognitive factors, even when there is no direct adaptive gain for such modulation, since no visual stimuli were presented or anticipated. The results also strengthen the view of sensory recruitment during working memory, suggesting even activation of sensory receptors. The thought-provoking corollary to our findings is that the pupils provide a reliable measure of what is in the focus of mind, thus giving a different meaning to old proverbs about the eyes being a window to the mind. |
Joshua Zonca; Giorgio Coricelli; Luca Polonio Gaze data reveal individual differences in relational representation processes Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (2), pp. 257–279, 2020. @article{Zonca2020, title = {Gaze data reveal individual differences in relational representation processes}, author = {Joshua Zonca and Giorgio Coricelli and Luca Polonio}, doi = {10.1037/xlm0000723}, year = {2020}, date = {2020-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {46}, number = {2}, pages = {257--279}, publisher = {American Psychological Association Inc.}, abstract = {In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In our everyday life, we often need to anticipate the potential occurrence of events and their consequences. In this context, the way we represent contingencies can determine our ability to adapt to the environment. However, it is not clear how agents encode and organize available knowledge about the future to react to possible states of the world. In the present study, we investigated the process of contingency representation with three eye-tracking experiments. In Experiment 1, we introduced a novel relational-inference task in which participants had to learn and represent conditional rules regulating the occurrence of interdependent future events. A cluster analysis on early gaze data revealed the existence of 2 distinct types of encoders. A group of (sophisticated) participants built exhaustive contingency models that explicitly linked states with each of their potential consequences. Another group of (unsophisticated) participants simply learned binary conditional rules without exploring the underlying relational complexity. 
Analyses of individual cognitive measures revealed that cognitive reflection is associated with the emergence of either sophisticated or unsophisticated representation behavior. In Experiment 2, we observed that unsophisticated participants switched toward the sophisticated strategy after having received information about its existence, suggesting that representation behavior was modulated by strategy generation mechanisms. In Experiment 3, we showed that the heterogeneity in representation strategy emerges also in conditional reasoning with verbal sequences, indicating the existence of a general disposition in building either sophisticated or unsophisticated models of contingencies. |
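The Zonca, Coricelli, and Polonio (2020) entry above turns on a cluster analysis of early gaze data that separates two types of encoders. As a hedged illustration of that style of analysis, the sketch below standardizes two simulated per-participant gaze features and partitions participants with k-means; the feature names, the simulated values, and the use of k-means with two clusters are illustrative assumptions, not the paper's pipeline.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Simulated per-participant gaze features (illustrative): proportion of
    # gaze transitions linking interdependent rules, and mean dwell time (s)
    # on rule-relevant cells during early trials.
    group_a = rng.normal([0.6, 1.2], 0.1, size=(15, 2))
    group_b = rng.normal([0.2, 0.6], 0.1, size=(15, 2))
    features = np.vstack([group_a, group_b])

    # Standardize, then cluster participants into two encoder types.
    z = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
    print(np.bincount(labels))  # cluster sizes, one per encoder type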
Regine Zopf; Marina Butko; Alexandra Woolgar; Mark A Williams; Anina N Rich Representing the location of manipulable objects in shape-selective occipitotemporal cortex: Beyond retinotopic reference frames? Journal Article Cortex, 106 , pp. 132–150, 2018. @article{Zopf2018, title = {Representing the location of manipulable objects in shape-selective occipitotemporal cortex: Beyond retinotopic reference frames?}, author = {Regine Zopf and Marina Butko and Alexandra Woolgar and Mark A Williams and Anina N Rich}, doi = {10.1016/j.cortex.2018.05.009}, year = {2018}, date = {2018-01-01}, journal = {Cortex}, volume = {106}, pages = {132--150}, abstract = {When interacting with objects, we have to represent their location relative to our bodies. To facilitate bodily reactions, location may be encoded in the brain not just with respect to the retina (retinotopic reference frame), but also in relation to the head, trunk or arm (collectively spatiotopic reference frames). While spatiotopic reference frames for location encoding can be found in brain areas for action planning, such as parietal areas, there is debate about the existence of spatiotopic reference frames in higher-level occipitotemporal visual areas. In an extensive multi-voxel pattern analysis (MVPA) fMRI study using faces, headless bodies and scenes stimuli, Golomb and Kanwisher (2012) did not find evidence for spatiotopic reference frames in shape-selective occipitotemporal cortex. This finding is important for theories of how stimulus location is encoded in the brain. It is possible, however, that their failure to find spatiotopic reference frames is related to their stimuli: we typically do not manipulate faces, headless bodies or scenes. It is plausible that we only represent body-centred location when viewing objects that are typically manipulated. Here, we tested for object location encoding in shape-selective occipitotemporal cortex using manipulable object stimuli (balls and cups) in a MVPA fMRI study. We employed Bayesian analyses to determine sample size and evaluate the sensitivity of our data to test the hypothesis that location can be encoded in a spatiotopic reference frame in shape-selective occipitotemporal cortex over the null hypothesis of no spatiotopic location encoding. We found strong evidence for retinotopic location encoding consistent with previous findings that retinotopic reference frames are common neural representations of object location. In contrast, when testing for spatiotopic encoding, we found evidence that object location information for small manipulable objects is not decodable in relation to the body in shape-selective occipitotemporal cortex. Post-hoc exploratory analyses suggested that spatiotopic aspects might modulate retinotopic location encoding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When interacting with objects, we have to represent their location relative to our bodies. To facilitate bodily reactions, location may be encoded in the brain not just with respect to the retina (retinotopic reference frame), but also in relation to the head, trunk or arm (collectively spatiotopic reference frames). While spatiotopic reference frames for location encoding can be found in brain areas for action planning, such as parietal areas, there is debate about the existence of spatiotopic reference frames in higher-level occipitotemporal visual areas. 
In an extensive multi-voxel pattern analysis (MVPA) fMRI study using faces, headless bodies and scenes stimuli, Golomb and Kanwisher (2012) did not find evidence for spatiotopic reference frames in shape-selective occipitotemporal cortex. This finding is important for theories of how stimulus location is encoded in the brain. It is possible, however, that their failure to find spatiotopic reference frames is related to their stimuli: we typically do not manipulate faces, headless bodies or scenes. It is plausible that we only represent body-centred location when viewing objects that are typically manipulated. Here, we tested for object location encoding in shape-selective occipitotemporal cortex using manipulable object stimuli (balls and cups) in a MVPA fMRI study. We employed Bayesian analyses to determine sample size and evaluate the sensitivity of our data to test the hypothesis that location can be encoded in a spatiotopic reference frame in shape-selective occipitotemporal cortex over the null hypothesis of no spatiotopic location encoding. We found strong evidence for retinotopic location encoding consistent with previous findings that retinotopic reference frames are common neural representations of object location. In contrast, when testing for spatiotopic encoding, we found evidence that object location information for small manipulable objects is not decodable in relation to the body in shape-selective occipitotemporal cortex. Post-hoc exploratory analyses suggested that spatiotopic aspects might modulate retinotopic location encoding. |
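The retinotopic-versus-spatiotopic contrast in the Zopf et al. (2018) entry above is typically operationalized as cross-fixation decoding: train a pattern classifier at one fixation position and test it at another, asking whether location generalizes in retinal or in body-centred coordinates. The toy simulation below illustrates that logic on synthetic voxel patterns containing only a retinotopic signal; it is a schematic sketch, not the paper's Bayesian MVPA pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 200, 50
    retino = rng.integers(0, 2, n_trials)    # object left/right of fixation
    fixation = rng.integers(0, 2, n_trials)  # which of two fixation positions
    spatio = retino ^ fixation               # object left/right of body midline
    X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    X[:, :10] += 0.8 * retino[:, None]       # inject a purely retinotopic signal

    # Train at one fixation position, test at the other.
    train, test = fixation == 0, fixation == 1
    ret = LogisticRegression(max_iter=1000).fit(X[train], retino[train])
    spa = LogisticRegression(max_iter=1000).fit(X[train], spatio[train])
    print("retinotopic decoding:", ret.score(X[test], retino[test]))  # well above chance
    print("spatiotopic decoding:", spa.score(X[test], spatio[test]))  # at/below chance: no body-centred code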
Eirini Zormpa; Antje S Meyer; Laurel E Brehm Slow naming of pictures facilitates memory for their names Journal Article Psychonomic Bulletin & Review, 26 , pp. 1675–1682, 2019. @article{Zormpa2019, title = {Slow naming of pictures facilitates memory for their names}, author = {Eirini Zormpa and Antje S Meyer and Laurel E Brehm}, doi = {10.3758/s13423-019-01620-x}, year = {2019}, date = {2019-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {26}, pages = {1675--1682}, publisher = {Psychonomic Bulletin & Review}, abstract = {Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. 
Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language. |
Heng Zou; Hermann J Muller; Zhuanghua Shi Non-spatial sounds regulate eye movements and enhance visual search Journal Article Journal of Vision, 12 (5), pp. 2–2, 2012. @article{Zou2012, title = {Non-spatial sounds regulate eye movements and enhance visual search}, author = {Heng Zou and Hermann J Muller and Zhuanghua Shi}, doi = {10.1167/12.5.2}, year = {2012}, date = {2012-01-01}, journal = {Journal of Vision}, volume = {12}, number = {5}, pages = {2--2}, abstract = {Spatially uninformative sounds can enhance visual search when the sounds are synchronized with color changes of the visual target, a phenomenon referred to as "pip-and-pop" effect (van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008). The present study investigated the relationship of this effect to changes in oculomotor scanning behavior induced by the sounds. The results revealed sound events to increase fixation durations upon their occurrence and to decrease the mean number of saccades. More specifically, spatially uninformative sounds facilitated the orientation of ocular scanning away from already scanned display regions not containing a target (Experiment 1) and enhanced search performance even on target-absent trials (Experiment 2). Facilitation was also observed when the sounds were presented 100 ms prior to the target or at random (Experiment 3). These findings suggest that non-spatial sounds cause a general freezing effect on oculomotor scanning behavior, an effect which in turn benefits visual search performance by temporally and spatially extended information sampling.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Spatially uninformative sounds can enhance visual search when the sounds are synchronized with color changes of the visual target, a phenomenon referred to as "pip-and-pop" effect (van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008). The present study investigated the relationship of this effect to changes in oculomotor scanning behavior induced by the sounds. The results revealed sound events to increase fixation durations upon their occurrence and to decrease the mean number of saccades. More specifically, spatially uninformative sounds facilitated the orientation of ocular scanning away from already scanned display regions not containing a target (Experiment 1) and enhanced search performance even on target-absent trials (Experiment 2). Facilitation was also observed when the sounds were presented 100 ms prior to the target or at random (Experiment 3). These findings suggest that non-spatial sounds cause a general freezing effect on oculomotor scanning behavior, an effect which in turn benefits visual search performance by temporally and spatially extended information sampling. |
Tianlong Zu; John Hutson; Lester C Loschky; Sanjay N Rebello Using eye movements to measure intrinsic, extraneous, and germane load in a multimedia learning environment Journal Article Journal of Educational Psychology, 112 (7), pp. 1338–1352, 2020. @article{Zu2020, title = {Using eye movements to measure intrinsic, extraneous, and germane load in a multimedia learning environment}, author = {Tianlong Zu and John Hutson and Lester C Loschky and Sanjay N Rebello}, doi = {10.1037/edu0000441}, year = {2020}, date = {2020-01-01}, journal = {Journal of Educational Psychology}, volume = {112}, number = {7}, pages = {1338--1352}, abstract = {In a previous study, DeLeeuw and Mayer (2008) found support for the triarchic model of cognitive load (Sweller, Van Merrienboer, & Paas, 1998, 2019) by showing that three different metrics could be used to independently measure 3 hypothesized types of cognitive load: intrinsic, extraneous, and germane. However, 2 of the 3 metrics that the authors used were intrusive in nature because learning had to be stopped momentarily to complete the measures. The current study extends the design of DeLeeuw and Mayer (2008) by investigating whether learners' eye movement behavior can be used to measure the three proposed types of cognitive load without interrupting learning. During a 1-hr experiment, we presented a multimedia lesson explaining the mechanism of electric motors to participants who had low prior knowledge of this topic. First, we replicated the main results of DeLeeuw and Mayer (2008), providing further support for the triarchic structure of cognitive load. Second, we identified eye movement measures that differentiated the three types of cognitive load. These findings were independent of participants' working memory capacity. Together, these results provide further evidence for the triarchic nature of cognitive load (Sweller et al., 1998, 2019), and are a first step toward online measures of cognitive load that could potentially be implemented into computer assisted learning technologies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In a previous study, DeLeeuw and Mayer (2008) found support for the triarchic model of cognitive load (Sweller, Van Merrienboer, & Paas, 1998, 2019) by showing that three different metrics could be used to independently measure 3 hypothesized types of cognitive load: intrinsic, extraneous, and germane. However, 2 of the 3 metrics that the authors used were intrusive in nature because learning had to be stopped momentarily to complete the measures. The current study extends the design of DeLeeuw and Mayer (2008) by investigating whether learners' eye movement behavior can be used to measure the three proposed types of cognitive load without interrupting learning. During a 1-hr experiment, we presented a multimedia lesson explaining the mechanism of electric motors to participants who had low prior knowledge of this topic. First, we replicated the main results of DeLeeuw and Mayer (2008), providing further support for the triarchic structure of cognitive load. Second, we identified eye movement measures that differentiated the three types of cognitive load. These findings were independent of participants' working memory capacity. Together, these results provide further evidence for the triarchic nature of cognitive load (Sweller et al., 1998, 2019), and are a first step toward online measures of cognitive load that could potentially be implemented into computer assisted learning technologies. |
Vladislav I Zubov; Tatiana E Petrova Lexically or grammatically adapted texts: What is easier to process for secondary school children? Journal Article Procedia Computer Science, 176 , pp. 2117–2124, 2020. @article{Zubov2020, title = {Lexically or grammatically adapted texts: What is easier to process for secondary school children?}, author = {Vladislav I Zubov and Tatiana E Petrova}, doi = {10.1016/j.procs.2020.09.248}, year = {2020}, date = {2020-01-01}, journal = {Procedia Computer Science}, volume = {176}, pages = {2117--2124}, publisher = {Elsevier B.V.}, abstract = {This article presents the results of an eye-tracking experiment on Russian language material, exploring the reading process in secondary school children with general speech underdevelopment. The objective of the study is to determine which type of text makes reading and comprehension easier: a lexically adapted text or a grammatically adapted one. The data from Russian-speaking participants from a compulsory school (experimental group) and 28 secondary school children with normal speech development (control group) indicate that both types of adaptation were effective for recalling information from the text. However, we found that in teenagers with a history of language disorders, lower-level perceptual processes (as indexed by eye-movement parameters) are partially compensated, whereas higher-level comprehension processes remain affected.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This article presents the results of an eye-tracking experiment on Russian language material, exploring the reading process in secondary school children with general speech underdevelopment. The objective of the study is to determine which type of text makes reading and comprehension easier: a lexically adapted text or a grammatically adapted one. The data from Russian-speaking participants from a compulsory school (experimental group) and 28 secondary school children with normal speech development (control group) indicate that both types of adaptation were effective for recalling information from the text. However, we found that in teenagers with a history of language disorders, lower-level perceptual processes (as indexed by eye-movement parameters) are partially compensated, whereas higher-level comprehension processes remain affected. |
Sandrine Zufferey; Willem M Mak; Liesbeth Degand; Ted J M Sanders Advanced learners' comprehension of discourse connectives: The role of L1 transfer across on-line and off-line tasks Journal Article Second Language Research, 31 (3), pp. 389–411, 2015. @article{Zufferey2015, title = {Advanced learners' comprehension of discourse connectives: The role of L1 transfer across on-line and off-line tasks}, author = {Sandrine Zufferey and Willem M Mak and Liesbeth Degand and Ted J M Sanders}, doi = {10.1177/0267658315573349}, year = {2015}, date = {2015-01-01}, journal = {Second Language Research}, volume = {31}, number = {3}, pages = {389--411}, abstract = {Discourse connectives are important indicators of textual coherence, and mastering them is an essential part of acquiring a language. In this article, we compare advanced learners' sensitivity to the meaning conveyed by connectives in an off-line grammaticality judgment task and an on-line reading experiment using eye-tracking. We also assess the influence of first language (L1) transfer by comparing learners' comprehension of two non-native-like semantic uses of connectives in English, often produced by learners due to transfer from French and Dutch. Our results indicate that in an off-line task transfer is an important factor accounting for French- and Dutch-speaking learners' non-native-like comprehension of connectives. During on-line processing, however, learners are as sensitive as native speakers to the meaning conveyed by connectives. These results raise intriguing questions regarding explicit vs. implicit knowledge in language learners.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Discourse connectives are important indicators of textual coherence, and mastering them is an essential part of acquiring a language. In this article, we compare advanced learners' sensitivity to the meaning conveyed by connectives in an off-line grammaticality judgment task and an on-line reading experiment using eye-tracking. We also assess the influence of first language (L1) transfer by comparing learners' comprehension of two non-native-like semantic uses of connectives in English, often produced by learners due to transfer from French and Dutch. Our results indicate that in an off-line task transfer is an important factor accounting for French- and Dutch-speaking learners' non-native-like comprehension of connectives. During on-line processing, however, learners are as sensitive as native speakers to the meaning conveyed by connectives. These results raise intriguing questions regarding explicit vs. implicit knowledge in language learners. |
Wietske Zuiderbaan; Ben M Harvey; Serge O Dumoulin Modeling center–surround configurations in population receptive fields using fMRI Journal Article Journal of Vision, 12 (3), pp. 1–15, 2012. @article{Zuiderbaan2012, title = {Modeling center–surround configurations in population receptive fields using fMRI}, author = {Wietske Zuiderbaan and Ben M Harvey and Serge O Dumoulin}, doi = {10.1167/12.3.10}, year = {2012}, date = {2012-01-01}, journal = {Journal of Vision}, volume = {12}, number = {3}, pages = {1--15}, abstract = {Antagonistic center–surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional Magnetic Resonance Imaging (fMRI) signal amplitudes. Decreased fMRI amplitudes below baseline—0% contrast—are often referred to as “negative” responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center–surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance explained is found for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates for the surround size of the pRF show an increase with eccentricity and over visual areas V1/2/3. For the suppression index, which is based on the ratio between the volumes of both Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI techniques, this method gives the possibility to examine assumptions about center–surround receptive fields in human subjects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Antagonistic center–surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional Magnetic Resonance Imaging (fMRI) signal amplitudes. Decreased fMRI amplitudes below baseline—0% contrast—are often referred to as “negative” responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center–surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance explained is found for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. 
The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates for the surround size of the pRF show an increase with eccentricity and over visual areas V1/2/3. For the suppression index, which is based on the ratio between the volumes of both Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI techniques, this method gives the possibility to examine assumptions about center–surround receptive fields in human subjects. |
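The difference-of-Gaussians pRF model in the Zuiderbaan, Harvey, and Dumoulin (2012) entry above has a compact core: an excitatory centre Gaussian minus a broader, weaker suppressive surround, with the suppression index defined by the ratio of the two Gaussians' volumes (for a 2-D Gaussian, volume scales with amplitude times sigma squared). The sketch below writes this out under the standard formulation; parameter values are illustrative, and the HRF convolution and fitting stages of the full model are omitted.

    import numpy as np

    def dog_prf(x, y, x0, y0, sigma_c, sigma_s, beta):
        """Difference-of-Gaussians pRF: centre minus a wider suppressive
        surround (assumes sigma_s > sigma_c and 0 < beta < 1)."""
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return (np.exp(-r2 / (2 * sigma_c ** 2))
                - beta * np.exp(-r2 / (2 * sigma_s ** 2)))

    def suppression_index(sigma_c, sigma_s, beta):
        """Surround-to-centre volume ratio; the 2*pi factors cancel."""
        return beta * sigma_s ** 2 / sigma_c ** 2

    # The model's linear step: the predicted (pre-HRF) response to a binary
    # stimulus aperture is the overlap of the aperture with the pRF profile.
    x, y = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
    prf = dog_prf(x, y, x0=3.0, y0=0.0, sigma_c=1.5, sigma_s=4.5, beta=0.3)
    aperture = np.abs(x) < 2.0  # e.g., a vertical bar stimulus
    print("response:", float(np.sum(prf * aperture)))
    print("suppression index:", suppression_index(1.5, 4.5, 0.3))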
Jan Zwickel; Hermann J Müller Eye movements as a means to evaluate and improve robots Journal Article International Journal of Social Robotics, 1 (4), pp. 357–366, 2009. @article{Zwickel2009, title = {Eye movements as a means to evaluate and improve robots}, author = {Jan Zwickel and Hermann J Müller}, doi = {10.1007/s12369-009-0033-3}, year = {2009}, date = {2009-01-01}, journal = {International Journal of Social Robotics}, volume = {1}, number = {4}, pages = {357--366}, abstract = {With an increase in their capabilities, robots start to play a role in everyday settings. This necessitates a step from a robot-centered (i.e., teaching humans to adapt to robots) to a more human-centered approach (where robots integrate naturally into human activities). Achieving this will increase the effectiveness of robot usage (e.g., shortening the time required for learning), reduce errors, and increase user acceptance. Robotic camera control will play an important role for a more natural and easier-to-interpret behavior, owing to the central importance of gaze in human communication. This study is intended to provide a first step towards improving camera control by a better understanding of human gaze behavior in social situations. To this end, we registered the eye movements of humans watching different types of movies. In all movies, the same two triangles moved around in a self-propelled fashion. However, crucially, some of the movies elicited the attribution of mental states to the triangles, while others did not. This permitted us to directly distinguish eye movement patterns relating to the attribution of mental states in (perceived) social situations, from the patterns in non-social situations. We argue that a better understanding of what characterizes human gaze patterns in social situations will help shape robotic behavior, make it more natural for humans to communicate with robots, and establish joint attention (to certain objects) between humans and robots. In addition, a better understanding of human gaze in social situations will provide a measure for evaluating whether robots are perceived as social agents rather than non-intentional machines. This could help decide which behaviors a robot should display in order to be perceived as a social interaction partner.}, keywords = {}, pubstate = {published}, tppubtype = {article} } With an increase in their capabilities, robots start to play a role in everyday settings. This necessitates a step from a robot-centered (i.e., teaching humans to adapt to robots) to a more human-centered approach (where robots integrate naturally into human activities). Achieving this will increase the effectiveness of robot usage (e.g., shortening the time required for learning), reduce errors, and increase user acceptance. Robotic camera control will play an important role for a more natural and easier-to-interpret behavior, owing to the central importance of gaze in human communication. This study is intended to provide a first step towards improving camera control by a better understanding of human gaze behavior in social situations. To this end, we registered the eye movements of humans watching different types of movies. In all movies, the same two triangles moved around in a self-propelled fashion. However, crucially, some of the movies elicited the attribution of mental states to the triangles, while others did not. 
This permitted us to directly distinguish eye movement patterns relating to the attribution of mental states in (perceived) social situations, from the patterns in non-social situations. We argue that a better understanding of what characterizes human gaze patterns in social situations will help shape robotic behavior, make it more natural for humans to communicate with robots, and establish joint attention (to certain objects) between humans and robots. In addition, a better understanding of human gaze in social situations will provide a measure for evaluating whether robots are perceived as social agents rather than non-intentional machines. This could help decide which behaviors a robot should display in order to be perceived as a social interaction partner. |
Jan Zwickel; Melissa L -H Võ How the presence of persons biases eye movements Journal Article Psychonomic Bulletin & Review, 17 (2), pp. 257–262, 2010. @article{Zwickel2010, title = {How the presence of persons biases eye movements}, author = {Jan Zwickel and Melissa L -H V{õ}}, doi = {10.3758/PBR.17.2.257}, year = {2010}, date = {2010-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {17}, number = {2}, pages = {257--262}, abstract = {We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We investigated modulation of gaze behavior of observers viewing complex scenes that included a person. To assess spontaneous orientation-following, and in contrast to earlier studies, we did not make the person salient via instruction or low-level saliency. Still, objects that were referred to by the orientation of the person were visited earlier, more often, and longer than when they were not referred to. Analysis of fixation sequences showed that the number of saccades to the cued and uncued objects differed only for saccades that started from the head region, but not for saccades starting from a control object or from a body region. We therefore argue that viewing a person leads to an increase in spontaneous following of the person's viewing direction even when the person plays no role in scene understanding and is not made prominent. |
Jan Zwickel; Mathias Hegele; Marc Grosjean Ocular tracking of biological and nonbiological motion: The effect of instructed agency Journal Article Psychonomic Bulletin & Review, 19 (1), pp. 52–57, 2012. @article{Zwickel2012, title = {Ocular tracking of biological and nonbiological motion: The effect of instructed agency}, author = {Jan Zwickel and Mathias Hegele and Marc Grosjean}, doi = {10.3758/s13423-011-0193-7}, year = {2012}, date = {2012-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {19}, number = {1}, pages = {52--57}, abstract = {Recent findings suggest that visuomotor performance is modulated by people's beliefs about the agency (e.g., animate vs. inanimate) behind the events they perceive. This study investigated the effect of instructed agency on ocular tracking of point-light motions with biological and nonbiological velocity profiles. The motions followed either a relatively simple (ellipse) or a more complex (scribble) trajectory, and agency was manipulated by informing the participants that the motions they saw were either human or computer generated. In line with previous findings, tracking performance was better for biological than for nonbiological motions, and this effect was particularly pronounced for the simpler (elliptical) motions. The biological advantage was also larger for the human than for the computer instruction condition, but only for a measure that captured the predictive component of smooth pursuit. These results suggest that ocular tracking is influenced by the internal forward model people choose to adopt.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent findings suggest that visuomotor performance is modulated by people's beliefs about the agency (e.g., animate vs. inanimate) behind the events they perceive. This study investigated the effect of instructed agency on ocular tracking of point-light motions with biological and nonbiological velocity profiles. The motions followed either a relatively simple (ellipse) or a more complex (scribble) trajectory, and agency was manipulated by informing the participants that the motions they saw were either human or computer generated. In line with previous findings, tracking performance was better for biological than for nonbiological motions, and this effect was particularly pronounced for the simpler (elliptical) motions. The biological advantage was also larger for the human than for the computer instruction condition, but only for a measure that captured the predictive component of smooth pursuit. These results suggest that ocular tracking is influenced by the internal forward model people choose to adopt. |
Ariel Zylberberg; Pablo Barttfeld; Mariano Sigman The construction of confidence in a perceptual decision Journal Article Frontiers in Integrative Neuroscience, 6 (September), pp. 1–10, 2012. @article{Zylberberg2012, title = {The construction of confidence in a perceptual decision}, author = {Ariel Zylberberg and Pablo Barttfeld and Mariano Sigman}, doi = {10.3389/fnint.2012.00079}, year = {2012}, date = {2012-01-01}, journal = {Frontiers in Integrative Neuroscience}, volume = {6}, number = {September}, pages = {1--10}, abstract = {Decision-making involves the selection of one out of many possible courses of action. A decision may bear on other decisions, as when humans seek a second medical opinion before undergoing a risky surgical intervention. These "meta-decisions" are mediated by confidence judgments-the degree to which decision-makers consider that a choice is likely to be correct. We studied how subjective confidence is constructed from noisy sensory evidence. The psychophysical kernels used to convert sensory information into choice and confidence decisions were precisely reconstructed measuring the impact of small fluctuations in sensory input. This is shown in two independent experiments in which human participants made a decision about the direction of motion of a set of randomly moving dots, or compared the brightness of a group of fluctuating bars, followed by a confidence report. The results of both experiments converged to show that: (1) confidence was influenced by evidence during a short window of time at the initial moments of the decision, and (2) confidence was influenced by evidence for the selected choice but was virtually blind to evidence for the non-selected choice. Our findings challenge classical models of subjective confidence-which posit that the difference of evidence in favor of each choice is the seed of the confidence signal.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Decision-making involves the selection of one out of many possible courses of action. A decision may bear on other decisions, as when humans seek a second medical opinion before undergoing a risky surgical intervention. These "meta-decisions" are mediated by confidence judgments-the degree to which decision-makers consider that a choice is likely to be correct. We studied how subjective confidence is constructed from noisy sensory evidence. The psychophysical kernels used to convert sensory information into choice and confidence decisions were precisely reconstructed measuring the impact of small fluctuations in sensory input. This is shown in two independent experiments in which human participants made a decision about the direction of motion of a set of randomly moving dots, or compared the brightness of a group of fluctuating bars, followed by a confidence report. The results of both experiments converged to show that: (1) confidence was influenced by evidence during a short window of time at the initial moments of the decision, and (2) confidence was influenced by evidence for the selected choice but was virtually blind to evidence for the non-selected choice. Our findings challenge classical models of subjective confidence-which posit that the difference of evidence in favor of each choice is the seed of the confidence signal. |
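The psychophysical kernels named in the Zylberberg, Barttfeld, and Sigman (2012) entry above amount to a choice-triggered average: for each stimulus frame, average the zero-mean evidence fluctuations separately by response and take the difference. The sketch below is a minimal, generic version of that computation (a confidence kernel follows the same recipe, splitting trials on high versus low confidence instead); it illustrates the technique and is not the authors' code.

    import numpy as np

    def psychophysical_kernel(fluctuations, choices):
        """Choice-triggered average of stimulus fluctuations.

        fluctuations : (n_trials, n_frames) zero-mean evidence fluctuations
                       (e.g., frame-by-frame motion energy)
        choices      : (n_trials,) array of 0/1 decisions
        Returns one value per frame; positive values indicate frames whose
        fluctuations pushed observers toward choice 1.
        """
        fluctuations = np.asarray(fluctuations, float)
        choices = np.asarray(choices)
        return (fluctuations[choices == 1].mean(axis=0)
                - fluctuations[choices == 0].mean(axis=0))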
Ariel Zylberberg; Manuel Oliva; Mariano Sigman Pupil dilation: A fingerprint of temporal selection during the "Attentional Blink" Journal Article Frontiers in Psychology, 3 (AUG), pp. 1–6, 2012. @article{Zylberberg2012a, title = {Pupil dilation: A fingerprint of temporal selection during the "Attentional Blink"}, author = {Ariel Zylberberg and Manuel Oliva and Mariano Sigman}, doi = {10.3389/fpsyg.2012.00316}, year = {2012}, date = {2012-01-01}, journal = {Frontiers in Psychology}, volume = {3}, number = {AUG}, pages = {1--6}, abstract = {Pupil dilation indexes cognitive events of behavioral relevance, like the storage of information to memory and the deployment of attention. Yet, given the slow temporal response of the pupil dilation, it is not known from previous studies whether the pupil can index cognitive events in the short time scale of ∼100 ms. Here we measured the size of the pupil in the Attentional Blink (AB) experiment, a classic demonstration of attentional limitations in processing rapidly presented stimuli. In the AB, two targets embedded in a sequence have to be reported and the second stimulus is often missed if presented between 200 and 500 ms after the first. We show that pupil dilation can be used as a marker of cognitive processing in AB, revealing both the timing and amount of cognitive processing. Specifically, we found that in the time range where the AB is known to occur: (i) the pupil dilation was delayed, mimicking the pattern of response times in the Psychological Refractory Period (PRP) paradigm, (ii) the amplitude of the pupil was reduced relative to that of larger lags, even for correctly identified targets, and (iii) the amplitude of the pupil was smaller for missed than for correctly reported targets. These results support two-stage theories of the Attentional Blink where a second processing stage is delayed inside the interference regime, and indicate that the pupil dilation can be used as a marker of cognitive processing in the time scale of ∼100 ms. Furthermore, given the known relation between the pupil dilation and the activity of the locus coeruleus, our results also support theories that link the serial stage to the action of a specific neuromodulator, norepinephrine.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Pupil dilation indexes cognitive events of behavioral relevance, like the storage of information to memory and the deployment of attention. Yet, given the slow temporal response of the pupil dilation, it is not known from previous studies whether the pupil can index cognitive events in the short time scale of ∼100 ms. Here we measured the size of the pupil in the Attentional Blink (AB) experiment, a classic demonstration of attentional limitations in processing rapidly presented stimuli. In the AB, two targets embedded in a sequence have to be reported and the second stimulus is often missed if presented between 200 and 500 ms after the first. We show that pupil dilation can be used as a marker of cognitive processing in AB, revealing both the timing and amount of cognitive processing. Specifically, we found that in the time range where the AB is known to occur: (i) the pupil dilation was delayed, mimicking the pattern of response times in the Psychological Refractory Period (PRP) paradigm, (ii) the amplitude of the pupil was reduced relative to that of larger lags, even for correctly identified targets, and (iii) the amplitude of the pupil was smaller for missed than for correctly reported targets. These results support two-stage theories of the Attentional Blink where a second processing stage is delayed inside the interference regime, and indicate that the pupil dilation can be used as a marker of cognitive processing in the time scale of ∼100 ms. Furthermore, given the known relation between the pupil dilation and the activity of the locus coeruleus, our results also support theories that link the serial stage to the action of a specific neuromodulator, norepinephrine. |
Ariel Zylberberg; Daniel M Wolpert; Michael N Shadlen Counterfactual reasoning underlies the learning of priors in decision making Journal Article Neuron, 99 (5), pp. 1083–1097, 2018. @article{Zylberberg2018, title = {Counterfactual reasoning underlies the learning of priors in decision making}, author = {Ariel Zylberberg and Daniel M Wolpert and Michael N Shadlen}, doi = {10.1016/j.neuron.2018.07.035}, year = {2018}, date = {2018-01-01}, journal = {Neuron}, volume = {99}, number = {5}, pages = {1083--1097}, publisher = {The Authors}, abstract = {Accurate decisions require knowledge of prior probabilities (e.g., prevalence or base rate), but it is unclear how prior probabilities are learned in the absence of a teacher. We hypothesized that humans could learn base rates from experience making decisions, even without feedback. Participants made difficult decisions about the direction of dynamic random dot motion. Across blocks of 15–42 trials, the base rate favoring left or right varied. Participants were not informed of the base rate or choice accuracy, yet they gradually biased their choices and thereby increased accuracy and confidence in their decisions. They achieved this by updating knowledge of base rate after each decision, using a counterfactual representation of confidence that simulates a neutral prior. The strategy is consistent with Bayesian updating of belief and suggests that humans represent both true confidence, which incorporates the evolving belief of the prior, and counterfactual confidence, which discounts the prior. Zylberberg et al. show that human decision makers can learn environmental biases from sequences of difficult decisions, without feedback about accuracy, by calculating the belief that the decisions would have been correct in an unbiased environment—a form of counterfactual confidence.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Accurate decisions require knowledge of prior probabilities (e.g., prevalence or base rate), but it is unclear how prior probabilities are learned in the absence of a teacher. We hypothesized that humans could learn base rates from experience making decisions, even without feedback. Participants made difficult decisions about the direction of dynamic random dot motion. Across blocks of 15–42 trials, the base rate favoring left or right varied. Participants were not informed of the base rate or choice accuracy, yet they gradually biased their choices and thereby increased accuracy and confidence in their decisions. They achieved this by updating knowledge of base rate after each decision, using a counterfactual representation of confidence that simulates a neutral prior. The strategy is consistent with Bayesian updating of belief and suggests that humans represent both true confidence, which incorporates the evolving belief of the prior, and counterfactual confidence, which discounts the prior. Zylberberg et al. show that human decision makers can learn environmental biases from sequences of difficult decisions, without feedback about accuracy, by calculating the belief that the decisions would have been correct in an unbiased environment—a form of counterfactual confidence. |
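The updating scheme this abstract describes (revising a base-rate belief after every decision using a confidence read-out computed as if the prior were neutral) can be written compactly as Bayesian learning over a grid of candidate base rates. The sketch below is only an illustration under an assumed Gaussian evidence model; the grid representation, the parameters mu and sigma, and the trial counts are assumptions for the example, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of candidate base rates P(rightward) and a flat initial belief over them
base_rates = np.linspace(0.05, 0.95, 19)
belief = np.ones_like(base_rates) / base_rates.size

mu, sigma = 0.5, 1.0   # assumed evidence strength and noise
true_base_rate = 0.8   # hidden environmental bias to be learned
n_trials, n_correct = 300, 0

def counterfactual_p_right(x):
    """P(rightward | evidence x) under a NEUTRAL 0.5 prior (counterfactual confidence)."""
    lr = np.exp(2.0 * mu * x / sigma**2)  # Gaussian likelihood ratio
    return lr / (1.0 + lr)

for trial in range(n_trials):
    direction = 1 if rng.random() < true_base_rate else -1
    x = rng.normal(direction * mu, sigma)     # noisy momentary evidence
    p_neutral = counterfactual_p_right(x)     # prior-free read-out
    prior_right = float(np.sum(belief * base_rates))
    # The choice itself uses the TRUE posterior, which folds in the learned prior
    p_right = p_neutral * prior_right / (
        p_neutral * prior_right + (1.0 - p_neutral) * (1.0 - prior_right))
    choice = 1 if p_right > 0.5 else -1
    n_correct += (choice == direction)
    # Base-rate learning uses the COUNTERFACTUAL posterior: each candidate base
    # rate b is scored by b*p_neutral + (1-b)*(1-p_neutral), the marginal
    # likelihood of the evidence. No feedback about accuracy is needed.
    belief *= base_rates * p_neutral + (1.0 - base_rates) * (1.0 - p_neutral)
    belief /= belief.sum()

print("estimated base rate:", round(float(np.sum(belief * base_rates)), 3))
print("choice accuracy:", n_correct / n_trials)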
All EyeLink publications, sorted alphabetically by first author. You can search by keywords such as Visual Search, Smooth Pursuit, Parkinsons, etc., or search by author name. For EyeLink publications in specific research areas, please see the individual solutions pages. If you notice we have missed any EyeLink publications, please contact us by email!