Nicola C Anderson; Mieke Donk; Martijn Meeter The influence of a scene preview on eye movement behavior in natural scenes Journal Article Psychonomic Bulletin & Review, 23, pp. 1794–1801, 2016. @article{Anderson2016, title = {The influence of a scene preview on eye movement behavior in natural scenes}, author = {Nicola C Anderson and Mieke Donk and Martijn Meeter}, doi = {10.3758/s13423-016-1035-4}, year = {2016}, date = {2016-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {23}, pages = {1794--1801}, abstract = {Rich contextual and semantic information can be extracted from only a brief presentation of a natural scene. This is presumed to be activated quickly enough to guide initial eye movements into a scene. However, early, short-latency eye movements in natural scenes have been shown to be dependent on the salience distribution across the image (Anderson, Ort, Kruijne, Meeter, & Donk, 2015). In the present work, we manipulated the salience distribution across a natural scene by changing the global contrast. We showed participants a brief real or nonsense preview of the scene and examined the time-course of eye movement guidance. A real preview decreased the latency and increased the amplitude of initial saccades into the image, suggesting that the preview allowed observers to obtain additional contextual information that would otherwise not be available. However, the preview did not completely override the initial tendency for short-latency saccades to be guided by the underlying salience distribution of the image. We discuss these findings in the context of oculomotor selection based on the integration of contextual information and low-level features in a natural scene.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Elaine J Anderson; Marc S Tibber; Samuel D Schwarzkopf; Sukhwinder S Shergill; Emilio Fernandez-Egea; Geraint Rees; Steven C Dakin Visual population receptive fields in people with schizophrenia have reduced inhibitory surrounds Journal Article Journal of Neuroscience, 37 (6), pp. 1546–1556, 2017. @article{Anderson2017, title = {Visual population receptive fields in people with schizophrenia have reduced inhibitory surrounds}, author = {Elaine J Anderson and Marc S Tibber and Samuel D Schwarzkopf and Sukhwinder S Shergill and Emilio Fernandez-Egea and Geraint Rees and Steven C Dakin}, doi = {10.1523/JNEUROSCI.3620-15.2016}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neuroscience}, volume = {37}, number = {6}, pages = {1546--1556}, abstract = {People with schizophrenia (SZ) experience abnormal visual perception on a range of visual tasks, which has been linked to abnormal synaptic transmission and an imbalance between cortical excitation and inhibition. However, differences in the underlying architecture of visual cortex neurons, which might explain these visual anomalies, have yet to be reported in vivo. Here, we probed the neural basis of these deficits using fMRI and population receptive field (pRF) mapping to infer properties of visually responsive neurons in people with SZ. We employed a difference-of-Gaussian model to capture the center-surround configuration of the pRF, providing critical information about the spatial scale of the pRF's inhibitory surround. Our analysis reveals that SZ is associated with reduced pRF size in early retinotopic visual cortex, as well as a reduction in size and depth of the inhibitory surround in V1, V2, and V4. We consider how reduced inhibition might explain the diverse range of visual deficits reported in SZ.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Nicola C Anderson; Mieke Donk Salient object changes influence overt attentional prioritization and object-based targeting in natural scenes Journal Article PLoS ONE, 12 (2), pp. e0172132, 2017. @article{Anderson2017a, title = {Salient object changes influence overt attentional prioritization and object-based targeting in natural scenes}, author = {Nicola C Anderson and Mieke Donk}, doi = {10.1371/journal.pone.0172132}, year = {2017}, date = {2017-01-01}, journal = {PLoS ONE}, volume = {12}, number = {2}, pages = {e0172132}, abstract = {A change to an object in natural scenes attracts attention when it occurs during a fixation. However, when a change occurs during a saccade, and is masked by saccadic suppression, it typically does not capture the gaze in a bottom-up manner. In the present work, we investigated how the type and direction of salient changes to objects affect the prioritization and targeting of objects in natural scenes. We asked observers to look around a scene in preparation for a later memory test. After a period of time, an object in the scene was increased or decreased in salience either during a fixation (with a transient signal) or during a saccade (without transient signal), or it was not changed at all. Changes that were made during a fixation attracted the eyes both when the change involved an increase and a decrease in salience. However, changes that were made during a saccade only captured the eyes when the change was an increase in salience, relative to the baseline no-change condition. These results suggest that the prioritization of object changes can be influenced by the underlying salience of the changed object. In addition, object changes that occurred with a transient signal (which is itself a salient signal) resulted in more central object targeting. Taken together, our results suggest that salient signals in a natural scene are an important component in both object prioritization and targeting in natural scene viewing, insofar as they align with object locations.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Haena Kim On the representational nature of value-driven spatial attentional biases Journal Article Journal of Neurophysiology, 120 (5), pp. 2654–2658, 2018. @article{Anderson2018a, title = {On the representational nature of value-driven spatial attentional biases}, author = {Brian A Anderson and Haena Kim}, doi = {10.1152/jn.00489.2018}, year = {2018}, date = {2018-01-01}, journal = {Journal of Neurophysiology}, volume = {120}, number = {5}, pages = {2654--2658}, abstract = {Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Haena Kim Mechanisms of value-learning in the guidance of spatial attention Journal Article Cognition, 178, pp. 26–36, 2018. @article{Anderson2018b, title = {Mechanisms of value-learning in the guidance of spatial attention}, author = {Brian A Anderson and Haena Kim}, doi = {10.1016/j.cognition.2018.05.005}, year = {2018}, date = {2018-01-01}, journal = {Cognition}, volume = {178}, pages = {26--36}, publisher = {Elsevier}, abstract = {The role of associative reward learning in the guidance of feature-based attention is well established. The extent to which reward learning can modulate spatial attention has been much more controversial. At least one demonstration of a persistent spatial attention bias following space-based associative reward learning has been reported. At the same time, multiple other experiments have been published failing to demonstrate enduring attentional biases towards locations at which a target, if found, yields high reward. This is in spite of evidence that participants use reward structures to inform their decisions where to search, leading some to suggest that, unlike feature-based attention, spatial attention may be impervious to the influence of learning from reward structures. Here, we demonstrate a robust bias towards regions of a scene that participants were previously rewarded for selecting. This spatial bias relies on representations that are anchored to the configuration of objects within a scene. The observed bias appears to be driven specifically by reinforcement learning, and can be observed with equal strength following non-reward corrective feedback. The time course of the bias is consistent with a transient shift of attention, rather than a strategic search pattern, and is evident in eye movement patterns during free viewing. Taken together, our findings reconcile previously conflicting reports and offer an integrative account of how learning from feedback shapes the spatial attention system.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Haena Kim Test–retest reliability of value-driven attentional capture Journal Article Behavior Research Methods, 51 (2), pp. 720–726, 2019. @article{Anderson2019a, title = {Test–retest reliability of value-driven attentional capture}, author = {Brian A Anderson and Haena Kim}, doi = {10.3758/s13428-018-1079-7}, year = {2019}, date = {2019-01-01}, journal = {Behavior Research Methods}, volume = {51}, number = {2}, pages = {720--726}, abstract = {Attention is biased toward learned predictors of reward. The degree to which attention is automatically drawn to arbitrary reward cues has been linked to a variety of psychopathologies, including drug dependence, HIV-risk behaviors, depressive symptoms, and attention deficit/hyperactivity disorder. In the context of addiction specifically, attentional biases toward drug cues have been related to drug craving and treatment outcomes. Given the potential role of value-based attention in psychopathology, the ability to quantify the magnitude of such bias before and after a treatment intervention in order to assess treatment-related changes in attention allocation would be desirable. However, the test–retest reliability of value-driven attentional capture by arbitrary reward cues has not been established. In the present study, we show that an oculomotor measure of value-driven attentional capture produces highly robust test–retest reliability for a behavioral assessment, whereas the response time (RT) measure more commonly used in the attentional bias literature does not. Our findings provide methodological support for the ability to obtain a reliable measure of susceptibility to value-driven attentional capture at multiple points in time, and they highlight a limitation of RT-based measures that should inform the use of attentional-bias tasks as an assessment tool.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Haena Kim On the relationship between value-driven and stimulus-driven attentional capture Journal Article Attention, Perception, and Psychophysics, 81 (3), pp. 607–613, 2019. @article{Anderson2019b, title = {On the relationship between value-driven and stimulus-driven attentional capture}, author = {Brian A Anderson and Haena Kim}, doi = {10.3758/s13414-019-01670-2}, year = {2019}, date = {2019-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {81}, number = {3}, pages = {607--613}, abstract = {Reward history, physical salience, and task relevance all influence the degree to which a stimulus competes for attention, reflecting value-driven, stimulus-driven, and goal-contingent attentional capture, respectively. Theories of value-driven attention have likened reward cues to physically salient stimuli, positing that reward cues are preferentially processed in early visual areas as a result of value-modulated plasticity in the visual system. Such theories predict a strong coupling between value-driven and stimulus-driven attentional capture across individuals. In the present study, we directly test this hypothesis, and demonstrate a robust correlation between value-driven and stimulus-driven attentional capture. Our findings suggest substantive overlap in the mechanisms of competition underlying the attentional priority of reward cues and physically salient stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson Using aversive conditioning with near-real-time feedback to shape eye movements during naturalistic viewing Journal Article Behavior Research Methods, pp. 1–10, 2020. @article{Anderson2020, title = {Using aversive conditioning with near-real-time feedback to shape eye movements during naturalistic viewing}, author = {Brian A Anderson}, doi = {10.3758/s13428-020-01476-3}, year = {2020}, date = {2020-01-01}, journal = {Behavior Research Methods}, pages = {1--10}, abstract = {Strategically shaping patterns of eye movements through training has manifold promising applications, with the potential to improve the speed and efficiency of visual search, improve the ability of humans to extract information from complex displays, and help correct disordered eye movement patterns. However, training how a person moves their eyes when viewing an image or scene is notoriously difficult, with typical approaches relying on explicit instruction and strategy, which have notable limitations. The present study introduces a novel approach to eye movement training using aversive conditioning with near-real-time feedback. Participants viewed indoor scenes (eight scenes presented over 48 trials) with the goal of remembering those scenes for a later memory test. During viewing, saccades meeting specific amplitude and direction criteria probabilistically triggered an aversive electric shock, which was felt within 50 ms after the eliciting eye movement, allowing for a close temporal coupling between an oculomotor behavior and the feedback intended to shape it. Results demonstrate a bias against performing an initial saccade in the direction paired with shock (Experiment 1) or generally of the amplitude paired with shock (Experiment 2), an effect that operates without apparent awareness of the relationship between shocks and saccades, persists into extinction, and generalizes to the viewing of novel images. The present study serves as a proof of concept concerning the implementation of near-real-time feedback in eye movement training.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Andy Jeesu Kim Selection history-driven signal suppression Journal Article Visual Cognition, 28 (2), pp. 112–118, 2020. @article{Anderson2020a, title = {Selection history-driven signal suppression}, author = {Brian A Anderson and Andy Jeesu Kim}, doi = {10.1080/13506285.2020.1727599}, year = {2020}, date = {2020-01-01}, journal = {Visual Cognition}, volume = {28}, number = {2}, pages = {112--118}, publisher = {Taylor & Francis}, abstract = {The control of attention is influenced by current goals, physical salience, and selection history. Under certain conditions, physically salient stimuli can be strategically suppressed below baseline levels, facilitating visual search for a target. It is unclear whether such signal suppression is a broad mechanism of selective information processing that extends to other sources of attentional priority evoked by task-irrelevant stimuli, or whether it is particular to physically salient perceptual signals. Using eye movements, in the present study we highlight a case where a former-target-colour distractor facilitates search for a target on a large percentage of trials. Our findings provide evidence that the principle of signal suppression extends to other sources of attentional priority beyond physical salience, and that selection history can be leveraged to strategically guide attention away from a stimulus.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Mark K Britton On the automaticity of attentional orienting to threatening stimuli Journal Article Emotion, 20 (6), pp. 1109–1112, 2020. @article{Anderson2020b, title = {On the automaticity of attentional orienting to threatening stimuli}, author = {Brian A Anderson and Mark K Britton}, doi = {10.1037/emo0000596}, year = {2020}, date = {2020-01-01}, journal = {Emotion}, volume = {20}, number = {6}, pages = {1109--1112}, abstract = {Attention is biased toward stimuli that have been associated with aversive outcomes in the past. This bias has previously been interpreted as reflecting automatic orienting toward threat signals. However, in many prior studies, either the threatening stimulus provided valuable predictive information, signaling the possibility of an otherwise unavoidable punishment and thereby allowing participants to brace themselves, or the aversive event could be avoided with fast and accurate task performance. Under these conditions, monitoring for threat could be viewed as an adaptive strategy. In the present study, fixating a color stimulus immediately resulted in a shock on some trials, providing a direct incentive not to look at the stimulus. Nevertheless, this contingency resulted in participants fixating the shock-associated stimulus more frequently than a neutral distractor matched for physical salience. Our findings demonstrate that threatening stimuli are automatically attended even when attending such stimuli is actually responsible for triggering the aversive event, providing compelling evidence for automaticity.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Brian A Anderson; Haena Kim; Mark K Britton; Andy Jeesu Kim Measuring attention to reward as an individual trait: the value-driven attention questionnaire (VDAQ) Journal Article Psychological Research, 84 (8), pp. 2122–2137, 2020. @article{Anderson2020c, title = {Measuring attention to reward as an individual trait: the value-driven attention questionnaire (VDAQ)}, author = {Brian A Anderson and Haena Kim and Mark K Britton and Andy Jeesu Kim}, doi = {10.1007/s00426-019-01212-3}, year = {2020}, date = {2020-01-01}, journal = {Psychological Research}, volume = {84}, number = {8}, pages = {2122--2137}, publisher = {Springer Berlin Heidelberg}, abstract = {Reward history is a powerful determinant of what we pay attention to. This influence of reward on attention varies substantially across individuals, being related to a variety of personality variables and clinical conditions. Currently, the ability to measure and quantify attention-to-reward is restricted to the use of psychophysical laboratory tasks, which limits research into the construct in a variety of ways. In the present study, we introduce a questionnaire designed to provide a brief and accessible means of assessing attention-to-reward. Scores on the questionnaire correlate with other measures known to be related to attention-to-reward and predict performance on multiple laboratory tasks measuring the construct. In demonstrating this relationship, we also provide evidence that attention-to-reward as measured in the lab, an automatic and implicit bias in information processing, is related to overt behaviors and motivations in everyday life as assessed via the questionnaire. Variation in scores on the questionnaire is additionally associated with a distinct biomarker in brain connectivity, and the questionnaire exhibits acceptable test–retest reliability. Overall, the Value-Driven Attention Questionnaire (VDAQ) provides a useful proxy-measure of attention-to-reward that is much more accessible than typical laboratory assessments.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Ariana R Andrei; Sorin A Pojoga; Roger Janz; Valentin Dragoi Integration of cortical population signals for visual perception Journal Article Nature Communications, 10 , pp. 3832, 2019. @article{Andrei2019, title = {Integration of cortical population signals for visual perception}, author = {Ariana R Andrei and Sorin A Pojoga and Roger Janz and Valentin Dragoi}, doi = {10.1038/s41467-019-11736-2}, year = {2019}, date = {2019-12-01}, journal = {Nature Communications}, volume = {10}, pages = {3832}, publisher = {Nature Publishing Group}, abstract = {Visual stimuli evoke heterogeneous responses across nearby neural populations. These signals must be locally integrated to contribute to perception, but the principles underlying this process are unknown. Here, we exploit the systematic organization of orientation preference in macaque primary visual cortex (V1) and perform causal manipulations to examine the limits of signal integration. Optogenetic stimulation and visual stimuli are used to simultaneously drive two neural populations with overlapping receptive fields. We report that optogenetic stimulation raises firing rates uniformly across conditions, but improves the detection of visual stimuli only when activating cells that are preferentially-tuned to the visual stimulus. Further, we show that changes in correlated variability are exclusively present when the optogenetically and visually-activated populations are functionally-proximal, suggesting that correlation changes represent a hallmark of signal integration. Our results demonstrate that information from functionally-proximal neurons is pooled for perception, but functionally-distal signals remain independent. Primary visual cortical neurons exhibit diverse responses to visual stimuli yet how these signals are integrated during visual perception is not well understood. 
Here, the authors show that optogenetic stimulation of neurons situated near the visually-driven population leads to improved orientation detection in monkeys through changes in correlated variability.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Sally Andrews; Aaron Veldre Wrapping up sentence comprehension: The role of task demands and individual differences Journal Article Scientific Studies of Reading, pp. 1–18, 2020. @article{Andrews2020, title = {Wrapping up sentence comprehension: The role of task demands and individual differences}, author = {Sally Andrews and Aaron Veldre}, doi = {10.1080/10888438.2020.1817028}, year = {2020}, date = {2020-01-01}, journal = {Scientific Studies of Reading}, pages = {1--18}, publisher = {Routledge}, abstract = {This study used wrap-up effects on eye movements to assess the relationship between online reading behavior and comprehension. Participants, assessed on measures of reading, vocabulary, and spelling, read short passages that manipulated whether a syntactic boundary was unmarked by punctuation, weakly marked by a comma, or strongly marked by a period. Comprehension demands were manipulated by presenting questions after either 25% or 100% of passages. Wrap-up effects at punctuation boundaries manifested principally in rereading of earlier text and were more marked in lower proficiency readers. High comprehension load was associated with longer total reading time but had little impact on wrap-up effects. The relationship between eye movements and comprehension accuracy suggested that poor comprehension was associated with a shallower reading strategy under low comprehension demands. The implications of these findings for understanding how the processes involved in self-regulating comprehension are modulated by reading proficiency and comprehension goals are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Efsun Annac; Mathias Pointner; Patrick H Khader; Hermann J Müller; Xuelian Zang; Thomas Geyer Recognition of incidentally learned visual search arrays is supported by fixational eye movements Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 45 (12), pp. 2147–2164, 2019. @article{Annac2019, title = {Recognition of incidentally learned visual search arrays is supported by fixational eye movements}, author = {Efsun Annac and Mathias Pointner and Patrick H Khader and Hermann J Müller and Xuelian Zang and Thomas Geyer}, doi = {10.1037/xlm0000702}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {45}, number = {12}, pages = {2147--2164}, abstract = {Repeated encounter of abstract target-distractor letter arrangements leads to improved visual search for such displays. This contextual-cueing effect is attributed to incidental learning of display configurations. Whether observers can consciously access the memory underlying the cueing effect is still a controversial issue. The current study uses a novel recognition task and eyetracking to tackle this question. Experiment 1 investigated observers' ability to recognize or "generate" the display quadrant of the target in a previous search array when the target was now substituted by distractor element as well as where observers' eye fixations would fall while they freely viewed the recognition display, examining the link between the fixation pattern and explicit recognition judgments. Experiment 2 tested whether eye fixations would serve a critical role for explicit retrieval from context memory. Experiment 3 asked whether eye fixations of the target region are critical for context-based facilitation of search reaction times to manifest. The results revealed longer fixational dwell times in the target quadrant for learned relative to foil displays. 
Further, explicit recognition was enhanced, and above chance level, when observers were made to fixate the target quadrant as compared to when they were prevented from doing so. However, the manifestation of contextual cueing of visual search did itself not require fixations of the target quadrant. Moreover, contextual-cueing of search reaction times was significantly correlated with both fixational dwell times and observers' explicit generation performance. The results argue in favor of contextual cueing of visual search being the result of a single, explicit, memory system, though it could nevertheless receive support from separable-automatic versus controlled-retrieval processes. Fixational eye movements, that is, the directed overt allocation of visual attention, provide an interface between these processes in context cueing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Ulrich Ansorge; Heinz-Werner Priess; Dirk Kerzel Effects of relevant and irrelevant color singletons on inhibition of return and attentional capture Journal Article Attention, Perception, and Psychophysics, 75 (8), pp. 1687–1702, 2013. @article{Ansorge2013, title = {Effects of relevant and irrelevant color singletons on inhibition of return and attentional capture}, author = {Ulrich Ansorge and Heinz-Werner Priess and Dirk Kerzel}, doi = {10.3758/s13414-013-0521-2}, year = {2013}, date = {2013-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {75}, number = {8}, pages = {1687--1702}, abstract = {We tested whether color singletons lead to saccadic and manual inhibition of return (IOR; i.e., slower responses at cued locations) and whether IOR depended on the relevance of the color singletons. The target display was preceded by a nonpredictive cue display. In three experiments, half of the cues were response-relevant, because participants had to perform a discrimination task at the cued location. With the exception of Experiment 2, none of the cue colors matched the target color. We observed saccadic IOR after color singletons, which was greater for slow than for fast responses. Furthermore, when the relevant cue color matched the target color, we observed attentional capture (i.e., faster responses at cued locations) with rapid responses, but IOR with slower responses, which provides evidence for attentional deallocation. When the cue display was completely response-irrelevant in two additional experiments, we did not find evidence for IOR. Instead, we found attentional capture when the cue color matched the target color. Also, attentional capture was greater for rapid responses and with short cue-target intervals. 
Thus, IOR emerges when cues are relevant and do not match the target color, whereas attentional capture emerges with relevant and irrelevant cues that match the target color.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Katharina Anton-Erxleben; Katrin Herrmann; Marisa Carrasco Independent Effects of Adaptation and Attention on Perceived Speed Journal Article Psychological Science, 24 (2), pp. 150–159, 2013. @article{AntonErxleben2013, title = {Independent Effects of Adaptation and Attention on Perceived Speed}, author = {Katharina Anton-Erxleben and Katrin Herrmann and Marisa Carrasco}, doi = {10.1177/0956797612449178}, year = {2013}, date = {2013-01-01}, journal = {Psychological Science}, volume = {24}, number = {2}, pages = {150--159}, abstract = {Adaptation and attention are two mechanisms by which sensory systems manage limited bioenergetic resources: Whereas adaptation decreases sensitivity to stimuli just encountered, attention increases sensitivity to behaviorally relevant stimuli. In the visual system, these changes in sensitivity are accompanied by a change in the appearance of different stimulus dimensions, such as speed. Adaptation causes an underestimation of speed, whereas attention leads to an overestimation of speed. In the two experiments reported here, we investigated whether the effects of these mechanisms interact and how they affect the appearance of stimulus features. We tested the effects of adaptation and the subsequent allocation of attention on perceived speed. A quickly moving adaptor decreased the perceived speed of subsequent stimuli, whereas a slow adaptor did not alter perceived speed. Attention increased perceived speed regardless of the adaptation effect, which indicates that adaptation and attention affect perceived speed independently. 
Moreover, the finding that attention can alter perceived speed after adaptation indicates that adaptation is not merely a by-product of neuronal fatigue.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Evan G Antzoulatos; Earl K Miller Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations Journal Article eLife, 5 (NOVEMBER2016), pp. 1–22, 2016. @article{Antzoulatos2016, title = {Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations}, author = {Evan G Antzoulatos and Earl K Miller}, doi = {10.7554/eLife.17822}, year = {2016}, date = {2016-01-01}, journal = {eLife}, volume = {5}, number = {NOVEMBER2016}, pages = {1--22}, abstract = {Categorization has been associated with distributed networks of the primate brain, including the prefrontal (PFC) and posterior parietal cortices (PPC). Although category-selective spiking in PFC and PPC has been established, the frequency-dependent dynamic interactions of frontoparietal networks are largely unexplored. We trained monkeys to perform a delayed-match-to-spatial-category task while recording spikes and local field potentials from the PFC and PPC with multiple electrodes. We found category-selective beta- and delta-band synchrony between and within the areas. However, in addition to the categories, delta synchrony and spiking activity also reflected irrelevant stimulus dimensions. By contrast, beta synchrony only conveyed information about the task-relevant categories. Further, category-selective PFC neurons were synchronized with PPC beta oscillations, while neurons that carried irrelevant information were not. These results suggest that long-range beta-band synchrony could act as a filter that only supports neural representations of the variables relevant to the task at hand.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Paul L Aparicio; Elias B Issa; James J DiCarlo Neurophysiological organization of the middle face patch in macaque inferior temporal cortex Journal Article Journal of Neuroscience, 36 (50), pp. 12729–12745, 2016. @article{Aparicio2016, title = {Neurophysiological organization of the middle face patch in macaque inferior temporal cortex}, author = {Paul L Aparicio and Elias B Issa and James J DiCarlo}, doi = {10.1523/JNEUROSCI.0237-16.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neuroscience}, volume = {36}, number = {50}, pages = {12729--12745}, abstract = {While early cortical visual areas contain fine scale spatial organization of neuronal properties such as orientation preference, the spatial organization of higher-level visual areas is less well understood. The fMRI demonstration of face preferring regions in human ventral cortex (FFA, OFA) and monkey inferior temporal cortex ("face patches") raises the question of how neural selectivity for faces is organized. Here, we targeted hundreds of spatially registered neural recordings to the largest fMRI-identified face selective region in monkeys, the middle face patch (MFP) and show that the MFP contains a graded enrichment of face preferring neurons. At its center, as much as 93% of the sites we sampled responded twice as strongly to faces than to non-face objects. We estimate the maximum neurophysiological size of the MFP to be ∼6 mm in diameter, consistent with its previously reported size under fMRI. Importantly, face selectivity in the MFP varied strongly even between neighboring sites. Additionally, extremely face selective sites were ∼50x more likely to be present inside the MFP than outside. 
These results provide the first direct quantification of the size and neural composition of the MFP by showing that the cortical tissue localized to the fMRI defined region consists of a very high fraction of face preferring sites near its center, and a monotonic decrease in that fraction along any radial spatial axis.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Jens K Apel; Gavin F Revie; Angelo Cangelosi; Rob Ellis; Jeremy Goslin; Martin H Fischer Attention deployment during memorizing and executing complex instructions Journal Article Experimental Brain Research, 214 (2), pp. 249–259, 2011. @article{Apel2011, title = {Attention deployment during memorizing and executing complex instructions}, author = {Jens K Apel and Gavin F Revie and Angelo Cangelosi and Rob Ellis and Jeremy Goslin and Martin H Fischer}, doi = {10.1007/s00221-011-2827-4}, year = {2011}, date = {2011-01-01}, journal = {Experimental Brain Research}, volume = {214}, number = {2}, pages = {249--259}, abstract = {We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects already while listening to further instructions. This rehearsal behavior broke down after 4 instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Keith S Apfelbaum; Bob McMurray Learning during processing: Word learning doesn't wait for word recognition to finish Journal Article Cognitive Science, 41, pp. 706–747, 2017. @article{Apfelbaum2017, title = {Learning during processing: Word learning doesn't wait for word recognition to finish}, author = {Keith S Apfelbaum and Bob McMurray}, doi = {10.1111/cogs.12401}, year = {2017}, date = {2017-01-01}, journal = {Cognitive Science}, volume = {41}, pages = {706--747}, abstract = {Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Eduardo A Aponte; Dario Schöbi; Klaas E Stephan; Jakob Heinzle The stochastic early reaction, inhibition, and late action (SERIA) model for antisaccades Journal Article PLoS computational biology, 13 (8), pp. e1005692, 2017. @article{Aponte2017, title = {The stochastic early reaction, inhibition, and late action (SERIA) model for antisaccades}, author = {Eduardo A Aponte and Dario Schöbi and Klaas E Stephan and Jakob Heinzle}, doi = {10.1371/journal.pcbi.1005692}, year = {2017}, date = {2017-01-01}, journal = {PLoS computational biology}, volume = {13}, number = {8}, pages = {e1005692}, abstract = {The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. 
Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Eduardo A Aponte; Dominic G Tschan; Klaas E Stephan; Jakob Heinzle Inhibition failures and late errors in the antisaccade task: Influence of cue delay Journal Article Journal of Neurophysiology, 120 (6), pp. 3001–3016, 2018. @article{Aponte2018, title = {Inhibition failures and late errors in the antisaccade task: Influence of cue delay}, author = {Eduardo A Aponte and Dominic G Tschan and Klaas E Stephan and Jakob Heinzle}, doi = {10.1152/jn.00240.2018}, year = {2018}, date = {2018-01-01}, journal = {Journal of Neurophysiology}, volume = {120}, number = {6}, pages = {3001--3016}, abstract = {In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we explored with the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC served also as task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions, and does so better than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. 
In summary, SERIA provides an explanation for the differences between the delayed and nondelayed antisaccade tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we explored with the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC served also as task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions, and does so better than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. In summary, SERIA provides an explanation for the differences between the delayed and nondelayed antisaccade tasks. |
Eduardo A Aponte; Klaas E Stephan; Jakob Heinzle Switch costs in inhibitory control and voluntary behaviour: A computational study of the antisaccade task Journal Article European Journal of Neuroscience, 50 (7), pp. 3205–3220, 2019. @article{Aponte2019, title = {Switch costs in inhibitory control and voluntary behaviour: A computational study of the antisaccade task}, author = {Eduardo A Aponte and Klaas E Stephan and Jakob Heinzle}, doi = {10.1111/ejn.14435}, year = {2019}, date = {2019-01-01}, journal = {European Journal of Neuroscience}, volume = {50}, number = {7}, pages = {3205--3220}, abstract = {An integral aspect of human cognition is the ability to inhibit stimulus-driven, habitual responses, in favour of complex, voluntary actions. In addition, humans can also alternate between different tasks. This comes at the cost of degraded performance when compared to repeating the same task, a phenomenon called the “task-switch cost.” While task switching and inhibitory control have been studied extensively, the interaction between them has received relatively little attention. Here, we used the SERIA model, a computational model of antisaccade behaviour, to draw a bridge between them. We investigated task switching in two versions of the mixed antisaccade task, in which participants are cued to saccade either in the same or in the opposite direction to a peripheral stimulus. SERIA revealed that stopping a habitual action leads to increased inhibitory control that persists onto the next trial, independently of the upcoming trial type. Moreover, switching between tasks induces slower and less accurate voluntary responses compared to repeat trials. However, this only occurs when participants lack the time to prepare the correct response. Altogether, SERIA demonstrates that there is a reconfiguration cost associated with switching between voluntary actions. 
In addition, the enhanced inhibition that follows antisaccade but not prosaccade trials explains asymmetric switch costs. In conclusion, SERIA offers a novel model of task switching that unifies previous theoretical accounts by distinguishing between inhibitory control and voluntary action generation and could help explain similar phenomena in paradigms beyond the antisaccade task.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An integral aspect of human cognition is the ability to inhibit stimulus-driven, habitual responses, in favour of complex, voluntary actions. In addition, humans can also alternate between different tasks. This comes at the cost of degraded performance when compared to repeating the same task, a phenomenon called the “task-switch cost.” While task switching and inhibitory control have been studied extensively, the interaction between them has received relatively little attention. Here, we used the SERIA model, a computational model of antisaccade behaviour, to draw a bridge between them. We investigated task switching in two versions of the mixed antisaccade task, in which participants are cued to saccade either in the same or in the opposite direction to a peripheral stimulus. SERIA revealed that stopping a habitual action leads to increased inhibitory control that persists onto the next trial, independently of the upcoming trial type. Moreover, switching between tasks induces slower and less accurate voluntary responses compared to repeat trials. However, this only occurs when participants lack the time to prepare the correct response. Altogether, SERIA demonstrates that there is a reconfiguration cost associated with switching between voluntary actions. In addition, the enhanced inhibition that follows antisaccade but not prosaccade trials explains asymmetric switch costs. 
In conclusion, SERIA offers a novel model of task switching that unifies previous theoretical accounts by distinguishing between inhibitory control and voluntary action generation and could help explain similar phenomena in paradigms beyond the antisaccade task. |
Ayelet Arazi; Nitzan Censor; Ilan Dinstein Neural variability quenching predicts individual perceptual abilities Journal Article Journal of Neuroscience, 37 (1), pp. 97–109, 2017. @article{Arazi2017a, title = {Neural variability quenching predicts individual perceptual abilities}, author = {Ayelet Arazi and Nitzan Censor and Ilan Dinstein}, doi = {10.1523/JNEUROSCI.1671-16.2017}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neuroscience}, volume = {37}, number = {1}, pages = {97--109}, abstract = {Neural activity during repeated presentations of a sensory stimulus exhibits considerable trial-by-trial variability. Previous studies have reported that trial-by-trial neural variability is reduced (quenched) by the presentation of a stimulus. However, the functional significance and behavioral relevance of variability quenching and the potential physiological mechanisms that may drive it have been studied only rarely. Here, we recorded neural activity with EEG as subjects performed a two-interval forced-choice contrast discrimination task. Trial-by-trial neural variability was quenched by ~40% after the presentation of the stimulus relative to the variability apparent before stimulus presentation, yet there were large differences in the magnitude of variability quenching across subjects. Individual magnitudes of quenching predicted individual discrimination capabilities such that subjects who exhibited larger quenching had smaller contrast discrimination thresholds and steeper psychometric function slopes. Furthermore, the magnitude of variability quenching was strongly correlated with a reduction in broadband EEG power after stimulus presentation. 
Our results suggest that neural variability quenching is achieved by reducing the amplitude of broadband neural oscillations after sensory input, which yields relatively more reproducible cortical activity across trials and enables superior perceptual abilities in individuals who quench more.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Neural activity during repeated presentations of a sensory stimulus exhibits considerable trial-by-trial variability. Previous studies have reported that trial-by-trial neural variability is reduced (quenched) by the presentation of a stimulus. However, the functional significance and behavioral relevance of variability quenching and the potential physiological mechanisms that may drive it have been studied only rarely. Here, we recorded neural activity with EEG as subjects performed a two-interval forced-choice contrast discrimination task. Trial-by-trial neural variability was quenched by ~40% after the presentation of the stimulus relative to the variability apparent before stimulus presentation, yet there were large differences in the magnitude of variability quenching across subjects. Individual magnitudes of quenching predicted individual discrimination capabilities such that subjects who exhibited larger quenching had smaller contrast discrimination thresholds and steeper psychometric function slopes. Furthermore, the magnitude of variability quenching was strongly correlated with a reduction in broadband EEG power after stimulus presentation. Our results suggest that neural variability quenching is achieved by reducing the amplitude of broadband neural oscillations after sensory input, which yields relatively more reproducible cortical activity across trials and enables superior perceptual abilities in individuals who quench more. |
Ayelet Arazi; Yaffa Yeshurun; Ilan Dinstein Neural variability is quenched by attention Journal Article Journal of Neuroscience, 39 (30), pp. 5975–5985, 2019. @article{Arazi2019, title = {Neural variability is quenched by attention}, author = {Ayelet Arazi and Yaffa Yeshurun and Ilan Dinstein}, doi = {10.1523/JNEUROSCI.0355-19.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {30}, pages = {5975--5985}, abstract = {Attention can be subdivided into several components, including alertness and spatial attention. It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention can be subdivided into several components, including alertness and spatial attention. 
It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention. |
Yael Arbel; Emily Feeley; Xinyi He The effect of feedback on attention allocation in category learning: An eye tracking study Journal Article Frontiers in Psychology, 11 , pp. 1–8, 2020. @article{Arbel2020, title = {The effect of feedback on attention allocation in category learning: An eye tracking study}, author = {Yael Arbel and Emily Feeley and Xinyi He}, doi = {10.3389/fpsyg.2020.559334}, year = {2020}, date = {2020-01-01}, journal = {Frontiers in Psychology}, volume = {11}, pages = {1--8}, abstract = {It has been suggested that category learning involves changes in attention allocation based on the relevance of input to the classification. Using eye-gaze measures, Rehder and Hoffman studied changes in attention allocation during category learning in a 5–4 category structure paradigm with four features of varying diagnosticity levels. In this paradigm, participants are tasked with classifying creatures into two groups through trial and error guided by feedback. While learners' eye-gaze patterns have been studied as a function of feature diagnosticity levels throughout the learning process, they have not been evaluated in relation to performance and feedback. The present study borrowed and modified Rehder and Hoffman's category paradigm and evaluated eye-gaze behavior as a function of the diagnosticity level of features, and the valence (positive vs. negative) of the preceding feedback during learning. Our results support Rehder and Hoffman's observations that gaze on the low diagnosticity feature decreased from the beginning to the end of the task. When change in eye gaze behavior was evaluated in relation to feedback, change in fixation probability was found to be greater following negative feedback. 
The results indicate that in a category task that includes performance feedback, learning strategies as indicated by changes in selective attention to features are affected to some degree by the valence of the feedback on a preceding trial.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It has been suggested that category learning involves changes in attention allocation based on the relevance of input to the classification. Using eye-gaze measures, Rehder and Hoffman studied changes in attention allocation during category learning in a 5–4 category structure paradigm with four features of varying diagnosticity levels. In this paradigm, participants are tasked with classifying creatures into two groups through trial and error guided by feedback. While learners' eye-gaze patterns have been studied as a function of feature diagnosticity levels throughout the learning process, they have not been evaluated in relation to performance and feedback. The present study borrowed and modified Rehder and Hoffman's category paradigm and evaluated eye-gaze behavior as a function of the diagnosticity level of features, and the valence (positive vs. negative) of the preceding feedback during learning. Our results support Rehder and Hoffman's observations that gaze on the low diagnosticity feature decreased from the beginning to the end of the task. When change in eye gaze behavior was evaluated in relation to feedback, change in fixation probability was found to be greater following negative feedback. The results indicate that in a category task that includes performance feedback, learning strategies as indicated by changes in selective attention to features are affected to some degree by the valence of the feedback on a preceding trial. |
Fabrice Arcizet; Richard J Krauzlis Covert spatial selection in primate basal ganglia Journal Article PLoS Biology, 16 (10), pp. e2005930, 2018. @article{Arcizet2018, title = {Covert spatial selection in primate basal ganglia}, author = {Fabrice Arcizet and Richard J Krauzlis}, doi = {10.1371/journal.pbio.2005930}, year = {2018}, date = {2018-01-01}, journal = {PLoS Biology}, volume = {16}, number = {10}, pages = {e2005930}, abstract = {The basal ganglia are important for action selection. They are also implicated in perceptual and cognitive functions that seem far removed from motor control. Here, we tested whether the role of the basal ganglia in selection extends to nonmotor aspects of behavior by recording neuronal activity in the caudate nucleus while animals performed a covert spatial attention task. We found that caudate neurons strongly select the spatial location of the relevant stimulus throughout the task even in the absence of any overt action. This spatially selective activity was dependent on task and visual conditions and could be dissociated from goal-directed actions. Caudate activity was also sufficient to correctly identify every epoch in the covert attention task. These results provide a novel perspective on mechanisms of attention by demonstrating that the basal ganglia are involved in spatial selection and tracking of behavioral states even in the absence of overt orienting movements.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The basal ganglia are important for action selection. They are also implicated in perceptual and cognitive functions that seem far removed from motor control. Here, we tested whether the role of the basal ganglia in selection extends to nonmotor aspects of behavior by recording neuronal activity in the caudate nucleus while animals performed a covert spatial attention task. 
We found that caudate neurons strongly select the spatial location of the relevant stimulus throughout the task even in the absence of any overt action. This spatially selective activity was dependent on task and visual conditions and could be dissociated from goal-directed actions. Caudate activity was also sufficient to correctly identify every epoch in the covert attention task. These results provide a novel perspective on mechanisms of attention by demonstrating that the basal ganglia are involved in spatial selection and tracking of behavioral states even in the absence of overt orienting movements. |
Scott P Ardoin; Katherine S Binder; Andrea M Zawoyski; Tori E Foster Examining the maintenance and generalization effects of repeated practice: A comparison of three interventions Journal Article Journal of School Psychology, 68 , pp. 1–18, 2018. @article{Ardoin2018, title = {Examining the maintenance and generalization effects of repeated practice: A comparison of three interventions}, author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Tori E Foster}, doi = {10.1016/j.jsp.2017.12.002}, year = {2018}, date = {2018-01-01}, journal = {Journal of School Psychology}, volume = {68}, pages = {1--18}, abstract = {Repeated reading (RR) procedures are consistent with the procedures recommended by Haring and Eaton's (1978) Instructional Hierarchy (IH) for promoting students' fluent responding to newly learned stimuli. It is therefore not surprising that an extensive body of literature exists, which supports RR as an effective practice for promoting students' reading fluency of practiced passages. Less clear, however, is the extent to which RR helps students read the words practiced in an intervention passage when those same words are presented in a new passage. The current study employed randomized control design procedures to examine the maintenance and generalization effects of three interventions that were designed based upon Haring and Eaton's (1978) IH. Across four days, students either practiced reading (a) the same passage seven times (RR+RR), (b) one passage four times and three passages each once (RR+Guided Wide Reading [GWR]), or (c) seven passages each once (GWR+GWR). Students participated in the study across 2 weeks, with intervention being provided on a different passage set each week. All passages practiced within a week, regardless of condition, contained four target low frequency and four high frequency words. 
Across the 130 students for whom data were analyzed, results indicated that increased opportunities to practice words led to greater maintenance effects when passages were read seven days later but revealed minimal differences across conditions in students' reading of target words presented within a generalization passage.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Repeated reading (RR) procedures are consistent with the procedures recommended by Haring and Eaton's (1978) Instructional Hierarchy (IH) for promoting students' fluent responding to newly learned stimuli. It is therefore not surprising that an extensive body of literature exists, which supports RR as an effective practice for promoting students' reading fluency of practiced passages. Less clear, however, is the extent to which RR helps students read the words practiced in an intervention passage when those same words are presented in a new passage. The current study employed randomized control design procedures to examine the maintenance and generalization effects of three interventions that were designed based upon Haring and Eaton's (1978) IH. Across four days, students either practiced reading (a) the same passage seven times (RR+RR), (b) one passage four times and three passages each once (RR+Guided Wide Reading [GWR]), or (c) seven passages each once (GWR+GWR). Students participated in the study across 2 weeks, with intervention being provided on a different passage set each week. All passages practiced within a week, regardless of condition, contained four target low frequency and four high frequency words. Across the 130 students for whom data were analyzed, results indicated that increased opportunities to practice words led to greater maintenance effects when passages were read seven days later but revealed minimal differences across conditions in students' reading of target words presented within a generalization passage. |
Ana Beatriz Arêas Da Luz Fontes; Ana Isabel Schwartz Bilingual access of homonym meanings: Individual differences in bilingual access of homonym meanings Journal Article Bilingualism, 18 (4), pp. 639–656, 2015. @article{AreasDaLuzFontes2015, title = {Bilingual access of homonym meanings: Individual differences in bilingual access of homonym meanings}, author = {Ana Beatriz {Ar{ê}as Da Luz Fontes} and Ana Isabel Schwartz}, doi = {10.1017/S1366728914000509}, year = {2015}, date = {2015-01-01}, journal = {Bilingualism}, volume = {18}, number = {4}, pages = {639--656}, abstract = {The goal of the present study was to identify the cognitive processes that underlie lexical ambiguity resolution in a second language (L2). We examined which cognitive factors predict the efficiency in accessing subordinate meanings of L2 homonyms in a sample of highly-proficient, Spanish-English bilinguals. The predictive ability of individual differences in (1) homonym processing in the L1, (2) working memory capacity and (3) sensitivity to cross-language form overlap was examined. In two experiments, participants were presented with cognate and noncognate homonyms as either a prime in a lexical decision task (Experiment 1) or embedded in a sentence (Experiment 2). In both experiments speed and accuracy in accessing subordinate meanings in the L1 was the strongest predictor of speed and accuracy in accessing subordinate meanings in the L2. Sensitivity to cross-language form overlap predicted performance in lexical decision while working memory capacity predicted processing in sentence comprehension.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The goal of the present study was to identify the cognitive processes that underlie lexical ambiguity resolution in a second language (L2). We examined which cognitive factors predict the efficiency in accessing subordinate meanings of L2 homonyms in a sample of highly-proficient, Spanish-English bilinguals. 
The predictive ability of individual differences in (1) homonym processing in the L1, (2) working memory capacity and (3) sensitivity to cross-language form overlap was examined. In two experiments, participants were presented with cognate and noncognate homonyms as either a prime in a lexical decision task (Experiment 1) or embedded in a sentence (Experiment 2). In both experiments speed and accuracy in accessing subordinate meanings in the L1 was the strongest predictor of speed and accuracy in accessing subordinate meanings in the L2. Sensitivity to cross-language form overlap predicted performance in lexical decision while working memory capacity predicted processing in sentence comprehension. |
Joseph Arizpe; Dwight J Kravitz; Galit Yovel; Chris I Baker Start position strongly influences fixation patterns during face processing: Difficulties with eye movements as a measure of information use Journal Article PLoS ONE, 7 (2), pp. e31106, 2012. @article{Arizpe2012, title = {Start position strongly influences fixation patterns during face processing: Difficulties with eye movements as a measure of information use}, author = {Joseph Arizpe and Dwight J Kravitz and Galit Yovel and Chris I Baker}, doi = {10.1371/journal.pone.0031106}, year = {2012}, date = {2012-01-01}, journal = {PLoS ONE}, volume = {7}, number = {2}, pages = {e31106}, abstract = {Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. 
Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. 
Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale. |
Joseph M Arizpe; Vincent Walsh; Chris I Baker Characteristic visuomotor influences on eye-movement patterns to faces and other high level stimuli Journal Article Frontiers in Psychology, 6 , pp. 1–14, 2015. @article{Arizpe2015, title = {Characteristic visuomotor influences on eye-movement patterns to faces and other high level stimuli}, author = {Joseph M Arizpe and Vincent Walsh and Chris I Baker}, doi = {10.3389/fpsyg.2015.01027}, year = {2015}, date = {2015-01-01}, journal = {Frontiers in Psychology}, volume = {6}, pages = {1--14}, abstract = {Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. 
However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Eye-movement patterns are often utilized in studies of visual perception as indices of the specific information extracted to efficiently process a given stimulus during a given task. Our prior work, however, revealed that not only the stimulus and task influence eye-movements, but that visuomotor (start position) factors also robustly and characteristically influence eye-movement patterns to faces (Arizpe et al., 2012). Here we manipulated lateral starting side and distance from the midline of face and line-symmetrical control (butterfly) stimuli in order to further investigate the nature and generality of such visuomotor influences. First we found that increasing starting distance from midline (4°, 8°, 12°, and 16° visual angle) strongly and proportionately increased the distance of the first ordinal fixation from midline. We did not find influences of starting distance on subsequent fixations, however, suggesting that eye-movement plans are not strongly affected by starting distance following an initial orienting fixation. 
Further, we replicated our prior effect of starting side (left, right) to induce a spatially contralateral tendency of fixations after the first ordinal fixation. However, we also established that these visuomotor influences did not depend upon the predictability of the location of the upcoming stimulus, and were present not only for face stimuli but also for our control stimulus category (butterflies). We found a correspondence in overall left-lateralized fixation tendency between faces and butterflies. Finally, for faces, we found a relationship between left starting side (right sided fixation pattern tendency) and increased recognition performance, which likely reflects a cortical right hemisphere (left visual hemifield) advantage for face perception. These results further indicate the importance of considering and controlling for visuomotor influences in the design, analysis, and interpretation of eye-movement studies. |
Joseph Arizpe; Dwight J Kravitz; Vincent Walsh; Galit Yovel; Chris I Baker Differences in looking at own- and other-race faces are subtle and analysis-dependent: An account of discrepant reports Journal Article PLoS ONE, 11 (2), pp. e0148253, 2016. @article{Arizpe2016, title = {Differences in looking at own- and other-race faces are subtle and analysis-dependent: An account of discrepant reports}, author = {Joseph Arizpe and Dwight J Kravitz and Vincent Walsh and Galit Yovel and Chris I Baker}, doi = {10.1371/journal.pone.0148253}, year = {2016}, date = {2016-01-01}, journal = {PLoS ONE}, volume = {11}, number = {2}, pages = {e0148253}, abstract = {The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using}, keywords = {}, pubstate = {published}, tppubtype = {article} } The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. 
Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using |
Joseph M Arizpe; Danielle L McKean; Jack W Tsao; Annie W -Y Chan Where you look matters for body perception: Preferred gaze location contributes to the body inversion effect Journal Article PLoS ONE, 12 (1), pp. e0169148, 2017. @article{Arizpe2017, title = {Where you look matters for body perception: Preferred gaze location contributes to the body inversion effect}, author = {Joseph M Arizpe and Danielle L McKean and Jack W Tsao and Annie W -Y Chan}, doi = {10.1371/journal.pone.0169148}, year = {2017}, date = {2017-01-01}, journal = {PLoS ONE}, volume = {12}, number = {1}, pages = {e0169148}, abstract = {The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. 
Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations. |
Joseph Arizpe; Vincent Walsh; Galit Yovel; Chris I Baker The categories, frequencies, and stability of idiosyncratic eye-movement patterns to faces Journal Article Vision Research, 141 , pp. 191–203, 2017. @article{Arizpe2017a, title = {The categories, frequencies, and stability of idiosyncratic eye-movement patterns to faces}, author = {Joseph Arizpe and Vincent Walsh and Galit Yovel and Chris I Baker}, doi = {10.1016/j.visres.2016.10.013}, year = {2017}, date = {2017-01-01}, journal = {Vision Research}, volume = {141}, pages = {191--203}, publisher = {Elsevier Ltd}, abstract = {The spatial pattern of eye-movements to faces considered typical for neurologically healthy individuals is a roughly T-shaped distribution over the internal facial features with peak fixation density tending toward the left eye (observer's perspective). However, recent studies indicate that striking deviations from this classic pattern are common within the population and are highly stable over time. The classic pattern actually reflects the average of these various idiosyncratic eye-movement patterns across individuals. The natural categories and respective frequencies of different types of idiosyncratic eye-movement patterns have not been specifically investigated before, so here we analyzed the spatial patterns of eye-movements for 48 participants to estimate the frequency of different kinds of individual eye-movement patterns to faces in the normal healthy population. Four natural clusters were discovered such that approximately 25% of our participants' fixation density peaks clustered over the left eye region (observer's perspective), 23% over the right eye region, 31% over the nasion/bridge region of the nose, and 20% over the region spanning the nose, philtrum, and upper lips. We did not find any relationship between particular idiosyncratic eye-movement patterns and recognition performance. 
Individuals' eye-movement patterns early in a trial were more stereotyped than later ones and idiosyncratic fixation patterns evolved with time into a trial. Finally, while face inversion strongly modulated eye-movement patterns, individual patterns did not become less distinct for inverted compared to upright faces. Group-averaged fixation patterns do not represent individual patterns well, so exploration of such individual patterns is of value for future studies of visual cognition.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The spatial pattern of eye-movements to faces considered typical for neurologically healthy individuals is a roughly T-shaped distribution over the internal facial features with peak fixation density tending toward the left eye (observer's perspective). However, recent studies indicate that striking deviations from this classic pattern are common within the population and are highly stable over time. The classic pattern actually reflects the average of these various idiosyncratic eye-movement patterns across individuals. The natural categories and respective frequencies of different types of idiosyncratic eye-movement patterns have not been specifically investigated before, so here we analyzed the spatial patterns of eye-movements for 48 participants to estimate the frequency of different kinds of individual eye-movement patterns to faces in the normal healthy population. Four natural clusters were discovered such that approximately 25% of our participants' fixation density peaks clustered over the left eye region (observer's perspective), 23% over the right eye region, 31% over the nasion/bridge region of the nose, and 20% over the region spanning the nose, philtrum, and upper lips. We did not find any relationship between particular idiosyncratic eye-movement patterns and recognition performance. 
Individuals' eye-movement patterns early in a trial were more stereotyped than later ones and idiosyncratic fixation patterns evolved with time into a trial. Finally, while face inversion strongly modulated eye-movement patterns, individual patterns did not become less distinct for inverted compared to upright faces. Group-averaged fixation patterns do not represent individual patterns well, so exploration of such individual patterns is of value for future studies of visual cognition. |
Joseph M Arizpe; Danielle L Noles; Jack W Tsao; Annie W Y Chan Eye movement dynamics differ between encoding and recognition of faces Journal Article Vision, 3 , pp. 9, 2019. @article{Arizpe2019, title = {Eye movement dynamics differ between encoding and recognition of faces}, author = {Joseph M Arizpe and Danielle L Noles and Jack W Tsao and Annie W Y Chan}, doi = {10.3390/vision3010009}, year = {2019}, date = {2019-01-01}, journal = {Vision}, volume = {3}, pages = {9}, abstract = {Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be rapidly achieved within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having eye movements tracked, participants completed an encoding (“study”) phase and subsequent recognition (“test”) phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye-movements. Most importantly, in behavior, the long study phase presentation time alone caused improved recognition performance (i.e., longer time at recognition did not improve performance), revealing that encoding is not as rapid as recognition, since longer sequences of eye-movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. 
Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be rapidly achieved within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having eye movements tracked, participants completed an encoding (“study”) phase and subsequent recognition (“test”) phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye-movements. Most importantly, in behavior, the long study phase presentation time alone caused improved recognition performance (i.e., longer time at recognition did not improve performance), revealing that encoding is not as rapid as recognition, since longer sequences of eye-movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations. |
Kiki Arkesteijn; Artem V Belopolsky; Jeroen B J Smeets; Mieke Donk The limits of predictive remapping of attention across eye movements Journal Article Frontiers in Psychology, 10 , pp. 1–10, 2019. @article{Arkesteijn2019, title = {The limits of predictive remapping of attention across eye movements}, author = {Kiki Arkesteijn and Artem V Belopolsky and Jeroen B J Smeets and Mieke Donk}, doi = {10.3389/fpsyg.2019.01146}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--10}, abstract = {With every eye movement, visual input projected onto our retina changes drastically. The fundamental question of how we keep track of relevant objects and movement targets has puzzled scientists for more than a century. Recent advances suggested that this can be accomplished through the process of predictive remapping of visual attention to the future post-saccadic locations of relevant objects. Evidence for the existence of predictive remapping of attention was first provided by Rolfs et al. (2011) (Nature Neuroscience, 14, 252-256). However, they used a single distant control location away from the task-relevant locations, which could have biased the allocation of visual attention. In this study we used a similar experimental paradigm as Rolfs et al. (2011), but probed attention equally likely at all possible locations. Our results showed that discrimination performance was higher at the remapped location than at a distant control location, but not compared to the other two control locations. A re-analysis of the results obtained by Rolfs et al. (2011) revealed a similar pattern. 
Together, these findings suggest that it is likely that previous reports of the predictive remapping of attention were due to a diffuse spread of attention to the task-relevant locations rather than to a specific shift toward the target's future retinotopic location.}, keywords = {}, pubstate = {published}, tppubtype = {article} } With every eye movement, visual input projected onto our retina changes drastically. The fundamental question of how we keep track of relevant objects and movement targets has puzzled scientists for more than a century. Recent advances suggested that this can be accomplished through the process of predictive remapping of visual attention to the future post-saccadic locations of relevant objects. Evidence for the existence of predictive remapping of attention was first provided by Rolfs et al. (2011) (Nature Neuroscience, 14, 252-256). However, they used a single distant control location away from the task-relevant locations, which could have biased the allocation of visual attention. In this study we used a similar experimental paradigm as Rolfs et al. (2011), but probed attention equally likely at all possible locations. Our results showed that discrimination performance was higher at the remapped location than at a distant control location, but not compared to the other two control locations. A re-analysis of the results obtained by Rolfs et al. (2011) revealed a similar pattern. Together, these findings suggest that it is likely that previous reports of the predictive remapping of attention were due to a diffuse spread of attention to the task-relevant locations rather than to a specific shift toward the target's future retinotopic location. |
Sophie C Arkin; Daniel Ruiz-Betancourt; Emery C Jamerson; Roland T Smith; Nicole E Strauss; Casimir C Klim; Daniel C Javitt; Gaurav H Patel Deficits and compensation: Attentional control cortical networks in schizophrenia Journal Article NeuroImage: Clinical, 27 , pp. 1–10, 2020. @article{Arkin2020, title = {Deficits and compensation: Attentional control cortical networks in schizophrenia}, author = {Sophie C Arkin and Daniel Ruiz-Betancourt and Emery C Jamerson and Roland T Smith and Nicole E Strauss and Casimir C Klim and Daniel C Javitt and Gaurav H Patel}, doi = {10.1016/j.nicl.2020.102348}, year = {2020}, date = {2020-01-01}, journal = {NeuroImage: Clinical}, volume = {27}, pages = {1--10}, abstract = {Visual processing and attention deficits are responsible for a substantial portion of the disability caused by schizophrenia, but the source of these deficits remains unclear. In 35 schizophrenia patients (SzP) and 34 healthy controls (HC), we used a rapid serial visual presentation (RSVP) visual search task designed to activate/deactivate the cortical components of the attentional control system (i.e. the dorsal and ventral attention networks, lateral prefrontal regions in the frontoparietal network, and cingulo-opercular/salience networks), along with resting state functional connectivity, to examine the integrity of these components. While we find that behavioral performance and activation/deactivation of the RSVP task are largely similar between groups, SzP exhibited decreased functional connectivity within late visual components and between prefrontal and other components. We also find that performance correlates with the deactivation of the ventral attention network in SzP only. This relationship is mediated by the functional connectivity of critical components of the attentional control system. In summary, our results suggest that the attentional control system is potentially used to compensate for visual cortex deficits. 
Furthermore, prefrontal deficits in SzP may interfere with this compensatory use of the attentional control system. In addition to highlighting focal deficits and potential compensatory mechanisms in visual processing and attention, our findings point to the attentional control system as a potential target for rehabilitation and neuromodulation-based treatments for visual processing deficits in SzP.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual processing and attention deficits are responsible for a substantial portion of the disability caused by schizophrenia, but the source of these deficits remains unclear. In 35 schizophrenia patients (SzP) and 34 healthy controls (HC), we used a rapid serial visual presentation (RSVP) visual search task designed to activate/deactivate the cortical components of the attentional control system (i.e. the dorsal and ventral attention networks, lateral prefrontal regions in the frontoparietal network, and cingulo-opercular/salience networks), along with resting state functional connectivity, to examine the integrity of these components. While we find that behavioral performance and activation/deactivation of the RSVP task are largely similar between groups, SzP exhibited decreased functional connectivity within late visual components and between prefrontal and other components. We also find that performance correlates with the deactivation of the ventral attention network in SzP only. This relationship is mediated by the functional connectivity of critical components of the attentional control system. In summary, our results suggest that the attentional control system is potentially used to compensate for visual cortex deficits. Furthermore, prefrontal deficits in SzP may interfere with this compensatory use of the attentional control system. 
In addition to highlighting focal deficits and potential compensatory mechanisms in visual processing and attention, our findings point to the attentional control system as a potential target for rehabilitation and neuromodulation-based treatments for visual processing deficits in SzP. |
Michael J Armson; Jennifer D Ryan; Brian Levine Maintaining fixation does not increase demands on working memory relative to free viewing Journal Article PeerJ, 7 , pp. 1–16, 2019. @article{Armson2019, title = {Maintaining fixation does not increase demands on working memory relative to free viewing}, author = {Michael J Armson and Jennifer D Ryan and Brian Levine}, doi = {10.7717/peerj.6839}, year = {2019}, date = {2019-04-01}, journal = {PeerJ}, volume = {7}, pages = {1--16}, publisher = {PeerJ Inc.}, abstract = {The comparison of memory performance during free and fixed viewing conditions has been used to demonstrate the involvement of eye movements in memory encoding and retrieval, with stronger effects at encoding than retrieval. Relative to conditions of free viewing, participants generally show reduced memory performance following sustained fixation, suggesting that unrestricted eye movements benefit memory. However, the cognitive basis of the memory reduction during fixed viewing is uncertain, with possible mechanisms including disruption of visual-mnemonic and/or imagery processes with sustained fixation, or greater working memory demands required for fixed relative to free viewing. To investigate one possible mechanism for this reduction, we had participants perform a working memory task—an auditory n-back task—during free and fixed viewing, as well as a repetitive finger tapping condition, included to isolate the effects of motor interference independent of the oculomotor system. As expected, finger tapping significantly interfered with n-back performance relative to free viewing, as indexed by a decrease in accuracy and increase in response times. By contrast, there was no evidence that fixed viewing interfered with n-back performance relative to free viewing. Our findings failed to support a hypothesis of increased working memory load during fixation. 
They are consistent with the notion that fixation disrupts long-term memory performance through interference with visual processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The comparison of memory performance during free and fixed viewing conditions has been used to demonstrate the involvement of eye movements in memory encoding and retrieval, with stronger effects at encoding than retrieval. Relative to conditions of free viewing, participants generally show reduced memory performance following sustained fixation, suggesting that unrestricted eye movements benefit memory. However, the cognitive basis of the memory reduction during fixed viewing is uncertain, with possible mechanisms including disruption of visual-mnemonic and/or imagery processes with sustained fixation, or greater working memory demands required for fixed relative to free viewing. To investigate one possible mechanism for this reduction, we had participants perform a working memory task—an auditory n-back task—during free and fixed viewing, as well as a repetitive finger tapping condition, included to isolate the effects of motor interference independent of the oculomotor system. As expected, finger tapping significantly interfered with n-back performance relative to free viewing, as indexed by a decrease in accuracy and increase in response times. By contrast, there was no evidence that fixed viewing interfered with n-back performance relative to free viewing. Our findings failed to support a hypothesis of increased working memory load during fixation. They are consistent with the notion that fixation disrupts long-term memory performance through interference with visual processes. |
Michael J Armson; Nicholas B Diamond; Laryssa Levesque; Jennifer D Ryan; Brian Levine Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory Journal Article Cognition, 206 , pp. 1–8, 2021. @article{Armson2021, title = {Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory}, author = {Michael J Armson and Nicholas B Diamond and Laryssa Levesque and Jennifer D Ryan and Brian Levine}, doi = {10.1016/j.cognition.2020.104487}, year = {2021}, date = {2021-01-01}, journal = {Cognition}, volume = {206}, pages = {1--8}, publisher = {Elsevier}, abstract = {There are marked individual differences in the recollection of personal past events or autobiographical memory (AM). Theory concerning the relationship between mnemonic and visual systems suggests that eye movements promote retrieval of spatiotemporal details from memory, yet assessment of this prediction within naturalistic AM has been limited. We examined the relationship of eye movements to free recall of naturalistic AM and how this relationship is modulated by individual differences in AM capacity. Participants freely recalled past episodes while viewing a blank screen under free and fixed viewing conditions. Memory performance was quantified with the Autobiographical Interview, which separates internal (episodic) and external (non-episodic) details. In Study 1, as a proof of concept, fixation rate was predictive of the number of internal (but not external) details recalled across both free and fixed viewing. In Study 2, using an experimenter-controlled staged event (a museum-style tour), the effect of fixations on free recall of internal (but not external) details was again observed. 
In this second study, however, the fixation-recall relationship was modulated by individual differences in autobiographical memory, such that the coupling between fixations and internal details was greater for those endorsing higher than lower episodic AM. These results suggest that those with congenitally strong AM rely on the visual system to produce episodic details, whereas those with lower AM retrieve such details via other mechanisms.}, keywords = {}, pubstate = {published}, tppubtype = {article} } There are marked individual differences in the recollection of personal past events or autobiographical memory (AM). Theory concerning the relationship between mnemonic and visual systems suggests that eye movements promote retrieval of spatiotemporal details from memory, yet assessment of this prediction within naturalistic AM has been limited. We examined the relationship of eye movements to free recall of naturalistic AM and how this relationship is modulated by individual differences in AM capacity. Participants freely recalled past episodes while viewing a blank screen under free and fixed viewing conditions. Memory performance was quantified with the Autobiographical Interview, which separates internal (episodic) and external (non-episodic) details. In Study 1, as a proof of concept, fixation rate was predictive of the number of internal (but not external) details recalled across both free and fixed viewing. In Study 2, using an experimenter-controlled staged event (a museum-style tour), the effect of fixations on free recall of internal (but not external) details was again observed. In this second study, however, the fixation-recall relationship was modulated by individual differences in autobiographical memory, such that the coupling between fixations and internal details was greater for those endorsing higher than lower episodic AM. 
These results suggest that those with congenitally strong AM rely on the visual system to produce episodic details, whereas those with lower AM retrieve such details via other mechanisms. |
I T Armstrong; Douglas P Munoz Inhibitory control of eye movements during oculomotor countermanding in adults with attention-deficit hyperactivity disorder Journal Article Experimental Brain Research, 152 (4), pp. 444–452, 2003. @article{Armstrong2003a, title = {Inhibitory control of eye movements during oculomotor countermanding in adults with attention-deficit hyperactivity disorder}, author = {I T Armstrong and Douglas P Munoz}, doi = {10.1007/s00221-003-1569-3}, year = {2003}, date = {2003-01-01}, journal = {Experimental Brain Research}, volume = {152}, number = {4}, pages = {444--452}, abstract = {Children with attention-deficit hyperactivity disorder (ADHD) are impulsive, and that impulsiveness can be measured using a countermanding task. Although the overt behaviors of ADHD attenuate with age, it is not clear how well impulsiveness is controlled in adults with ADHD. We tested ADHD adults with an oculomotor countermanding task. The task included two conditions: on 75% of the trials, participants viewed a central fixation marker and then looked to an eccentric target that appeared simultaneous with the disappearance of the fixation marker; on 25% of the trials, a signal was presented at variable delays after target appearance. The signal instructed subjects to stop, or countermand, an eye movement to the target. A correct movement in this case would be to hold gaze at the central fixation location. We expected ADHD participants to be impulsive in their countermanding performance. Additionally, we expected that a visual stop signal at the central fixation location would assist oculomotor countermanding because the signal is presented in the "stop" location, at fixation. To test whether a central stop signal positively biased countermanding, we used three types of stop signal to instruct the stop: a central visual marker, a peripheral visual signal, and a non-localized sound. All subjects performed best with the central visual stop signal. 
Subjects with ADHD were less able to countermand eye movements and were influenced more negatively by the non-central signals. Oculomotor countermanding may be useful for quantifying impulsive dysfunction in adults with ADHD, especially if a non-central stop signal is applied.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Children with attention-deficit hyperactivity disorder (ADHD) are impulsive, and that impulsiveness can be measured using a countermanding task. Although the overt behaviors of ADHD attenuate with age, it is not clear how well impulsiveness is controlled in adults with ADHD. We tested ADHD adults with an oculomotor countermanding task. The task included two conditions: on 75% of the trials, participants viewed a central fixation marker and then looked to an eccentric target that appeared simultaneously with the disappearance of the fixation marker; on 25% of the trials, a signal was presented at variable delays after target appearance. The signal instructed subjects to stop, or countermand, an eye movement to the target. A correct movement in this case would be to hold gaze at the central fixation location. We expected ADHD participants to be impulsive in their countermanding performance. Additionally, we expected that a visual stop signal at the central fixation location would assist oculomotor countermanding because the signal is presented in the "stop" location, at fixation. To test whether a central stop signal positively biased countermanding, we used three types of stop signal to instruct the stop: a central visual marker, a peripheral visual signal, and a non-localized sound. All subjects performed best with the central visual stop signal. Subjects with ADHD were less able to countermand eye movements and were influenced more negatively by the non-central signals. Oculomotor countermanding may be useful for quantifying impulsive dysfunction in adults with ADHD, especially if a non-central stop signal is applied. |
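The countermanding performance described in the Armstrong & Munoz abstract above is conventionally summarized by the stop-signal reaction time (SSRT). A minimal sketch of the standard integration method follows; the go-trial RTs, stop-signal delay, and stop-failure probability are made-up illustrative values, not data from the study:

```python
# Estimate stop-signal reaction time (SSRT) via the integration method.
# All numbers below are hypothetical, for illustration only.

def ssrt_integration(go_rts, ssd, p_respond):
    """SSRT = nth fastest go RT minus the stop-signal delay (SSD),
    where n = p(respond | stop signal) * number of go trials."""
    rts = sorted(go_rts)
    n = int(round(p_respond * len(rts)))
    n = min(max(n, 1), len(rts))  # clamp to a valid rank
    return rts[n - 1] - ssd

go_rts = [250, 260, 270, 280, 290, 300, 310, 320, 330, 340]  # ms
print(ssrt_integration(go_rts, ssd=150, p_respond=0.5))  # 5th fastest RT (290) - 150 = 140
```

Under the race model this estimates the latency of the internal stop process; longer SSRTs indicate poorer inhibitory control, the deficit the study probes in adults with ADHD.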
Thomas Armstrong; Mira Engel; Trevor Press; Anneka Sonstroem; Julian Reed Fast-forwarding disgust conditioning: US pre-exposure facilitates the acquisition of oculomotor avoidance Journal Article Motivation and Emotion, 43 (4), pp. 681–695, 2019. @article{Armstrong2019, title = {Fast-forwarding disgust conditioning: US pre-exposure facilitates the acquisition of oculomotor avoidance}, author = {Thomas Armstrong and Mira Engel and Trevor Press and Anneka Sonstroem and Julian Reed}, doi = {10.1007/s11031-019-09770-0}, year = {2019}, date = {2019-01-01}, journal = {Motivation and Emotion}, volume = {43}, number = {4}, pages = {681--695}, publisher = {Springer US}, abstract = {During human development, disgust is acquired to a broad range of stimuli, from rotting food to moral transgressions. Disgust's expansion surely involves associative learning, yet little is known about Pavlovian disgust conditioning. The present study examined conditioned disgust responding as revealed by oculomotor avoidance, the tendency to look away from offensive stimuli. In two experiments, oculomotor avoidance was acquired to a neutral image associated with a disgusting image. However, to our surprise, participants initially dwelled on disgusting images, avoiding them only after multiple exposures. In Experiment 1, this “rubbernecking” response delayed oculomotor avoidance of the associated neutral image. In Experiment 2, we exhausted rubbernecking prior to conditioning by repeatedly exposing participants to the disgusting images. This procedure elicited earlier oculomotor avoidance of the associated neutral stimulus, essentially fast-forwarding conditioning. These findings reveal competing motivational tendencies elicited by disgust stimuli that complicate associative disgust learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During human development, disgust is acquired to a broad range of stimuli, from rotting food to moral transgressions. 
Disgust's expansion surely involves associative learning, yet little is known about Pavlovian disgust conditioning. The present study examined conditioned disgust responding as revealed by oculomotor avoidance, the tendency to look away from offensive stimuli. In two experiments, oculomotor avoidance was acquired to a neutral image associated with a disgusting image. However, to our surprise, participants initially dwelled on disgusting images, avoiding them only after multiple exposures. In Experiment 1, this “rubbernecking” response delayed oculomotor avoidance of the associated neutral image. In Experiment 2, we exhausted rubbernecking prior to conditioning by repeatedly exposing participants to the disgusting images. This procedure elicited earlier oculomotor avoidance of the associated neutral stimulus, essentially fast-forwarding conditioning. These findings reveal competing motivational tendencies elicited by disgust stimuli that complicate associative disgust learning. |
Nathan Arnett; Matthew Wagers Subject encodings and retrieval interference Journal Article Journal of Memory and Language, 93 , pp. 22–54, 2017. @article{Arnett2017, title = {Subject encodings and retrieval interference}, author = {Nathan Arnett and Matthew Wagers}, doi = {10.1016/j.jml.2016.07.005}, year = {2017}, date = {2017-01-01}, journal = {Journal of Memory and Language}, volume = {93}, pages = {22--54}, abstract = {Interference has been identified as a cause of processing difficulty in linguistic dependencies, such as the subject-verb relation (Van Dyke and Lewis, 2003). However, while mounting evidence implicates retrieval interference in sentence processing, the nature of the retrieval cues involved - and thus the source of difficulty - remains largely unexplored. Three experiments used self-paced reading and eyetracking to examine the ways in which the retrieval cues provided at a verb characterize subjects. Syntactic theory has identified a number of properties correlated with subjecthood, both phrase-structural and thematic. Findings replicate and extend previous findings of interference at a verb from additional subjects, but indicate that retrieval outcomes are relativized to the syntactic domain in which the retrieval occurs. One, the cues distinguish between thematic subjects in verbal and nominal domains. Two, within the verbal domain, retrieval is sensitive to abstract syntactic properties associated with subjects and their clauses. We argue that the processing at a verb requires cue-driven retrieval, and that the retrieval cues utilize abstract grammatical properties which may reflect parser expectations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Interference has been identified as a cause of processing difficulty in linguistic dependencies, such as the subject-verb relation (Van Dyke and Lewis, 2003). 
However, while mounting evidence implicates retrieval interference in sentence processing, the nature of the retrieval cues involved - and thus the source of difficulty - remains largely unexplored. Three experiments used self-paced reading and eyetracking to examine the ways in which the retrieval cues provided at a verb characterize subjects. Syntactic theory has identified a number of properties correlated with subjecthood, both phrase-structural and thematic. Findings replicate and extend previous findings of interference at a verb from additional subjects, but indicate that retrieval outcomes are relativized to the syntactic domain in which the retrieval occurs. One, the cues distinguish between thematic subjects in verbal and nominal domains. Two, within the verbal domain, retrieval is sensitive to abstract syntactic properties associated with subjects and their clauses. We argue that the processing at a verb requires cue-driven retrieval, and that the retrieval cues utilize abstract grammatical properties which may reflect parser expectations. |
Jennifer E Arnold; Shin Yi C Lao Effects of psychological attention on pronoun comprehension Journal Article Language, Cognition and Neuroscience, 30 (7), pp. 832–852, 2015. @article{Arnold2015b, title = {Effects of psychological attention on pronoun comprehension}, author = {Jennifer E Arnold and Shin Yi C Lao}, doi = {10.1080/23273798.2015.1017511}, year = {2015}, date = {2015-01-01}, journal = {Language, Cognition and Neuroscience}, volume = {30}, number = {7}, pages = {832--852}, publisher = {Taylor & Francis}, abstract = {Pronoun comprehension is facilitated for referents that are focused in the discourse context. Discourse focus has been described as a function of attention, especially shared attention, but few studies have explicitly tested this idea. Two experiments used an exogenous capture cue paradigm to demonstrate that listeners' visual attention at the onset of a story influences their preferences during pronoun resolution later in the story. In both experiments trial-initial attention modulated listeners' transitory biases while considering referents for the pronoun, whether it was in response to the capture cue or not. These biases even had a small influence on listeners' final interpretation of the pronoun. These results provide independently motivated evidence that the listener's attention influences the online processes of pronoun comprehension. Trial-initial attentional shifts were made on the basis of non-shared, private information, demonstrating that attentional effects on pronoun comprehension are not restricted to shared attention among interlocutors.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Pronoun comprehension is facilitated for referents that are focused in the discourse context. Discourse focus has been described as a function of attention, especially shared attention, but few studies have explicitly tested this idea. 
Two experiments used an exogenous capture cue paradigm to demonstrate that listeners' visual attention at the onset of a story influences their preferences during pronoun resolution later in the story. In both experiments trial-initial attention modulated listeners' transitory biases while considering referents for the pronoun, whether it was in response to the capture cue or not. These biases even had a small influence on listeners' final interpretation of the pronoun. These results provide independently motivated evidence that the listener's attention influences the online processes of pronoun comprehension. Trial-initial attentional shifts were made on the basis of non-shared, private information, demonstrating that attentional effects on pronoun comprehension are not restricted to shared attention among interlocutors. |
David M Arnoldussen; Jeroen Goossens; Albert V van Den Berg Dissociation of retinal and headcentric disparity signals in dorsal human cortex Journal Article Frontiers in Systems Neuroscience, 9 , pp. 16, 2015. @article{Arnoldussen2015, title = {Dissociation of retinal and headcentric disparity signals in dorsal human cortex}, author = {David M Arnoldussen and Jeroen Goossens and Albert V {van Den Berg}}, doi = {10.3389/fnsys.2015.00016}, year = {2015}, date = {2015-01-01}, journal = {Frontiers in Systems Neuroscience}, volume = {9}, pages = {16}, abstract = {Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT(+)), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT(+), headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. 
This is the first report of a vertical-size disparity correlate in human striate cortex.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT(+)), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT(+), headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex. |
Daniel S Asfaw; Pete R Jones; M M Vera; Nicholas D Smith; David P Crabb Does glaucoma alter eye movements when viewing images of natural scenes? A between-eye study Journal Article Investigative Ophthalmology & Visual Science, 59 (8), pp. 3189–3198, 2018. @article{Asfaw2018a, title = {Does glaucoma alter eye movements when viewing images of natural scenes? A between-eye study}, author = {Daniel S Asfaw and Pete R Jones and M M Vera and Nicholas D Smith and David P Crabb}, doi = {10.1167/iovs.18-23779}, year = {2018}, date = {2018-01-01}, journal = {Investigative Ophthalmology & Visual Science}, volume = {59}, number = {8}, pages = {3189--3198}, abstract = {PURPOSE. To investigate whether glaucoma produces measurable changes in eye movements. METHODS. Fifteen glaucoma patients with asymmetric vision loss (difference in mean deviation [MD] > 6 dB between eyes) were asked to monocularly view 120 images of natural scenes, presented sequentially on a computer monitor. Each image was viewed twice—once each with the better and worse eye. Patients' eye movements were recorded with an Eyelink 1000 eye-tracker. Eye-movement parameters were computed and compared within participants (better eye versus worse eye). These parameters included a novel measure: saccadic reversal rate (SRR), as well as more traditional metrics such as saccade amplitude, fixation counts, fixation duration, and spread of fixation locations (bivariate contour ellipse area [BCEA]). In addition, the associations of these parameters with clinical measures of vision were investigated. RESULTS. In the worse eye, saccade amplitude (p = 0.012; -13%) and BCEA (p = 0.005; -16%) were smaller, while SRR was greater (p = 0.018; +16%). There was a significant correlation between the intereye difference in BCEA, and differences in MD values (Spearman's r = 0.65; p = 0.01), while differences in SRR were associated with differences in visual acuity (Spearman's r = 0.64; p = 0.01). 
Furthermore, between-eye differences in BCEA were a significant predictor of between-eye differences in MD: for every 1-dB difference in MD, BCEA reduced by 6.2% (95% confidence interval, 1.6%–10.3%). CONCLUSIONS. Eye movements are altered by visual field loss, and these changes are related to changes in clinical measures. Eye movements recorded while passively viewing images could potentially be used as biomarkers for visual field damage.}, keywords = {}, pubstate = {published}, tppubtype = {article} } PURPOSE. To investigate whether glaucoma produces measurable changes in eye movements. METHODS. Fifteen glaucoma patients with asymmetric vision loss (difference in mean deviation [MD] > 6 dB between eyes) were asked to monocularly view 120 images of natural scenes, presented sequentially on a computer monitor. Each image was viewed twice—once each with the better and worse eye. Patients' eye movements were recorded with an Eyelink 1000 eye-tracker. Eye-movement parameters were computed and compared within participants (better eye versus worse eye). These parameters included a novel measure: saccadic reversal rate (SRR), as well as more traditional metrics such as saccade amplitude, fixation counts, fixation duration, and spread of fixation locations (bivariate contour ellipse area [BCEA]). In addition, the associations of these parameters with clinical measures of vision were investigated. RESULTS. In the worse eye, saccade amplitude (p = 0.012; -13%) and BCEA (p = 0.005; -16%) were smaller, while SRR was greater (p = 0.018; +16%). There was a significant correlation between the intereye difference in BCEA, and differences in MD values (Spearman's r = 0.65; p = 0.01), while differences in SRR were associated with differences in visual acuity (Spearman's r = 0.64; p = 0.01). 
Furthermore, between-eye differences in BCEA were a significant predictor of between-eye differences in MD: for every 1-dB difference in MD, BCEA reduced by 6.2% (95% confidence interval, 1.6%–10.3%). CONCLUSIONS. Eye movements are altered by visual field loss, and these changes are related to changes in clinical measures. Eye movements recorded while passively viewing images could potentially be used as biomarkers for visual field damage. |
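The bivariate contour ellipse area (BCEA) used by Asfaw et al. has a standard closed form under a bivariate-normal assumption: BCEA = π · χ² · σx · σy · √(1 − ρ²), with χ² = −2 ln(1 − P) for the chosen proportion P of fixations. A minimal sketch with hypothetical fixation coordinates (the fixation data and the P = 0.682 choice are illustrative assumptions, not details from the paper):

```python
import math

def bcea(xs, ys, p=0.682):
    """Bivariate contour ellipse area containing proportion p of fixations,
    assuming the fixation scatter is bivariate normal (standard BCEA formula)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    rho = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)) / (sx * sy)
    chi2 = -2.0 * math.log(1.0 - p)  # chi-square quantile with 2 df
    return math.pi * chi2 * sx * sy * math.sqrt(1.0 - rho ** 2)

# Hypothetical fixation positions in degrees of visual angle:
print(round(bcea([0.0, 0.0, 1.0, 1.0], [0.0, 1.0, 0.0, 1.0]), 3))
```

A smaller BCEA means fixations clustered more tightly, which the study reports for the worse eye.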
Árni Gunnar Ásgeirsson; Árni Kristjánsson; Claus Bundesen Repetition priming in selective attention: A TVA analysis Journal Article Acta Psychologica, 160 , pp. 35–42, 2015. @article{Asgeirsson2015, title = {Repetition priming in selective attention: A TVA analysis}, author = {Árni Gunnar Ásgeirsson and Árni Kristjánsson and Claus Bundesen}, doi = {10.1016/j.actpsy.2015.06.008}, year = {2015}, date = {2015-01-01}, journal = {Acta Psychologica}, volume = {160}, pages = {35--42}, abstract = {Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target-color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA-model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter ($\alpha$) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4-15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. 
Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, where VSTM capacity and spatial biases of attention are ignored, or more fine-grained by a 10-parameter model that takes these aspects into account.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target-color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA-model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4-15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. 
Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, where VSTM capacity and spatial biases of attention are ignored, or more fine-grained by a 10-parameter model that takes these aspects into account. |
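In TVA, the probability of encoding an item grows exponentially with effective exposure duration, and distractor interference enters through attentional weights scaled by the selectivity parameter α. The sketch below illustrates how a priming effect could be expressed as a drop in α alone, as the abstract describes; all parameter values (C, α, rates) are invented for illustration and are not fitted values from the study:

```python
import math

def p_correct(t, v, t0):
    """TVA-style encoding probability: exponential growth over the
    effective exposure duration (t - t0) at processing rate v (items/s)."""
    return 1.0 - math.exp(-v * (t - t0)) if t > t0 else 0.0

def target_rate(C, alpha, n_distractors):
    """Target processing rate v = C * w_t / (w_t + alpha * n_d * w_d),
    with unit weights w_t = w_d = 1. Lower alpha down-weights distractors,
    so a color repetition can be modeled as a decrease in alpha alone."""
    return C * 1.0 / (1.0 + alpha * n_distractors)

print(round(target_rate(C=40, alpha=1.0, n_distractors=3), 2))  # 10.0 items/s (no selectivity)
print(round(target_rate(C=40, alpha=0.5, n_distractors=3), 2))  # 16.0 items/s (primed, lower alpha)
```

Raising the target's rate this way shifts the whole accuracy-by-exposure-duration curve upward, which is the signature the four-parameter model in the abstract captures.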
Árni Gunnar Ásgeirsson; Sander Nieuwenhuis No arousal-biased competition in focused visuospatial attention Journal Article Cognition, 168 , pp. 191–204, 2017. @article{Asgeirsson2017, title = {No arousal-biased competition in focused visuospatial attention}, author = {Árni Gunnar Ásgeirsson and Sander Nieuwenhuis}, doi = {10.1016/j.cognition.2017.07.001}, year = {2017}, date = {2017-01-01}, journal = {Cognition}, volume = {168}, pages = {191--204}, abstract = {Arousal sometimes enhances and sometimes impairs perception and memory. A recent theory attempts to reconcile these findings by proposing that arousal amplifies the competition between stimulus representations, strengthening already strong representations and weakening already weak representations. Here, we report a stringent test of this arousal-biased competition theory in the context of focused visuospatial attention. Participants were required to identify a briefly presented target in the context of multiple distractors, which varied in the degree to which they competed for representation with the target, as revealed by psychophysics. We manipulated arousal using emotionally arousing pictures (Experiment 1), alerting tones (Experiment 2) and white-noise stimulation (Experiment 3), and validated these manipulations with electroencephalography and pupillometry. In none of the experiments did we find evidence that arousal modulated the effect of distractor competition on the accuracy of target identification. Bayesian statistics revealed moderate to strong evidence against arousal-biased competition. Modeling of the psychophysical data based on Bundesen's (1990) theory of visual attention corroborated the conclusion that arousal does not bias competition in focused visuospatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Arousal sometimes enhances and sometimes impairs perception and memory. 
A recent theory attempts to reconcile these findings by proposing that arousal amplifies the competition between stimulus representations, strengthening already strong representations and weakening already weak representations. Here, we report a stringent test of this arousal-biased competition theory in the context of focused visuospatial attention. Participants were required to identify a briefly presented target in the context of multiple distractors, which varied in the degree to which they competed for representation with the target, as revealed by psychophysics. We manipulated arousal using emotionally arousing pictures (Experiment 1), alerting tones (Experiment 2) and white-noise stimulation (Experiment 3), and validated these manipulations with electroencephalography and pupillometry. In none of the experiments did we find evidence that arousal modulated the effect of distractor competition on the accuracy of target identification. Bayesian statistics revealed moderate to strong evidence against arousal-biased competition. Modeling of the psychophysical data based on Bundesen's (1990) theory of visual attention corroborated the conclusion that arousal does not bias competition in focused visuospatial attention. |
Mania Asgharpour; Mehdi Tehrani-Doost; Mehrnoosh Ahmadi; Hamid Moshki Visual attention to emotional face in schizophrenia: An eye tracking study Journal Article Iranian Journal of Psychiatry, 10 (1), pp. 13–18, 2015. @article{Asgharpour2015, title = {Visual attention to emotional face in schizophrenia: An eye tracking study}, author = {Mania Asgharpour and Mehdi Tehrani-Doost and Mehrnoosh Ahmadi and Hamid Moshki}, year = {2015}, date = {2015-01-01}, journal = {Iranian Journal of Psychiatry}, volume = {10}, number = {1}, pages = {13--18}, abstract = {OBJECTIVE: Deficits in the processing of facial emotions have been reported extensively in patients with schizophrenia. To explore whether restricted attention is the cause of impaired emotion processing in these patients, we examined visual attention through tracking eye movements in response to emotional and neutral face stimuli in a group of patients with schizophrenia and healthy individuals. We also examined the correlation between visual attention allocation and symptom severity in our patient group. METHOD: Thirty adult patients with schizophrenia and 30 matched healthy controls participated in this study. Visual attention data were recorded while participants passively viewed emotional-neutral face pairs for 500 ms. The relationship between visual attention and symptom severity was assessed by the Positive and Negative Syndrome Scale (PANSS) in the schizophrenia group. Repeated Measures ANOVAs were used to compare the groups. RESULTS: Comparing the number of fixations made during face-pairs presentation, we found that patients with schizophrenia made fewer fixations on faces, regardless of the expression of the face. Analysis of the number of fixations on negative-neutral pairs also revealed that the patients made fewer fixations on both neutral and negative faces. 
Analysis of number of fixations on positive-neutral pairs only showed more fixations on positive relative to neutral expressions in both groups. We found no correlations between visual attention pattern to faces and symptom severity in schizophrenic patients. CONCLUSION: The results of this study suggest that the facial recognition deficit in schizophrenia is related to decreased attention to face stimuli. Finding of no difference in visual attention for positive-neutral face pairs between the groups is in line with studies that have shown increased ability to positive emotional perception in these patients.}, keywords = {}, pubstate = {published}, tppubtype = {article} } OBJECTIVE: Deficits in the processing of facial emotions have been reported extensively in patients with schizophrenia. To explore whether restricted attention is the cause of impaired emotion processing in these patients, we examined visual attention through tracking eye movements in response to emotional and neutral face stimuli in a group of patients with schizophrenia and healthy individuals. We also examined the correlation between visual attention allocation and symptom severity in our patient group. METHOD: Thirty adult patients with schizophrenia and 30 matched healthy controls participated in this study. Visual attention data were recorded while participants passively viewed emotional-neutral face pairs for 500 ms. The relationship between visual attention and symptom severity was assessed by the Positive and Negative Syndrome Scale (PANSS) in the schizophrenia group. Repeated Measures ANOVAs were used to compare the groups. RESULTS: Comparing the number of fixations made during face-pairs presentation, we found that patients with schizophrenia made fewer fixations on faces, regardless of the expression of the face. Analysis of the number of fixations on negative-neutral pairs also revealed that the patients made fewer fixations on both neutral and negative faces. 
Analysis of number of fixations on positive-neutral pairs only showed more fixations on positive relative to neutral expressions in both groups. We found no correlations between visual attention pattern to faces and symptom severity in schizophrenic patients. CONCLUSION: The results of this study suggest that the facial recognition deficit in schizophrenia is related to decreased attention to face stimuli. Finding of no difference in visual attention for positive-neutral face pairs between the groups is in line with studies that have shown increased ability to positive emotional perception in these patients. |
Matthew F Asher; David J Tolhurst; Tom Troscianko; Iain D Gilchrist Regional effects of clutter on human target detection performance. Journal Article Journal of vision, 13 (5), pp. 25–25, 2013. @article{Asher2013, title = {Regional effects of clutter on human target detection performance.}, author = {Matthew F Asher and David J Tolhurst and Tom Troscianko and Iain D Gilchrist}, doi = {10.1167/13.5.25}, year = {2013}, date = {2013-04-01}, journal = {Journal of vision}, volume = {13}, number = {5}, pages = {25--25}, abstract = {Clutter is something that is encountered in everyday life, from a messy desk to a crowded street. Such clutter may interfere with our ability to search for objects in such environments, like our car keys or the person we are trying to meet. A number of computational models of clutter have been proposed and shown to work well for artificial and other simplified scene search tasks. In this paper, we correlate the performance of different models of visual clutter to human performance in a visual search task using natural scenes. The models we evaluate are Feature Congestion (Rosenholtz, Li, & Nakano, 2007), Sub-band Entropy (Rosenholtz et al., 2007), Segmentation (Bravo & Farid, 2008), and Edge Density (Mack & Oliva, 2004) measures. The correlations were performed across a range of target-centered subregions to produce a correlation profile, indicating the scale at which clutter was affecting search performance. Overall clutter was rather weakly correlated with performance (r ≈ 0.2). However, different measures of clutter appear to reflect different aspects of the search task: correlations with Feature Congestion are greatest for the actual target patch, whereas the Sub-band Entropy is most highly correlated in a region 12° × 12° centered on the target.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Clutter is something that is encountered in everyday life, from a messy desk to a crowded street. 
Such clutter may interfere with our ability to search for objects in such environments, like our car keys or the person we are trying to meet. A number of computational models of clutter have been proposed and shown to work well for artificial and other simplified scene search tasks. In this paper, we correlate the performance of different models of visual clutter to human performance in a visual search task using natural scenes. The models we evaluate are Feature Congestion (Rosenholtz, Li, & Nakano, 2007), Sub-band Entropy (Rosenholtz et al., 2007), Segmentation (Bravo & Farid, 2008), and Edge Density (Mack & Oliva, 2004) measures. The correlations were performed across a range of target-centered subregions to produce a correlation profile, indicating the scale at which clutter was affecting search performance. Overall clutter was rather weakly correlated with performance (r ≈ 0.2). However, different measures of clutter appear to reflect different aspects of the search task: correlations with Feature Congestion are greatest for the actual target patch, whereas the Sub-band Entropy is most highly correlated in a region 12° × 12° centered on the target. |
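Of the clutter measures compared by Asher et al., Edge Density is the simplest: the proportion of image pixels classified as edges. Mack & Oliva's original measure uses a Canny detector; the sketch below substitutes a plain gradient-magnitude threshold as a simplified, hypothetical stand-in (the threshold and the toy image are illustrative assumptions):

```python
def edge_density(img, thresh=0.2):
    """Fraction of sampled pixels whose forward-difference gradient magnitude
    exceeds a threshold - a simplified stand-in for the Mack & Oliva
    edge-density clutter measure. img is a 2D list of floats in [0, 1]."""
    h, w = len(img), len(img[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges += 1
    return edges / ((h - 1) * (w - 1))

# A tiny 4x4 image with one vertical step edge:
img = [[0.0, 0.0, 1.0, 1.0]] * 4
print(edge_density(img))  # 1/3 of the 3x3 sampled grid lies on the step
```

Correlating such a scalar against per-image search performance (e.g., with a Spearman correlation) is the kind of analysis the paper runs across target-centered subregions of increasing size.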
Hiroshi Ashida; Ichiro Kuriki; Ikuya Murakami; Rumi Hisakata; Akiyoshi Kitaoka Direction-specific fMRI adaptation reveals the visual cortical network underlying the "Rotating Snakes" illusion Journal Article NeuroImage, 61 (4), pp. 1143–1152, 2012. @article{Ashida2012, title = {Direction-specific fMRI adaptation reveals the visual cortical network underlying the "Rotating Snakes" illusion}, author = {Hiroshi Ashida and Ichiro Kuriki and Ikuya Murakami and Rumi Hisakata and Akiyoshi Kitaoka}, doi = {10.1016/j.neuroimage.2012.03.033}, year = {2012}, date = {2012-01-01}, journal = {NeuroImage}, volume = {61}, number = {4}, pages = {1143--1152}, publisher = {Elsevier Inc.}, abstract = {The "Rotating Snakes" figure elicits a clear sense of anomalous motion in stationary repetitive patterns. We used an event-related fMRI adaptation paradigm to investigate cortical mechanisms underlying the illusory motion. Following an adapting stimulus (S1) and a blank period, a probe stimulus (S2) that elicited illusory motion either in the same or in the opposite direction was presented. Attention was controlled by a fixation task, and control experiments precluded explanations in terms of artefacts of local adaptation, afterimages, or involuntary eye movements. Recorded BOLD responses were smaller for S2 in the same direction than S2 in the opposite direction in V1-V4, V3A, and MT+, indicating direction-selective adaptation. Adaptation in MT+ was correlated with adaptation in V1 but not in V4. With possible downstream inheritance of adaptation, it is most likely that adaptation predominantly occurred in V1. The results extend our previous findings of activation in MT+ (I. Kuriki, H. Ashida, I. Murakami, and A. Kitaoka, 2008), revealing the activity of the cortical network for motion processing from V1 towards MT+. 
This provides evidence for the role of front-end motion detectors, which has been assumed in proposed models of the illusion.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The "Rotating Snakes" figure elicits a clear sense of anomalous motion in stationary repetitive patterns. We used an event-related fMRI adaptation paradigm to investigate cortical mechanisms underlying the illusory motion. Following an adapting stimulus (S1) and a blank period, a probe stimulus (S2) that elicited illusory motion either in the same or in the opposite direction was presented. Attention was controlled by a fixation task, and control experiments precluded explanations in terms of artefacts of local adaptation, afterimages, or involuntary eye movements. Recorded BOLD responses were smaller for S2 in the same direction than S2 in the opposite direction in V1-V4, V3A, and MT+, indicating direction-selective adaptation. Adaptation in MT+ was correlated with adaptation in V1 but not in V4. With possible downstream inheritance of adaptation, it is most likely that adaptation predominantly occurred in V1. The results extend our previous findings of activation in MT+ (I. Kuriki, H. Ashida, I. Murakami, and A. Kitaoka, 2008), revealing the activity of the cortical network for motion processing from V1 towards MT+. This provides evidence for the role of front-end motion detectors, which has been assumed in proposed models of the illusion. |
Carolina Astudillo; Kristofher Muñoz; Pedro E Maldonado Emotional content modulates attentional visual orientation during free viewing of natural images Journal Article Frontiers in Human Neuroscience, 12 , pp. 1–10, 2018. @article{Astudillo2018, title = {Emotional content modulates attentional visual orientation during free viewing of natural images}, author = {Carolina Astudillo and Kristofher Mu{ñ}oz and Pedro E Maldonado}, doi = {10.3389/fnhum.2018.00459}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {12}, pages = {1--10}, abstract = {Visual attention is the process that enables us to select relevant visual stimuli in our environment to achieve a goal or perform adaptive behaviors. In this process, bottom-up mechanisms interact with top-down mechanisms underlying the automatic and voluntary orienting of attention. Cognitive functions, such as emotional processing, can influence visual attention by increasing or decreasing the resources destined for processing stimuli. The relationship between attention and emotion has been explored mainly in the field of automatic attentional capture, in which emotional stimuli are suddenly presented and detection rates or reaction times are recorded. Unlike these paradigms, natural visual scenes may comprise multiple stimuli with different emotional valences. In this setting, the mechanisms supporting voluntary visual orientation, under the influence of the emotional components of stimuli, are unknown. We employed a mosaic of pictures with different emotional valences (positive, negative, and neutral) and explored the dynamics of attentional visual orientation, assessed by eye tracking and measurements of pupil diameter. We found that pictures with affective content display increased dwelling times when compared to neutral pictures, with a larger effect for negative pictures. 
The valence, regardless of the arousal levels, was the main factor driving the behavioral modulation of visual orientation. On the other hand, the visual exploration was accompanied by a systematic pupillary response, with the pupil contraction and dilation influenced by the arousal levels, with minor effects driven by the valence. Our results emphasize that arousal and valence should be considered different dimensions of emotional processing both interacting with cognitive processes such as visual attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual attention is the process that enables us to select relevant visual stimuli in our environment to achieve a goal or perform adaptive behaviors. In this process, bottom-up mechanisms interact with top-down mechanisms underlying the automatic and voluntary orienting of attention. Cognitive functions, such as emotional processing, can influence visual attention by increasing or decreasing the resources destined for processing stimuli. The relationship between attention and emotion has been explored mainly in the field of automatic attentional capture, in which emotional stimuli are suddenly presented and detection rates or reaction times are recorded. Unlike these paradigms, natural visual scenes may comprise multiple stimuli with different emotional valences. In this setting, the mechanisms supporting voluntary visual orientation, under the influence of the emotional components of stimuli, are unknown. We employed a mosaic of pictures with different emotional valences (positive, negative, and neutral) and explored the dynamics of attentional visual orientation, assessed by eye tracking and measurements of pupil diameter. We found that pictures with affective content display increased dwelling times when compared to neutral pictures, with a larger effect for negative pictures. 
The valence, regardless of the arousal levels, was the main factor driving the behavioral modulation of visual orientation. On the other hand, the visual exploration was accompanied by a systematic pupillary response, with the pupil contraction and dilation influenced by the arousal levels, with minor effects driven by the valence. Our results emphasize that arousal and valence should be considered different dimensions of emotional processing both interacting with cognitive processes such as visual attention. |
Natsuki Atagi; Melissa DeWolf; James W Stigler; Scott P Johnson The role of visual representations in college students' understanding of mathematical notation Journal Article Journal of Experimental Psychology: Applied, 22 (3), pp. 295–304, 2016. @article{Atagi2016, title = {The role of visual representations in college students' understanding of mathematical notation}, author = {Natsuki Atagi and Melissa DeWolf and James W Stigler and Scott P Johnson}, doi = {10.1037/xap0000090}, year = {2016}, date = {2016-01-01}, journal = {Journal of Experimental Psychology: Applied}, volume = {22}, number = {3}, pages = {295--304}, abstract = {Developing understanding of fractions involves connections between nonsymbolic visual representations and symbolic representations. Initially, teachers introduce fraction concepts with visual representations before moving to symbolic representations. Once the focus is shifted to symbolic representations, the connections between visual representations and symbolic notation are considered to be less useful, and students are rarely asked to connect symbolic notation back to visual representations. In 2 experiments, we ask whether visual representations affect understanding of symbolic notation for adults who understand symbolic notation. In a conceptual fraction comparison task (e.g., Which is larger, 5 / a or 8 / a?), participants were given comparisons paired with accurate, helpful visual representations, misleading visual representations, or no visual representations. The results show that even college students perform significantly better when accurate visuals are provided over misleading or no visuals. Further, eye-tracking data suggest that these visual representations may affect performance even when only briefly looked at. 
Implications for theories of fraction understanding and education are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Developing understanding of fractions involves connections between nonsymbolic visual representations and symbolic representations. Initially, teachers introduce fraction concepts with visual representations before moving to symbolic representations. Once the focus is shifted to symbolic representations, the connections between visual representations and symbolic notation are considered to be less useful, and students are rarely asked to connect symbolic notation back to visual representations. In 2 experiments, we ask whether visual representations affect understanding of symbolic notation for adults who understand symbolic notation. In a conceptual fraction comparison task (e.g., Which is larger, 5 / a or 8 / a?), participants were given comparisons paired with accurate, helpful visual representations, misleading visual representations, or no visual representations. The results show that even college students perform significantly better when accurate visuals are provided over misleading or no visuals. Further, eye-tracking data suggest that these visual representations may affect performance even when only briefly looked at. Implications for theories of fraction understanding and education are discussed. |
Natsuki Atagi; Scott P Johnson Language experience is associated with infants' visual attention to speakers Journal Article Brain Sciences, 10 (8), pp. 1–12, 2020. @article{Atagi2020, title = {Language experience is associated with infants' visual attention to speakers}, author = {Natsuki Atagi and Scott P Johnson}, doi = {10.3390/brainsci10080550}, year = {2020}, date = {2020-01-01}, journal = {Brain Sciences}, volume = {10}, number = {8}, pages = {1--12}, abstract = {Early social-linguistic experience influences infants' attention to faces but little is known about how infants attend to the faces of speakers engaging in conversation. Here, we examine how monolingual and bilingual infants attended to speakers during a conversation, and we tested for the possibility that infants' visual attention may be modulated by familiarity with the language being spoken. We recorded eye movements in monolingual and bilingual 15-to-24-month-olds as they watched video clips of speakers using infant-directed speech while conversing in a familiar or unfamiliar language, with each other and to the infant. Overall, findings suggest that bilingual infants visually shift attention to a speaker prior to speech onset more when an unfamiliar, rather than a familiar, language is being spoken. However, this same effect was not found for monolingual infants. Thus, infants' familiarity with the language being spoken, and perhaps their language experiences, may modulate infants' visual attention to speakers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Early social-linguistic experience influences infants' attention to faces but little is known about how infants attend to the faces of speakers engaging in conversation. Here, we examine how monolingual and bilingual infants attended to speakers during a conversation, and we tested for the possibility that infants' visual attention may be modulated by familiarity with the language being spoken. 
We recorded eye movements in monolingual and bilingual 15-to-24-month-olds as they watched video clips of speakers using infant-directed speech while conversing in a familiar or unfamiliar language, with each other and to the infant. Overall, findings suggest that bilingual infants visually shift attention to a speaker prior to speech onset more when an unfamiliar, rather than a familiar, language is being spoken. However, this same effect was not found for monolingual infants. Thus, infants' familiarity with the language being spoken, and perhaps their language experiences, may modulate infants' visual attention to speakers. |
Anne Atas; Nathan Faivre; Bert Timmermans; Axel Cleeremans; Sid Kouider Nonconscious learning from crowded sequences Journal Article Psychological Science, 25 (1), pp. 113–119, 2014. @article{Atas2014, title = {Nonconscious learning from crowded sequences}, author = {Anne Atas and Nathan Faivre and Bert Timmermans and Axel Cleeremans and Sid Kouider}, doi = {10.1177/0956797613499591}, year = {2014}, date = {2014-01-01}, journal = {Psychological Science}, volume = {25}, number = {1}, pages = {113--119}, abstract = {Can people learn complex information without conscious awareness? Implicit learning-learning without awareness of what has been learned-has been the focus of intense investigation over the last 50 years. However, it remains controversial whether complex knowledge can be learned implicitly. In the research reported here, we addressed this challenge by asking participants to differentiate between sequences of symbols they could not perceive consciously. Using an operant-conditioning task, we showed that participants learned to associate distinct sequences of crowded (nondiscriminable) symbols with their respective monetary outcomes (reward or punishment). Overall, our study demonstrates that sensitivity to sequential regularities can arise through the nonconscious temporal integration of perceptual information.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Can people learn complex information without conscious awareness? Implicit learning-learning without awareness of what has been learned-has been the focus of intense investigation over the last 50 years. However, it remains controversial whether complex knowledge can be learned implicitly. In the research reported here, we addressed this challenge by asking participants to differentiate between sequences of symbols they could not perceive consciously. 
Using an operant-conditioning task, we showed that participants learned to associate distinct sequences of crowded (nondiscriminable) symbols with their respective monetary outcomes (reward or punishment). Overall, our study demonstrates that sensitivity to sequential regularities can arise through the nonconscious temporal integration of perceptual information. |
Nada Attar; Matthew H Schneps; Marc Pomplun Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm Journal Article Memory and Cognition, 44 (7), pp. 1038–1049, 2016. @article{Attar2016, title = {Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm}, author = {Nada Attar and Matthew H Schneps and Marc Pomplun}, doi = {10.3758/s13421-016-0617-8}, year = {2016}, date = {2016-01-01}, journal = {Memory and Cognition}, volume = {44}, number = {7}, pages = {1038--1049}, publisher = {Memory & Cognition}, abstract = {An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. 
Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process. |
Janice Attard; Markus Bindemann Establishing the duration of crimes: An individual differences and eye-tracking investigation into time estimation Journal Article Applied Cognitive Psychology, 28 (2), pp. 215–225, 2014. @article{Attard2014, title = {Establishing the duration of crimes: An individual differences and eye-tracking investigation into time estimation}, author = {Janice Attard and Markus Bindemann}, doi = {10.1002/acp.2986}, year = {2014}, date = {2014-01-01}, journal = {Applied Cognitive Psychology}, volume = {28}, number = {2}, pages = {215--225}, abstract = {The time available for viewing a perpetrator at a crime scene predicts successful person recognition in subsequent identity line-ups. This time is usually unknown and must be derived from eyewitnesses' duration estimates. This study therefore compared the estimates that different individuals provide for crimes. We then attempted to determine the accuracy of these durations by measuring observers' general time estimation ability with a set of estimator videos. Observers differed greatly in their ability to estimate time, but individual duration estimates correlated strongly for crime and estimator materials. This indicates that it might be possible to infer unknown durations of events, such as criminal incidents, from a person's ability to estimate known durations. We also measured observers' eye movements to a perpetrator during crimes. Only fixations on a perpetrator's face related to eyewitness accuracy, but these fixations did not correlate with exposure estimates for this person. The implications of these findings are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The time available for viewing a perpetrator at a crime scene predicts successful person recognition in subsequent identity line-ups. This time is usually unknown and must be derived from eyewitnesses' duration estimates. 
This study therefore compared the estimates that different individuals provide for crimes. We then attempted to determine the accuracy of these durations by measuring observers' general time estimation ability with a set of estimator videos. Observers differed greatly in their ability to estimate time, but individual duration estimates correlated strongly for crime and estimator materials. This indicates that it might be possible to infer unknown durations of events, such as criminal incidents, from a person's ability to estimate known durations. We also measured observers' eye movements to a perpetrator during crimes. Only fixations on a perpetrator's face related to eyewitness accuracy, but these fixations did not correlate with exposure estimates for this person. The implications of these findings are discussed. |
Janice Attard-Johnson; Markus Bindemann; Caoilte Ó Ciardha Pupillary response as an age-specific measure of sexual interest Journal Article Archives of Sexual Behavior, 45 (4), pp. 855–870, 2016. @article{AttardJohnson2016, title = {Pupillary response as an age-specific measure of sexual interest}, author = {Janice Attard-Johnson and Markus Bindemann and Caoilte {Ó Ciardha}}, doi = {10.1007/s10508-015-0681-3}, year = {2016}, date = {2016-01-01}, journal = {Archives of Sexual Behavior}, volume = {45}, number = {4}, pages = {855--870}, publisher = {Springer US}, abstract = {In the visual processing of sexual content, pupil dilation is an indicator of arousal that has been linked to observers' sexual orientation. This study investigated whether this measure can be extended to determine age-specific sexual interest. In two experiments, the pupillary responses of heterosexual adults to images of males and females of different ages were related to self-reported sexual interest, sexual appeal to the stimuli, and a child molestation proclivity scale. In both experiments, the pupils of male observers dilated to photographs of women but not men, children, or neutral stimuli. These pupillary responses corresponded with observers' self-reported sexual interests and their sexual appeal ratings of the stimuli. Female observers showed pupil dilation to photographs of men and women but not children. In women, pupillary responses also correlated poorly with sexual appeal ratings of the stimuli. These experiments provide initial evidence that eye-tracking could be used as a measure of sex-specific interest in male observers, and as an age-specific index in male and female observers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the visual processing of sexual content, pupil dilation is an indicator of arousal that has been linked to observers' sexual orientation. This study investigated whether this measure can be extended to determine age-specific sexual interest. 
In two experiments, the pupillary responses of heterosexual adults to images of males and females of different ages were related to self-reported sexual interest, sexual appeal to the stimuli, and a child molestation proclivity scale. In both experiments, the pupils of male observers dilated to photographs of women but not men, children, or neutral stimuli. These pupillary responses corresponded with observers' self-reported sexual interests and their sexual appeal ratings of the stimuli. Female observers showed pupil dilation to photographs of men and women but not children. In women, pupillary responses also correlated poorly with sexual appeal ratings of the stimuli. These experiments provide initial evidence that eye-tracking could be used as a measure of sex-specific interest in male observers, and as an age-specific index in male and female observers. |
Janice Attard-Johnson; Markus Bindemann Sex-specific but not sexually explicit: Pupillary responses to dressed and naked adults Journal Article Royal Society Open Science, 4 (5), pp. 1–10, 2017. @article{AttardJohnson2017, title = {Sex-specific but not sexually explicit: Pupillary responses to dressed and naked adults}, author = {Janice Attard-Johnson and Markus Bindemann}, doi = {10.1098/rsos.160963}, year = {2017}, date = {2017-01-01}, journal = {Royal Society Open Science}, volume = {4}, number = {5}, pages = {1--10}, abstract = {Dilation of the pupils is an indicator of an observer's sexual interest in other people, but it remains unresolved whether this response is strengthened or diminished by sexually explicit material. To address this question, this study compared pupillary responses of heterosexual men and women to naked and dressed portraits of male and female adult film actors. Pupillary responses corresponded with observers' self-reported sexual orientation, such that dilation occurred during the viewing of opposite-sex people, but were comparable for naked and dressed targets. These findings indicate that pupillary responses provide a sex-specific measure, but are not sensitive to sexually explicit content.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Dilation of the pupils is an indicator of an observer's sexual interest in other people, but it remains unresolved whether this response is strengthened or diminished by sexually explicit material. To address this question, this study compared pupillary responses of heterosexual men and women to naked and dressed portraits of male and female adult film actors. Pupillary responses corresponded with observers' self-reported sexual orientation, such that dilation occurred during the viewing of opposite-sex people, but were comparable for naked and dressed targets. 
These findings indicate that pupillary responses provide a sex-specific measure, but are not sensitive to sexually explicit content. |
Janice Attard-Johnson; Markus Bindemann; Caoilte Ó Ciardha Heterosexual, homosexual, and bisexual men's pupillary responses to persons at different stages of sexual development Journal Article Journal of Sex Research, 54 (9), pp. 1085–1096, 2017. @article{AttardJohnson2017a, title = {Heterosexual, homosexual, and bisexual men's pupillary responses to persons at different stages of sexual development}, author = {Janice Attard-Johnson and Markus Bindemann and Caoilte {Ó Ciardha}}, doi = {10.1080/00224499.2016.1241857}, year = {2017}, date = {2017-01-01}, journal = {Journal of Sex Research}, volume = {54}, number = {9}, pages = {1085--1096}, publisher = {Routledge}, abstract = {This study investigated whether pupil size during the viewing of images of adults and children reflects the sexual orientation of heterosexual, homosexual, and bisexual men (n = 100}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated whether pupil size during the viewing of images of adults and children reflects the sexual orientation of heterosexual, homosexual, and bisexual men (n = 100 |
Ricky K C Au; Fuminori Ono; Katsumi Watanabe Time dilation induced by object motion is based on spatiotopic but not retinotopic positions Journal Article Frontiers in Psychology, 3 , pp. 58, 2012. @article{Au2012, title = {Time dilation induced by object motion is based on spatiotopic but not retinotopic positions}, author = {Ricky K C Au and Fuminori Ono and Katsumi Watanabe}, doi = {10.3389/fpsyg.2012.00058}, year = {2012}, date = {2012-01-01}, journal = {Frontiers in Psychology}, volume = {3}, pages = {58}, abstract = {Time perception of visual events depends on the visual attributes of the scene. Previous studies reported that motion of an object can induce an illusion of lengthened time. In the present study, we asked whether such a time dilation effect depends on the actual physical motion of the object (spatiotopic coordinate), or its relative motion with respect to the retina (retinotopic coordinate). Observers were presented with a moving stimulus and a static reference stimulus in separate intervals, and judged which interval they perceived as having a longer duration, under conditions with eye fixation (Experiment 1) and with eye movement at the same velocity as the moving stimulus (Experiment 2). The data indicated that the perceived duration was longer under object motion, and depended on the actual movement of the object rather than relative retinal motion. These results are in support of the notion that the brain possesses a spatiotopic representation of the real-world positions of objects, with which the perception of time is associated.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Time perception of visual events depends on the visual attributes of the scene. Previous studies reported that motion of an object can induce an illusion of lengthened time. 
In the present study, we asked whether such a time dilation effect depends on the actual physical motion of the object (spatiotopic coordinate), or its relative motion with respect to the retina (retinotopic coordinate). Observers were presented with a moving stimulus and a static reference stimulus in separate intervals, and judged which interval they perceived as having a longer duration, under conditions with eye fixation (Experiment 1) and with eye movement at the same velocity as the moving stimulus (Experiment 2). The data indicated that the perceived duration was longer under object motion, and depended on the actual movement of the object rather than relative retinal motion. These results are in support of the notion that the brain possesses a spatiotopic representation of the real-world positions of objects, with which the perception of time is associated. |
Carmel R Auerbach-Asch; Oded Bein; Leon Y Deouell Face selective neural activity: Comparisons between fixed and free viewing Journal Article Brain Topography, 33 (3), pp. 336–354, 2020. @article{AuerbachAsch2020, title = {Face selective neural activity: Comparisons between fixed and free viewing}, author = {Carmel R Auerbach-Asch and Oded Bein and Leon Y Deouell}, doi = {10.1007/s10548-020-00764-7}, year = {2020}, date = {2020-01-01}, journal = {Brain Topography}, volume = {33}, number = {3}, pages = {336--354}, publisher = {Springer US}, abstract = {Event Related Potentials (ERPs) are widely used to study category-selective EEG responses to visual stimuli, such as the face-selective N170 component. Typically, this is done by flashing stimuli at the point of static gaze fixation. While allowing for good experimental control, these paradigms ignore the dynamic role of eye-movements in natural vision. Fixation-related potentials (FRPs), obtained using simultaneous EEG and eye-tracking, overcome this limitation. Various studies have used FRPs to study processes such as lexical processing, target detection and attention allocation. The goal of this study was to carefully compare face-sensitive activity time-locked to an abrupt stimulus onset at fixation, with that time-locked to a self-generated fixation on a stimulus. Twelve participants participated in three experimental conditions: Free-viewing (FRPs), Cued-viewing (FRPs) and Control (ERPs). We used a multiple regression approach to disentangle overlapping activity components. Our results show that the N170 face-effect is evident for the first fixation on a stimulus, whether it follows a self-generated saccade or stimulus appearance at fixation point. The N170 face-effect has similar topography across viewing conditions, but there were major differences within each stimulus category. We ascribe these differences to an overlap of the fixation-related lambda response and the N170. 
We tested the plausibility of this account using dipole simulations. Finally, the N170 exhibits category-specific adaptation in free viewing. This study establishes the comparability of the free-viewing N170 face-effect with the classic event-related effect, while highlighting the importance of accounting for eye-movement related effects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Event Related Potentials (ERPs) are widely used to study category-selective EEG responses to visual stimuli, such as the face-selective N170 component. Typically, this is done by flashing stimuli at the point of static gaze fixation. While allowing for good experimental control, these paradigms ignore the dynamic role of eye-movements in natural vision. Fixation-related potentials (FRPs), obtained using simultaneous EEG and eye-tracking, overcome this limitation. Various studies have used FRPs to study processes such as lexical processing, target detection and attention allocation. The goal of this study was to carefully compare face-sensitive activity time-locked to an abrupt stimulus onset at fixation, with that time-locked to a self-generated fixation on a stimulus. Twelve participants participated in three experimental conditions: Free-viewing (FRPs), Cued-viewing (FRPs) and Control (ERPs). We used a multiple regression approach to disentangle overlapping activity components. Our results show that the N170 face-effect is evident for the first fixation on a stimulus, whether it follows a self-generated saccade or stimulus appearance at fixation point. The N170 face-effect has similar topography across viewing conditions, but there were major differences within each stimulus category. We ascribe these differences to an overlap of the fixation-related lambda response and the N170. We tested the plausibility of this account using dipole simulations. Finally, the N170 exhibits category-specific adaptation in free viewing. 
This study establishes the comparability of the free-viewing N170 face-effect with the classic event-related effect, while highlighting the importance of accounting for eye-movement related effects. |
Ryszard Auksztulewicz; Karl J Friston Attentional enhancement of auditory mismatch responses: A DCM/MEG study Journal Article Cerebral Cortex, 25 (11), pp. 4273–4283, 2015. @article{Auksztulewicz2015, title = {Attentional enhancement of auditory mismatch responses: A DCM/MEG study}, author = {Ryszard Auksztulewicz and Karl J Friston}, doi = {10.1093/cercor/bhu323}, year = {2015}, date = {2015-01-01}, journal = {Cerebral Cortex}, volume = {25}, number = {11}, pages = {4273--4283}, abstract = {Despite similar behavioral effects, attention and expectation influence evoked responses differently: Attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While MMN was explained by recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite similar behavioral effects, attention and expectation influence evoked responses differently: Attention typically enhances event-related responses, whereas expectation reduces them. 
This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While MMN was explained by recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision. |
Ryszard Auksztulewicz; Nicholas E Myers; Jan W Schnupp; Anna C Nobre Rhythmic temporal expectation boosts neural activity by increasing neural gain Journal Article Journal of Neuroscience, 39 (49), pp. 9806–9817, 2019. @article{Auksztulewicz2019, title = {Rhythmic temporal expectation boosts neural activity by increasing neural gain}, author = {Ryszard Auksztulewicz and Nicholas E Myers and Jan W Schnupp and Anna C Nobre}, doi = {10.1523/JNEUROSCI.0925-19.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {49}, pages = {9806--9817}, abstract = {Temporal orienting improves sensory processing, akin to other top–down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top–down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. 
The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Temporal orienting improves sensory processing, akin to other top–down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top–down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points. |
Étienne Aumont; Veronique D Bohbot; Gregory L West Spatial learners display enhanced oculomotor performance Journal Article Journal of Cognitive Psychology, 30 (8), pp. 872–879, 2018. @article{Aumont2018, title = {Spatial learners display enhanced oculomotor performance}, author = {{É}tienne Aumont and Veronique D Bohbot and Gregory L West}, doi = {10.1080/20445911.2018.1526178}, year = {2018}, date = {2018-01-01}, journal = {Journal of Cognitive Psychology}, volume = {30}, number = {8}, pages = {872--879}, publisher = {Taylor & Francis}, abstract = {Attention is important during navigation processes that rely on a cognitive map, as spatial relationships between environmental landmarks need to be selected, encoded, and learned. Spatial learners navigate using this process of cognitive map formation, which relies on the hippocampus. Conversely, response learners memorise a series of actions to navigate, which relies on the caudate nucleus. The present study aimed to investigate the relationship between spatial learning and oculomotor performance. We tested 23 response learners and 23 spatial learners, as determined by the 4-on-8 virtual maze, on an antisaccade task with a gap and emotional visual stimulus manipulation. Spatial learners displayed decreased saccadic reaction time latencies compared to response learners. Performance cost from the gap manipulation was significantly higher in response learners. These results could represent an attentional practice effect through the use of spatial strategies during navigation or a more global increase in cognitive function amongst spatial learners.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention is important during navigation processes that rely on a cognitive map, as spatial relationships between environmental landmarks need to be selected, encoded, and learned. Spatial learners navigate using this process of cognitive map formation, which relies on the hippocampus. 
Conversely, response learners memorise a series of actions to navigate, which relies on the caudate nucleus. The present study aimed to investigate the relationship between spatial learning and oculomotor performance. We tested 23 response learners and 23 spatial learners, as determined by the 4-on-8 virtual maze, on an antisaccade task with a gap and emotional visual stimulus manipulation. Spatial learners displayed decreased saccadic reaction time latencies compared to response learners. Performance cost from the gap manipulation was significantly higher in response learners. These results could represent an attentional practice effect through the use of spatial strategies during navigation or a more global increase in cognitive function amongst spatial learners. |
A J Austin; Theodora Duka Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning Journal Article Behavioural Brain Research, 213 (1), pp. 19–26, 2010. @article{Austin2010, title = {Mechanisms of attention for appetitive and aversive outcomes in Pavlovian conditioning}, author = {A J Austin and Theodora Duka}, doi = {10.1016/j.bbr.2010.04.019}, year = {2010}, date = {2010-01-01}, journal = {Behavioural Brain Research}, volume = {213}, number = {1}, pages = {19--26}, publisher = {Elsevier B.V.}, abstract = {Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value of either monetary reward (appetitive conditioning; 10p or 50p) or blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of the four conditions matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of expectancy of the outcome, while attention was measured using an eye-tracker device. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greatest for the partial predictor over the other two predictors. The progression of learning was the same for both groups. 
These findings suggest that in aversive conditioning attention is driven by the predictive salience of the stimulus, while in appetitive conditioning attention is error-driven, when the emotional value of the outcome is comparable.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Different mechanisms of attention controlling learning have been proposed in appetitive and aversive conditioning. The aim of the present study was to compare attention and learning in a Pavlovian conditioning paradigm using visual stimuli of varying predictive value of either monetary reward (appetitive conditioning; 10p or 50p) or blast of white noise (aversive conditioning; 97 dB or 102 dB). Outcome values were matched across the two conditions with regard to their emotional significance. Sixty-four participants were allocated to one of the four conditions matched for age and gender. All participants underwent a discriminative learning task using pairs of visual stimuli that signalled a 100%, 50%, or 0% probability of receiving an outcome. Learning was measured using a 9-point Likert scale of expectancy of the outcome, while attention was measured using an eye-tracker device. Arousal and emotional conditioning were also evaluated. Dwell time was greatest for the full predictor in the noise groups, while in the money groups attention was greatest for the partial predictor over the other two predictors. The progression of learning was the same for both groups. These findings suggest that in aversive conditioning attention is driven by the predictive salience of the stimulus, while in appetitive conditioning attention is error-driven, when the emotional value of the outcome is comparable. |
A J Austin; Theodora Duka Mechanisms of attention to conditioned stimuli predictive of a cigarette outcome Journal Article Behavioural Brain Research, 232 (1), pp. 183–189, 2012. @article{Austin2012, title = {Mechanisms of attention to conditioned stimuli predictive of a cigarette outcome}, author = {A J Austin and Theodora Duka}, doi = {10.1016/j.bbr.2012.04.009}, year = {2012}, date = {2012-01-01}, journal = {Behavioural Brain Research}, volume = {232}, number = {1}, pages = {183--189}, publisher = {Elsevier B.V.}, abstract = {Attention to stimuli associated with a rewarding outcome may be mediated by the incentive motivational properties that the stimulus acquires during conditioning. Other theories of attention state that the prediction error (the discrepancy between the expected and the actual outcome) during conditioning guides attention; once the outcome is fully predicted, attention should be abolished for the conditioned stimulus. The current study examined which of these mechanisms is dominant in conditioning when the outcome is highly rewarding. Allocation of attention to stimuli associated with cigarettes (the rewarding outcome) was tested in 16 smokers, who underwent a classical conditioning paradigm, where abstract visual stimuli were paired with a tobacco outcome. Stimuli were associated with 100% (stimulus A), 50% (stimulus B), or 0% (stimulus C) probability of receiving tobacco. Attention was measured using an eye-tracker device, and the appetitive value of the stimuli was measured with subjective pleasantness ratings during the conditioning process. Dwell time bias (duration of eye gaze) was greatest overall for the A stimulus, and increased over conditioning. Attention to stimulus A was dependent on the ratings of pleasantness that the stimulus evoked, and on the desire to smoke. 
These findings appear to support the theory that attention for conditioned stimuli is dominated by the incentive motivational qualities of the outcome they predict, and implicate a role for attention in the maintenance of addictive behaviours like smoking.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention to stimuli associated with a rewarding outcome may be mediated by the incentive motivational properties that the stimulus acquires during conditioning. Other theories of attention state that the prediction error (the discrepancy between the expected and the actual outcome) during conditioning guides attention; once the outcome is fully predicted, attention should be abolished for the conditioned stimulus. The current study examined which of these mechanisms is dominant in conditioning when the outcome is highly rewarding. Allocation of attention to stimuli associated with cigarettes (the rewarding outcome) was tested in 16 smokers, who underwent a classical conditioning paradigm, where abstract visual stimuli were paired with a tobacco outcome. Stimuli were associated with 100% (stimulus A), 50% (stimulus B), or 0% (stimulus C) probability of receiving tobacco. Attention was measured using an eye-tracker device, and the appetitive value of the stimuli was measured with subjective pleasantness ratings during the conditioning process. Dwell time bias (duration of eye gaze) was greatest overall for the A stimulus, and increased over conditioning. Attention to stimulus A was dependent on the ratings of pleasantness that the stimulus evoked, and on the desire to smoke. These findings appear to support the theory that attention for conditioned stimuli is dominated by the incentive motivational qualities of the outcome they predict, and implicate a role for attention in the maintenance of addictive behaviours like smoking. |
Brittany Avery; Christopher D Cowper-Smith; David A Westwood Spatial interactions between consecutive manual responses Journal Article Experimental Brain Research, 233 (11), pp. 3283–3290, 2015. @article{Avery2015, title = {Spatial interactions between consecutive manual responses}, author = {Brittany Avery and Christopher D Cowper-Smith and David A Westwood}, doi = {10.1007/s00221-015-4396-4}, year = {2015}, date = {2015-01-01}, journal = {Experimental Brain Research}, volume = {233}, number = {11}, pages = {3283--3290}, publisher = {Springer Berlin Heidelberg}, abstract = {We have shown that the latency to initiate a reaching movement is increased if its direction is the same as a previous movement compared to movements that differ by 90° or 180° (Cowper-Smith and Westwood in Atten Percept Psychophys 75:1914–1922, 2013). An influential study (Taylor and Klein in J Exp Psychol Hum Percept Perform 26:1639–1656, 2000), however, reported the opposite spatial pattern for manual keypress responses: repeated responses on the same side had reduced reaction time compared to responses on opposite sides. In order to determine whether there are fundamental differences in the patterns of spatial interactions between button-pressing responses and reaching movements, we compared both types of manual responses using common methods. Reaching movements and manual keypress responses were performed in separate blocks of trials using consecutive central arrow stimuli that directed participants to respond to left or right targets. Reaction times were greater for manual responses made to the same target as a previous response (M = 390 ms) as compared to the opposite target (M = 365 ms; similarity main effect: p < 0.001) regardless of whether the response was a reaching movement or a keypress response. 
This finding is broadly consistent with an inhibitory mechanism operating at the level of motor output that discourages movements that achieve the same spatial goal as a recent action.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We have shown that the latency to initiate a reaching movement is increased if its direction is the same as a previous movement compared to movements that differ by 90° or 180° (Cowper-Smith and Westwood in Atten Percept Psychophys 75:1914–1922, 2013). An influential study (Taylor and Klein in J Exp Psychol Hum Percept Perform 26:1639–1656, 2000), however, reported the opposite spatial pattern for manual keypress responses: repeated responses on the same side had reduced reaction time compared to responses on opposite sides. In order to determine whether there are fundamental differences in the patterns of spatial interactions between button-pressing responses and reaching movements, we compared both types of manual responses using common methods. Reaching movements and manual keypress responses were performed in separate blocks of trials using consecutive central arrow stimuli that directed participants to respond to left or right targets. Reaction times were greater for manual responses made to the same target as a previous response (M = 390 ms) as compared to the opposite target (M = 365 ms; similarity main effect: p < 0.001) regardless of whether the response was a reaching movement or a keypress response. This finding is broadly consistent with an inhibitory mechanism operating at the level of motor output that discourages movements that achieve the same spatial goal as a recent action. |
Hillel Aviezer; Ran R Hassin; Jennifer D Ryan; Cheryl L Grady; Josh Susskind; Adam Anderson; Morris Moscovitch; Shlomo Bentin Angry, disgusted, or afraid? Studies on the malleability of emotion perception Journal Article Psychological Science, 19 (7), pp. 724–732, 2008. @article{Aviezer2008, title = {Angry, disgusted, or afraid? Studies on the malleability of emotion perception}, author = {Hillel Aviezer and Ran R Hassin and Jennifer D Ryan and Cheryl L Grady and Josh Susskind and Adam Anderson and Morris Moscovitch and Shlomo Bentin}, doi = {10.1111/j.1467-9280.2008.02148.x}, year = {2008}, date = {2008-01-01}, journal = {Psychological Science}, volume = {19}, number = {7}, pages = {724--732}, abstract = {Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly “read out” from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. 
Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly “read out” from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels. |
Federico Avila; Claudio Delrieux; Gustavo Gasaneo Complexity analysis of eye-tracking trajectories: Permutation entropy may unravel cognitive styles Journal Article European Physical Journal B, 92, pp. 1–7, 2019. @article{Avila2019, title = {Complexity analysis of eye-tracking trajectories: Permutation entropy may unravel cognitive styles}, author = {Federico Avila and Claudio Delrieux and Gustavo Gasaneo}, doi = {10.1140/epjb/e2019-100437-4}, year = {2019}, date = {2019-01-01}, journal = {European Physical Journal B}, volume = {92}, pages = {1--7}, abstract = {We propose a novel adaptation of permutation entropy analysis applied to eye-tracking data. Eye movements arising during cognitive tasks are characterized as sequences of trajectories within a space of ordinal trajectory patterns, thus taking advantage of recent advancements in the study of complex processes in terms of statistical complexity. Results show correlations between the permutation entropies of the eye-tracking trajectories and the type of cognitive task being performed by the subjects. Moreover, the behavior of subjects across all the experiments clusters into two groups within a projection of the ordinal pattern space onto the three principal components. This strongly suggests the existence of two different underlying problem-solving styles among the subjects, which are expressed in how the movement sequences are organized.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We propose a novel adaptation of permutation entropy analysis applied to eye-tracking data. Eye movements arising during cognitive tasks are characterized as sequences of trajectories within a space of ordinal trajectory patterns, thus taking advantage of recent advancements in the study of complex processes in terms of statistical complexity. Results show correlations between the permutation entropies of the eye-tracking trajectories and the type of cognitive task being performed by the subjects. 
Moreover, the behavior of subjects across all the experiments clusters into two groups within a projection of the ordinal pattern space onto the three principal components. This strongly suggests the existence of two different underlying problem-solving styles among the subjects, which are expressed in how the movement sequences are organized. |
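The Avila et al. entry above builds on Bandt–Pompe permutation entropy. As a rough illustration of that underlying quantity only (not the authors' actual pipeline, which first maps gaze trajectories into an ordinal-pattern space), a minimal sketch might look like this; the function name and default parameters are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D series.

    Counts the ordinal (rank-order) pattern of each length-`order`
    window and returns the Shannon entropy of the pattern distribution,
    normalized to [0, 1] by log(order!) when `normalize` is True.
    """
    counts = Counter()
    n_windows = len(series) - (order - 1) * delay
    for i in range(n_windows):
        window = [series[i + j * delay] for j in range(order)]
        # Ordinal pattern: argsort of the window (ties broken by position).
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    if normalize:
        entropy /= math.log(math.factorial(order))
    return entropy
```

A strictly monotone series produces a single ordinal pattern and an entropy of 0, while an irregular series (such as a noisy scanpath coordinate) approaches 1; applied per trajectory dimension, this is the kind of complexity measure the abstract correlates with cognitive task type.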
Inbar Avni; Gal Meiri; Asif Bar-Sinai; Doron Reboh; Liora Manelis; Hagit Flusser; Analya Michaelovski; Idan Menashe; Ilan Dinstein Children with autism observe social interactions in an idiosyncratic manner Journal Article Autism Research, 13 (6), pp. 935–946, 2020. @article{Avni2020, title = {Children with autism observe social interactions in an idiosyncratic manner}, author = {Inbar Avni and Gal Meiri and Asif Bar-Sinai and Doron Reboh and Liora Manelis and Hagit Flusser and Analya Michaelovski and Idan Menashe and Ilan Dinstein}, doi = {10.1002/aur.2234}, year = {2020}, date = {2020-01-01}, journal = {Autism Research}, volume = {13}, number = {6}, pages = {935--946}, abstract = {Previous eye-tracking studies have reported that children with autism spectrum disorders (ASD) fixate less on faces in comparison to controls. To properly understand social interactions, however, children must gaze not only at faces but also at actions, gestures, body movements, contextual details, and objects, thereby creating specific gaze patterns when observing specific social interactions. We presented three different movies with social interactions to 111 children (71 with ASD) who watched each of the movies twice. Typically developing children viewed the movies in a remarkably predictable and reproducible manner, exhibiting gaze patterns that were similar to the mean gaze pattern of other controls, with strong correlations across individuals (intersubject correlations) and across movie presentations (intra-subject correlations). In contrast, children with ASD exhibited significantly more variable/idiosyncratic gaze patterns that differed from the mean gaze pattern of controls and were weakly correlated across individuals and presentations. Most importantly, quantification of gaze idiosyncrasy in individual children enabled separation of ASD and control children with higher sensitivity and specificity than traditional measures such as time gazing at faces. 
Individual magnitudes of gaze idiosyncrasy were also significantly correlated with ASD severity and cognitive scores and were significantly correlated across movies and movie presentations, demonstrating clinical sensitivity and reliability. These results suggest that gaze idiosyncrasy is a potent behavioral abnormality that characterizes a considerable number of children with ASD and may contribute to their impaired development. Quantification of gaze idiosyncrasy in individual children may aid in assessing symptom severity and their change in response to treatments. Autism Res 2020, 13: 935-946. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. Lay Summary: Typically developing children watch movies of social interactions in a reliable and predictable manner, attending faces, gestures, actions, body movements, and objects that are relevant to the social interaction and its narrative. Here, we demonstrate that children with ASD watch such movies with significantly more variable/idiosyncratic gaze patterns that differ across individuals and across movie presentations. We demonstrate that quantifying this gaze variability may aid in identifying children with ASD and in determining the severity of their symptoms.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Previous eye-tracking studies have reported that children with autism spectrum disorders (ASD) fixate less on faces in comparison to controls. To properly understand social interactions, however, children must gaze not only at faces but also at actions, gestures, body movements, contextual details, and objects, thereby creating specific gaze patterns when observing specific social interactions. We presented three different movies with social interactions to 111 children (71 with ASD) who watched each of the movies twice. 
Typically developing children viewed the movies in a remarkably predictable and reproducible manner, exhibiting gaze patterns that were similar to the mean gaze pattern of other controls, with strong correlations across individuals (intersubject correlations) and across movie presentations (intra-subject correlations). In contrast, children with ASD exhibited significantly more variable/idiosyncratic gaze patterns that differed from the mean gaze pattern of controls and were weakly correlated across individuals and presentations. Most importantly, quantification of gaze idiosyncrasy in individual children enabled separation of ASD and control children with higher sensitivity and specificity than traditional measures such as time gazing at faces. Individual magnitudes of gaze idiosyncrasy were also significantly correlated with ASD severity and cognitive scores and were significantly correlated across movies and movie presentations, demonstrating clinical sensitivity and reliability. These results suggest that gaze idiosyncrasy is a potent behavioral abnormality that characterizes a considerable number of children with ASD and may contribute to their impaired development. Quantification of gaze idiosyncrasy in individual children may aid in assessing symptom severity and their change in response to treatments. Autism Res 2020, 13: 935-946. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. Lay Summary: Typically developing children watch movies of social interactions in a reliable and predictable manner, attending faces, gestures, actions, body movements, and objects that are relevant to the social interaction and its narrative. Here, we demonstrate that children with ASD watch such movies with significantly more variable/idiosyncratic gaze patterns that differ across individuals and across movie presentations. 
We demonstrate that quantifying this gaze variability may aid in identifying children with ASD and in determining the severity of their symptoms. |
Emma L Axelsson; Rachel A Robbins; Helen F Copeland; Hester W Covell Body inversion effects with photographic images of body postures: Is it about faces? Journal Article Frontiers in Psychology, 10 , pp. 1–12, 2019. @article{Axelsson2019, title = {Body inversion effects with photographic images of body postures: Is it about faces?}, author = {Emma L Axelsson and Rachel A Robbins and Helen F Copeland and Hester W Covell}, doi = {10.3389/fpsyg.2019.02686}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--12}, abstract = {As with faces, participants are better at discriminating upright bodies than inverted bodies. This inversion effect is reliable for whole figures, namely, bodies with heads, but it is less reliable for headless bodies. This suggests that removal of the head disrupts typical processing of human figures, and raises questions about the role of faces in efficient body discrimination. In most studies, faces are occluded, but the aim here was to exclude faces in a more ecologically valid way by presenting photographic images of human figures from behind (about-facing), as well as measuring gaze to different parts of the figures. Participants determined whether pairs of sequentially presented body postures were the same or different for whole and headless figures. Presenting about-facing figures (heads seen from behind) and forward-facing figures with faces enabled a comparison of the effect of the presence or absence of faces. Replicating previous findings, there were inversion effects for forward-facing whole figures, but less reliable effects for headless images. There were also inversion effects for about-facing whole figures, but not about-facing headless figures. Accuracy was higher in the forward- compared to the about-facing conditions, but proportional dwell time was greater to bodies in about-facing images. 
Likewise, despite better discrimination of forward-facing upright compared to inverted whole figures, participants focused more on the heads and less on the bodies in upright compared to inverted images. However, there was no clear relationship between performance and dwell time proportions to heads. Body inversion effects (BIEs) were found with about-facing whole figures and headless forward-facing figures, despite the absence of faces. With inverted whole figures, there was a significant relationship between performance and greater looking at bodies, and less at heads, suggesting that in more difficult conditions a focus on bodies is associated with better discrimination. Overall, the findings suggest that the visual system has greater sensitivity to bodies in their most experienced form, which is typically upright and with a head. Otherwise, the more a face is implied by the context, as in whole figures or forward- rather than about-facing headless bodies, the better the performance, as holistic/configural processing is likely stronger.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Naila Ayala; Matthew Heath Executive dysfunction after a sport-related concussion is independent of task-based symptom burden Journal Article Journal of Neurotrauma, 37 , pp. 2558–2568, 2020. @article{Ayala2020, title = {Executive dysfunction after a sport-related concussion is independent of task-based symptom burden}, author = {Naila Ayala and Matthew Heath}, doi = {10.1089/neu.2019.6865}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neurotrauma}, volume = {37}, pages = {2558--2568}, abstract = {A sport-related concussion (SRC) results in short- and long-term deficits in oculomotor control; however, it is unclear whether this change reflects executive dysfunction and/or a performance decrement due to an increase in task-based symptom burden. Here, individuals with a SRC - and age- and sex-matched controls - completed an antisaccade task (i.e., saccade mirror-symmetrical to a target) during the early (initial assessment: ≤12 days) and later (follow-up assessment: <30 days) stages of recovery. Antisaccades were used because they require top-down executive control and exhibit performance decrements following a SRC. Reaction time (RT) and directional errors were included with pupillometry because pupil size in the antisaccade task has been shown to provide a neural proxy for executive control. In addition, the Sport-Concussion Assessment Tool (SCAT-5) symptom checklist was completed prior to and after each oculomotor assessment to identify a possible task-based increase in symptomology. The SRC group yielded longer initial assessment RTs, more directional errors and larger task-evoked pupil dilations (TEPD) than the control group. At the follow-up assessment, RTs for the SRC and control group did not reliably differ; however, the former demonstrated more directional errors and larger TEPDs. SCAT-5 symptom severity scores did not vary from the pre- to post-oculomotor evaluation for either initial or follow-up assessments.
Accordingly, a SRC imparts a persistent executive dysfunction to oculomotor planning independent of a task-based increase in symptom burden. These findings evince that antisaccades serve as an effective tool to identify subtle executive deficits during the early and later stages of SRC recovery.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Naila Ayala; Ewa Niechwiej-Szwedo Effects of blocked vs. interleaved administration mode on saccade preparatory set revealed using pupillometry Journal Article Experimental Brain Research, pp. 1–11, 2020. @article{Ayala2020a, title = {Effects of blocked vs. interleaved administration mode on saccade preparatory set revealed using pupillometry}, author = {Naila Ayala and Ewa Niechwiej-Szwedo}, doi = {10.1007/s00221-020-05967-9}, year = {2020}, date = {2020-01-01}, journal = {Experimental Brain Research}, pages = {1--11}, publisher = {Springer Berlin Heidelberg}, abstract = {Eye movements have been used extensively to assess information processing and cognitive function. However, significant variability in saccade performance has been observed, which could arise from methodological variations across different studies. For example, prosaccades and antisaccades have been studied using either a blocked or interleaved design, which has a significant influence on error rates and latency. This is problematic as it makes it difficult to compare saccade performance across studies and may limit the ability to use saccades as a behavioural assay to assess neurocognitive function. Thus, the current study examined how administration mode influences saccade-related preparatory activity by employing pupil size as a non-invasive proxy for neural activity related to saccade planning and execution. Saccade performance and pupil dynamics were examined in eleven participants as they completed pro- and antisaccades in blocked and interleaved paradigms. Results showed that administration mode significantly modulated saccade performance and preparatory activity. Reaction times were longer for both pro- and antisaccades in the interleaved condition, compared to the blocked condition (p < 0.05). Prosaccade pupil dilations were larger in the interleaved condition (p < 0.05), while antisaccade pupil dilations did not significantly differ between administration modes.
Additionally, ROC analysis provided preliminary evidence that pupil size can effectively predict saccade directional errors prior to saccade onset. We propose that task-evoked pupil dilations reflect an increase in preparatory activity for prosaccades and the corresponding cognitive demands associated with interleaved administration mode. Overall, the results highlight the importance that administration mode plays in the design of neurocognitive tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Nicolai D Ayasse; Arthur Wingfield Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds Journal Article Frontiers in Psychology, 10 , pp. 1–7, 2020. @article{Ayasse2020b, title = {Anticipatory baseline pupil diameter is sensitive to differences in hearing thresholds}, author = {Nicolai D Ayasse and Arthur Wingfield}, doi = {10.3389/fpsyg.2019.02947}, year = {2020}, date = {2020-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--7}, abstract = {Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual's arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching that of the better hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply BPD as reflecting a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success.
These data, taken with others, suggest that the baseline pupillary response may not reflect a single construct.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Vladislav Ayzenberg; Meghan R Hickey; Stella F Lourenco Pupillometry reveals the physiological underpinnings of the aversion to holes Journal Article PeerJ, 6 , pp. 1–19, 2018. @article{Ayzenberg2018, title = {Pupillometry reveals the physiological underpinnings of the aversion to holes}, author = {Vladislav Ayzenberg and Meghan R Hickey and Stella F Lourenco}, doi = {10.7717/peerj.4185}, year = {2018}, date = {2018-01-01}, journal = {PeerJ}, volume = {6}, pages = {1--19}, abstract = {An unusual, but common, aversion to images with clusters of holes is known as trypophobia. Recent research suggests that trypophobic reactions are caused by visual spectral properties also present in aversive images of evolutionarily threatening animals (e.g., snakes and spiders). However, despite similar spectral properties, it remains unknown whether there is a shared emotional response to holes and threatening animals. Whereas snakes and spiders are known to elicit a fear reaction, associated with the sympathetic nervous system, anecdotal reports from self-described trypophobes suggest reactions more consistent with disgust, which is associated with activation of the parasympathetic nervous system. Here we used pupillometry in a novel attempt to uncover the distinct emotional response associated with a trypophobic response to holes. Across two experiments, images of holes elicited greater constriction compared to images of threatening animals and neutral images. Moreover, this effect held when controlling for level of arousal and accounting for the pupil grating response. This pattern of pupillary response is consistent with involvement of the parasympathetic nervous system and suggests a disgust, not a fear, response to images of holes.
Although general aversion may be rooted in shared visual-spectral properties, we propose that the specific emotion is determined by cognitive appraisal of the distinct image content.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Habiba Azab; Benjamin Y Hayden Correlates of decisional dynamics in the dorsal anterior cingulate cortex Journal Article PLoS Biology, 15 (11), pp. e2003091, 2017. @article{Azab2017, title = {Correlates of decisional dynamics in the dorsal anterior cingulate cortex}, author = {Habiba Azab and Benjamin Y Hayden}, doi = {10.1371/journal.pbio.2003091}, year = {2017}, date = {2017-01-01}, journal = {PLoS Biology}, volume = {15}, number = {11}, pages = {e2003091}, abstract = {We hypothesized that during binary economic choice, decision makers use the first option they attend as a default to which they compare the second. To test this idea, we recorded activity of neurons in the dorsal anterior cingulate cortex (dACC) of macaques choosing between gambles presented asynchronously. We find that ensemble encoding of the value of the first offer includes both choice-dependent and choice-independent aspects, as if reflecting a partial decision. That is, its responses are neither entirely pre- nor post-decisional. In contrast, coding of the value of the second offer is entirely decision dependent (i.e., post-decisional). This result holds even when offer-value encodings are compared within the same time period. Additionally, we see no evidence for 2 pools of neurons linked to the 2 offers; instead, all comparison appears to occur within a single functionally homogenous pool of task-selective neurons. These observations suggest that economic choices reflect a context-dependent evaluation of attended options. Moreover, they raise the possibility that value representations reflect, to some extent, a tentative commitment to a choice.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Habiba Azab; Benjamin Y Hayden Correlates of economic decisions in the dorsal and subgenual anterior cingulate cortices Journal Article European Journal of Neuroscience, 47 (8), pp. 979–993, 2018. @article{Azab2018, title = {Correlates of economic decisions in the dorsal and subgenual anterior cingulate cortices}, author = {Habiba Azab and Benjamin Y Hayden}, doi = {10.1111/ejn.13865}, year = {2018}, date = {2018-01-01}, journal = {European Journal of Neuroscience}, volume = {47}, number = {8}, pages = {979--993}, abstract = {The anterior cingulate cortex can be divided into distinct ventral (subgenual, sgACC) and dorsal (dACC) portions. The role of dACC in value-based decision-making is hotly debated, while the role of sgACC is poorly understood. We recorded neuronal activity in both regions in rhesus macaques performing a token-gambling task. We find that both encode many of the same variables, including integrated offered values of gambles, primary as well as secondary reward outcomes, number of current tokens and anticipated rewards. Both regions exhibit memory traces for offer values and putative value comparison signals. Both regions use a consistent scheme to encode the value of the attended option. This result suggests that neurones do not appear to be specialized for specific offers (that is, neurones use an attentional as opposed to labelled line coding scheme). We also observed some differences between the two regions: (i) coding strengths in dACC were consistently greater than those in sgACC, (ii) neurones in sgACC responded especially to losses and in anticipation of primary rewards, while those in dACC showed more balanced responding and (iii) responses to the first offer were slightly faster in sgACC. These results indicate that sgACC and dACC have some functional overlap in economic choice, and are consistent with the idea, inspired by neuroanatomy, that sgACC may serve as input to dACC.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Habiba Azab; Benjamin Y Hayden Partial integration of the components of value in anterior cingulate cortex Journal Article Behavioral Neuroscience, 134 (4), pp. 296–308, 2020. @article{Azab2020, title = {Partial integration of the components of value in anterior cingulate cortex}, author = {Habiba Azab and Benjamin Y Hayden}, doi = {10.1037/bne0000382}, year = {2020}, date = {2020-01-01}, journal = {Behavioral Neuroscience}, volume = {134}, number = {4}, pages = {296--308}, abstract = {Evaluation often involves integrating multiple determinants of value, such as the different possible outcomes in risky choice. A brain region can be placed either before or after a presumed evaluation stage by measuring how responses of its neurons depend on multiple determinants of value. A brain region could also, in principle, show partial integration, which would indicate that it occupies a middle position between (preevaluative) nonintegration and (postevaluative) full integration. Existing mathematical techniques cannot distinguish full from partial integration and therefore risk misidentifying regional function. Here we use a new Bayesian regression-based approach to analyze responses of neurons in dorsal anterior cingulate cortex (dACC) to risky offers. We find that dACC neurons only partially integrate across outcome dimensions, indicating that dACC cannot be assigned to either a pre- or postevaluative position. Neurons in dACC also show putative signatures of value comparison, thereby demonstrating that comparison does not require complete evaluation before proceeding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Bobby Azarian; Elizabeth G Esser; Matthew S Peterson Watch out! Directional threat-related postures cue attention and the eyes Journal Article Cognition and Emotion, 30 (3), pp. 561–569, 2016. @article{Azarian2016a, title = {Watch out! Directional threat-related postures cue attention and the eyes}, author = {Bobby Azarian and Elizabeth G Esser and Matthew S Peterson}, doi = {10.1080/02699931.2015.1013089}, year = {2016}, date = {2016-01-01}, journal = {Cognition and Emotion}, volume = {30}, number = {3}, pages = {561--569}, publisher = {Taylor & Francis}, abstract = {Previous work indicates that threatening facial expressions with averted eye gaze can act as a signal of imminent danger, enhancing attentional orienting in the gazed-at direction. However, this threat-related gaze-cueing effect is only present in individuals reporting high levels of anxiety. The present study used eye tracking to investigate whether additional directional social cues, such as averted angry and fearful human body postures, not only cue attention, but also the eyes. The data show that although body direction did not predict target location, anxious individuals made faster eye movements when fearful or angry postures were facing towards (congruent condition) rather than away (incongruent condition) from peripheral targets. Our results provide evidence for attentional cueing in response to threat-related directional body postures in those with anxiety. This suggests that for such individuals, attention is guided by threatening social stimuli in ways that can influence and bias eye movement behaviour.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Bobby Azarian; Elizabeth G Esser; Matthew S Peterson Evidence from the eyes: Threatening postures hold attention Journal Article Psychonomic Bulletin & Review, 23 (3), pp. 764–770, 2016. @article{Azarian2016b, title = {Evidence from the eyes: Threatening postures hold attention}, author = {Bobby Azarian and Elizabeth G Esser and Matthew S Peterson}, doi = {10.3758/s13423-015-0942-0}, year = {2016}, date = {2016-01-01}, journal = {Psychonomic Bulletin & Review}, volume = {23}, number = {3}, pages = {764--770}, publisher = {Psychonomic Bulletin & Review}, abstract = {Efficient detection of threat provides obvious survival advantages and has resulted in a fast and accurate threat-detection system. Although beneficial under normal circumstances, this system may become hypersensitive and cause threat-processing abnormalities. Past research has shown that anxious individuals have difficulty disengaging attention from threatening faces, but it is unknown whether other forms of threatening social stimuli also influence attentional orienting. Much like faces, human body postures are salient social stimuli, because they are informative of one's emotional state and next likely action. Additionally, postures can convey such information in situations in which another's facial expression is not easily visible. Here we investigated whether there is a threat-specific effect for high-anxious individuals, by measuring the time that it takes the eyes to leave the attended stimulus, a task-irrelevant body posture. The results showed that relative to nonthreatening postures, threat-related postures hold attention in anxious individuals, providing further evidence of an anxiety-related attentional bias for threatening information. 
This is the first study to demonstrate that attentional disengagement from threatening postures is affected by emotional valence in those reporting anxiety.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Efficient detection of threat provides obvious survival advantages and has resulted in a fast and accurate threat-detection system. Although beneficial under normal circumstances, this system may become hypersensitive and cause threat-processing abnormalities. Past research has shown that anxious individuals have difficulty disengaging attention from threatening faces, but it is unknown whether other forms of threatening social stimuli also influence attentional orienting. Much like faces, human body postures are salient social stimuli, because they are informative of one's emotional state and next likely action. Additionally, postures can convey such information in situations in which another's facial expression is not easily visible. Here we investigated whether there is a threat-specific effect for high-anxious individuals, by measuring the time that it takes the eyes to leave the attended stimulus, a task-irrelevant body posture. The results showed that relative to nonthreatening postures, threat-related postures hold attention in anxious individuals, providing further evidence of an anxiety-related attentional bias for threatening information. This is the first study to demonstrate that attentional disengagement from threatening postures is affected by emotional valence in those reporting anxiety. |
Bobby Azarian; George A Buzzell; Elizabeth G Esser; Alexander Dornstauder; Matthew S Peterson Averted body postures facilitate orienting of the eyes Journal Article Acta Psychologica, 175 , pp. 28–32, 2017. @article{Azarian2017, title = {Averted body postures facilitate orienting of the eyes}, author = {Bobby Azarian and George A Buzzell and Elizabeth G Esser and Alexander Dornstauder and Matthew S Peterson}, doi = {10.1016/j.actpsy.2017.02.006}, year = {2017}, date = {2017-01-01}, journal = {Acta Psychologica}, volume = {175}, pages = {28--32}, publisher = {Elsevier}, abstract = {It is well established that certain social cues, such as averted eye gaze, can automatically initiate the orienting of another's spatial attention. However, whether human posture can also reflexively cue spatial attention remains unclear. The present study directly investigated whether averted neutral postures reflexively cue the attention of observers in a normal population of college students. Similar to classic gaze-cuing paradigms, non-predictive averted posture stimuli were presented prior to the onset of a peripheral target stimulus at one of five SOAs (100 ms–500 ms). Participants were instructed to move their eyes to the target as fast as possible. Eye-tracking data revealed that participants were significantly faster in initiating saccades when the posture direction was congruent with the target stimulus. Since covert attention shifts precede overt shifts in an obligatory fashion, this suggests that directional postures reflexively orient the attention of others. In line with previous work on gaze-cueing, the congruency effect of posture cue was maximal at the 300 ms SOA. 
These results support the notion that a variety of social cues are used by the human visual system in determining the “direction of attention” of others, and also suggest that human body postures are salient stimuli capable of automatically shifting an observer's attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It is well established that certain social cues, such as averted eye gaze, can automatically initiate the orienting of another's spatial attention. However, whether human posture can also reflexively cue spatial attention remains unclear. The present study directly investigated whether averted neutral postures reflexively cue the attention of observers in a normal population of college students. Similar to classic gaze-cuing paradigms, non-predictive averted posture stimuli were presented prior to the onset of a peripheral target stimulus at one of five SOAs (100 ms–500 ms). Participants were instructed to move their eyes to the target as fast as possible. Eye-tracking data revealed that participants were significantly faster in initiating saccades when the posture direction was congruent with the target stimulus. Since covert attention shifts precede overt shifts in an obligatory fashion, this suggests that directional postures reflexively orient the attention of others. In line with previous work on gaze-cueing, the congruency effect of posture cue was maximal at the 300 ms SOA. These results support the notion that a variety of social cues are used by the human visual system in determining the “direction of attention” of others, and also suggest that human body postures are salient stimuli capable of automatically shifting an observer's attention. |
Hamidreza Azemati; Fatemeh Jam; Modjtaba Ghorbani; Matthias Dehmer; Reza Ebrahimpour; Abdolhamid Ghanbaran; Frank Emmert-Streib The role of symmetry in the aesthetics of residential building façades using cognitive science methods Journal Article Symmetry, 12 , pp. 1–15, 2020. @article{Azemati2020, title = {The role of symmetry in the aesthetics of residential building façades using cognitive science methods}, author = {Hamidreza Azemati and Fatemeh Jam and Modjtaba Ghorbani and Matthias Dehmer and Reza Ebrahimpour and Abdolhamid Ghanbaran and Frank Emmert-Streib}, doi = {10.3390/sym12091438}, year = {2020}, date = {2020-01-01}, journal = {Symmetry}, volume = {12}, pages = {1--15}, abstract = {Symmetry is an important visual feature for humans and its application in architecture is completely evident. This paper aims to investigate the role of symmetry in the aesthetics judgment of residential building façades and study the pattern of eye movement based on the expertise of subjects in architecture. In order to implement this in the present paper, we have created images in two categories: symmetrical and asymmetrical façade images. The experiment design allows us to investigate the preference of subjects and their reaction time to decide about presented images as well as record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer's attention, but there is a difference in the points watched and their fixation duration. 
Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Symmetry is an important visual feature for humans and its application in architecture is completely evident. This paper aims to investigate the role of symmetry in the aesthetics judgment of residential building façades and study the pattern of eye movement based on the expertise of subjects in architecture. In order to implement this in the present paper, we have created images in two categories: symmetrical and asymmetrical façade images. The experiment design allows us to investigate the preference of subjects and their reaction time to decide about presented images as well as record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer's attention, but there is a difference in the points watched and their fixation duration. Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups. |
Marzyeh Azimi; Mariann Oemisch; Thilo Womelsdorf Dissociation of nicotinic α7 and α4/β2 sub-receptor agonists for enhancing learning and attentional filtering in nonhuman primates Journal Article Psychopharmacology, 237 (4), pp. 997–1010, 2020. @article{Azimi2020, title = {Dissociation of nicotinic $\alpha$7 and $\alpha$4/$\beta$2 sub-receptor agonists for enhancing learning and attentional filtering in nonhuman primates}, author = {Marzyeh Azimi and Mariann Oemisch and Thilo Womelsdorf}, doi = {10.1007/s00213-019-05430-w}, year = {2020}, date = {2020-01-01}, journal = {Psychopharmacology}, volume = {237}, number = {4}, pages = {997--1010}, publisher = {Psychopharmacology}, abstract = {Rationale: Nicotinic acetylcholine receptors (nAChRs) modulate attention, memory, and higher executive functioning, but it is unclear how nACh sub-receptors mediate different mechanisms supporting these functions. Objectives: We investigated whether selective agonists for the alpha-7 nAChR versus the alpha-4/beta-2 nAChR have unique functional contributions for value learning and attentional filtering of distractors in the nonhuman primate. Methods: Two adult rhesus macaque monkeys performed reversal learning following systemic administration of either the alpha-7 nAChR agonist PHA-543613 or the alpha-4/beta-2 nAChR agonist ABT-089 or a vehicle control. Behavioral analysis quantified performance accuracy, speed of processing, reversal learning speed, the control of distractor interference, perseveration tendencies, and motivation. Results: We found that the alpha-7 nAChR agonist PHA-543613 enhanced the learning speed of feature values but did not modulate how salient distracting information was filtered from ongoing choice processes. In contrast, the selective alpha-4/beta-2 nAChR agonist ABT-089 did not affect learning speed but reduced distractibility. This dissociation was dose-dependent and evident in the absence of systematic changes in overall performance, reward intake, motivation to perform the task, perseveration tendencies, or reaction times. 
Conclusions: These results suggest nicotinic sub-receptor specific mechanisms consistent with (1) alpha-4/beta-2 nAChR specific amplification of cholinergic transients in prefrontal cortex linked to enhanced cue detection in light of interferences, and (2) alpha-7 nAChR specific activation prolonging cholinergic transients, which could facilitate subjects to follow-through with newly established attentional strategies when outcome contingencies change. These insights will be critical for developing function-specific drugs alleviating attention and learning deficits in neuro-psychiatric diseases.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Rationale: Nicotinic acetylcholine receptors (nAChRs) modulate attention, memory, and higher executive functioning, but it is unclear how nACh sub-receptors mediate different mechanisms supporting these functions. Objectives: We investigated whether selective agonists for the alpha-7 nAChR versus the alpha-4/beta-2 nAChR have unique functional contributions for value learning and attentional filtering of distractors in the nonhuman primate. Methods: Two adult rhesus macaque monkeys performed reversal learning following systemic administration of either the alpha-7 nAChR agonist PHA-543613 or the alpha-4/beta-2 nAChR agonist ABT-089 or a vehicle control. Behavioral analysis quantified performance accuracy, speed of processing, reversal learning speed, the control of distractor interference, perseveration tendencies, and motivation. Results: We found that the alpha-7 nAChR agonist PHA-543613 enhanced the learning speed of feature values but did not modulate how salient distracting information was filtered from ongoing choice processes. In contrast, the selective alpha-4/beta-2 nAChR agonist ABT-089 did not affect learning speed but reduced distractibility. 
This dissociation was dose-dependent and evident in the absence of systematic changes in overall performance, reward intake, motivation to perform the task, perseveration tendencies, or reaction times. Conclusions: These results suggest nicotinic sub-receptor specific mechanisms consistent with (1) alpha-4/beta-2 nAChR specific amplification of cholinergic transients in prefrontal cortex linked to enhanced cue detection in light of interferences, and (2) alpha-7 nAChR specific activation prolonging cholinergic transients, which could facilitate subjects to follow-through with newly established attentional strategies when outcome contingencies change. These insights will be critical for developing function-specific drugs alleviating attention and learning deficits in neuro-psychiatric diseases. |
Jasmine R Aziz; Samantha R Good; Raymond M Klein; Gail A Eskes Role of aging and working memory in performance on a naturalistic visual search task Journal Article Cortex, 136 , pp. 28–40, 2021. @article{Aziz2021, title = {Role of aging and working memory in performance on a naturalistic visual search task}, author = {Jasmine R Aziz and Samantha R Good and Raymond M Klein and Gail A Eskes}, doi = {10.1016/j.cortex.2020.12.003}, year = {2021}, date = {2021-12-01}, journal = {Cortex}, volume = {136}, pages = {28--40}, publisher = {Elsevier BV}, abstract = {Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18-35 yrs) and older (n = 48; aged 55-78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. 
Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18-35 yrs) and older (n = 48; aged 55-78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. 
Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan. |
Elham Azizi; Larry Allen Abel; Matthew J Stainer The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes Journal Article Attention, Perception, and Psychophysics, 79 (2), pp. 484–497, 2017. @article{Azizi2017, title = {The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes}, author = {Elham Azizi and Larry Allen Abel and Matthew J Stainer}, doi = {10.3758/s13414-016-1256-7}, year = {2017}, date = {2017-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {79}, number = {2}, pages = {484--497}, publisher = {Attention, Perception, & Psychophysics}, abstract = {Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. 
In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. 
Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes. |
Alexandre Ba; Marwa Shams; Sabine Schmidt; Miguel P Eckstein; Francis R Verdun; François O Bochud Search of low-contrast liver lesions in abdominal CT: the importance of scrolling behavior Journal Article Journal of Medical Imaging, 7 (4), pp. 1–12, 2020. @article{Ba2020, title = {Search of low-contrast liver lesions in abdominal CT: the importance of scrolling behavior}, author = {Alexandre Ba and Marwa Shams and Sabine Schmidt and Miguel P Eckstein and Francis R Verdun and Fran{ç}ois O Bochud}, doi = {10.1117/1.jmi.7.4.045501}, year = {2020}, date = {2020-01-01}, journal = {Journal of Medical Imaging}, volume = {7}, number = {4}, pages = {1--12}, abstract = {Purpose: Visual search using volumetric images is becoming the standard in medical imaging. However, we do not fully understand how eye movement strategies mediate diagnostic performance. A recent study on computed tomography (CT) images showed that the search strategies of radiologists could be classified based on saccade amplitudes and cross-quadrant eye movements [eye movement index (EMI)] into two categories: drillers and scanners. Approach: We investigate how the number of times a radiologist scrolls in a given direction during analysis of the images (number of courses) could add a supplementary variable to use to characterize search strategies. We used a set of 15 normal liver CT images in which we inserted 1 to 5 hypodense metastases of two different signal contrast amplitudes. Twenty radiologists were asked to search for the metastases while their eye-gaze was recorded by an eye-tracker device (EyeLink1000, SR Research Ltd., Mississauga, Ontario, Canada). Results: We found that categorizing radiologists based on the number of courses (rather than EMI) could better predict differences in decision times, percentage of image covered, and search error rates. 
Radiologists with a larger number of courses covered more volume in more time, found more metastases, and made fewer search errors than those with a lower number of courses. Our results suggest that the traditional definition of drillers and scanners could be expanded to include scrolling behavior. Drillers could be defined as scrolling back and forth through the image stack, each time exploring a different area on each image (low EMI and high number of courses). Scanners could be defined as scrolling progressively through the stack of images and focusing on different areas within each image slice (high EMI and low number of courses). Conclusions: Together, our results further enhance the understanding of how radiologists investigate three-dimensional volumes and may improve how to teach effective reading strategies to radiology residents.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Purpose: Visual search using volumetric images is becoming the standard in medical imaging. However, we do not fully understand how eye movement strategies mediate diagnostic performance. A recent study on computed tomography (CT) images showed that the search strategies of radiologists could be classified based on saccade amplitudes and cross-quadrant eye movements [eye movement index (EMI)] into two categories: drillers and scanners. Approach: We investigate how the number of times a radiologist scrolls in a given direction during analysis of the images (number of courses) could add a supplementary variable to use to characterize search strategies. We used a set of 15 normal liver CT images in which we inserted 1 to 5 hypodense metastases of two different signal contrast amplitudes. Twenty radiologists were asked to search for the metastases while their eye-gaze was recorded by an eye-tracker device (EyeLink1000, SR Research Ltd., Mississauga, Ontario, Canada). 
Results: We found that categorizing radiologists based on the number of courses (rather than EMI) could better predict differences in decision times, percentage of image covered, and search error rates. Radiologists with a larger number of courses covered more volume in more time, found more metastases, and made fewer search errors than those with a lower number of courses. Our results suggest that the traditional definition of drillers and scanners could be expanded to include scrolling behavior. Drillers could be defined as scrolling back and forth through the image stack, each time exploring a different area on each image (low EMI and high number of courses). Scanners could be defined as scrolling progressively through the stack of images and focusing on different areas within each image slice (high EMI and low number of courses). Conclusions: Together, our results further enhance the understanding of how radiologists investigate three-dimensional volumes and may improve how to teach effective reading strategies to radiology residents. |
Mariana Babo-Rebelo; Craig G Richter; Catherine Tallon-Baudry Neural responses to heartbeats in the default network encode the self in spontaneous thoughts Journal Article Journal of Neuroscience, 36 (30), pp. 7829–7840, 2016. @article{BaboRebelo2016, title = {Neural responses to heartbeats in the default network encode the self in spontaneous thoughts}, author = {Mariana Babo-Rebelo and Craig G Richter and Catherine Tallon-Baudry}, doi = {10.1523/JNEUROSCI.0262-16.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neuroscience}, volume = {36}, number = {30}, pages = {7829--7840}, abstract = {The default network (DN) has been consistently associated with self-related cognition, but also with bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought ("I"), and on another scale to what degree they were thinking about themselves ("Me"). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the "I" and the "Me" dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. 
Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The default network (DN) has been consistently associated with self-related cognition, but also with bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought ("I"), and on another scale to what degree they were thinking about themselves ("Me"). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the "I" and the "Me" dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. 
Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN. |
Mariana Babo-Rebelo; Anne Buot; Catherine Tallon-Baudry Neural responses to heartbeats distinguish self from other during imagination Journal Article NeuroImage, 191 , pp. 10–20, 2019. @article{BaboRebelo2019, title = {Neural responses to heartbeats distinguish self from other during imagination}, author = {Mariana Babo-Rebelo and Anne Buot and Catherine Tallon-Baudry}, doi = {10.1016/J.NEUROIMAGE.2019.02.012}, year = {2019}, date = {2019-05-01}, journal = {NeuroImage}, volume = {191}, pages = {10--20}, publisher = {Academic Press}, abstract = {Imagination is an internally-generated process, where one can make oneself or other people appear as protagonists of a scene. How does the brain tag the protagonist of an imagined scene as being oneself or someone else? Crucially, during imagination, neither external stimuli nor motor feedback are available to disentangle imagining oneself from imagining someone else. Here, we test the hypothesis that an internal mechanism based on the neural monitoring of heartbeats could distinguish between self and other. 23 participants imagined themselves (from a first-person perspective) or a friend (from a third-person perspective) in various scenarios, while their brain activity was recorded with magnetoencephalography and their cardiac activity was simultaneously monitored. We measured heartbeat-evoked responses, i.e. transients of neural activity occurring in response to each heartbeat, during imagination. The amplitude of heartbeat-evoked responses differed between imagining oneself and imagining a friend, in the precuneus and posterior cingulate regions bilaterally. Effect size was modulated by the daydreaming frequency scores of participants but not by their interoceptive abilities. These results could not be accounted for by other characteristics of imagination (e.g., the ability to adopt the perspective, valence or arousal), nor by cardiac parameters (e.g., heart rate) or arousal levels (e.g. arousal ratings, pupil diameter). 
Heartbeat-evoked responses thus appear as a neural marker distinguishing self from other during imagination.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Imagination is an internally-generated process, where one can make oneself or other people appear as protagonists of a scene. How does the brain tag the protagonist of an imagined scene as being oneself or someone else? Crucially, during imagination, neither external stimuli nor motor feedback are available to disentangle imagining oneself from imagining someone else. Here, we test the hypothesis that an internal mechanism based on the neural monitoring of heartbeats could distinguish between self and other. 23 participants imagined themselves (from a first-person perspective) or a friend (from a third-person perspective) in various scenarios, while their brain activity was recorded with magnetoencephalography and their cardiac activity was simultaneously monitored. We measured heartbeat-evoked responses, i.e. transients of neural activity occurring in response to each heartbeat, during imagination. The amplitude of heartbeat-evoked responses differed between imagining oneself and imagining a friend, in the precuneus and posterior cingulate regions bilaterally. Effect size was modulated by the daydreaming frequency scores of participants but not by their interoceptive abilities. These results could not be accounted for by other characteristics of imagination (e.g., the ability to adopt the perspective, valence or arousal), nor by cardiac parameters (e.g., heart rate) or arousal levels (e.g. arousal ratings, pupil diameter). Heartbeat-evoked responses thus appear as a neural marker distinguishing self from other during imagination. |
Dominik R Bach; Monika Näf; Markus Deutschmann; Shiva K Tyagarajan; Boris B Quednow Threat memory reminder under matrix metalloproteinase 9 inhibitor doxycycline globally reduces subsequent memory plasticity Journal Article Journal of Neuroscience, 39 (47), pp. 9424–9434, 2019. @article{Bach2019, title = {Threat memory reminder under matrix metalloproteinase 9 inhibitor doxycycline globally reduces subsequent memory plasticity}, author = {Dominik R Bach and Monika Näf and Markus Deutschmann and Shiva K Tyagarajan and Boris B Quednow}, doi = {10.1523/jneurosci.1285-19.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {47}, pages = {9424--9434}, abstract = {Associative memory can be rendered malleable by a reminder. Blocking the ensuing re-consolidation process is suggested as a therapeutic target for unwanted aversive memories. Matrix metalloproteinase (MMP)-9 is required for structural synapse remodelling involved in memory consolidation. Inhibiting MMP-9 with doxycycline is suggested to attenuate human threat conditioning. Here, we investigate whether MMP-9 inhibition also interferes with threat memory re-consolidation. N=78 male and female human participants learned the association between two visual conditioned stimuli (CS+) and a 50% chance of an unconditioned nociceptive stimulus (US), and between CS- and the absence of US. On day 7, one CS+ was reminded without reinforcement 3.5 hours after ingesting either 200 mg doxycycline, or placebo. On day 14, retention of CS memory was assessed under extinction, by fear-potentiated startle. Contrary to our expectations, we observed a greater CS+/CS- difference in participants who were reminded under doxycycline, compared to placebo. Participants who were reminded under placebo showed extinction learning during the retention test, which was not observed in the doxycycline group. There was no difference between the reminded and the non-reminded CS+ in either group.
In contrast, during re-learning after the retention test, CS+/CS- difference was more pronounced in the placebo than the doxycycline group. To summarize, a single dose of doxycycline appeared to have no specific impact on re-consolidation, but to globally impair extinction learning, and threat re-learning, after drug clearance.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Associative memory can be rendered malleable by a reminder. Blocking the ensuing re-consolidation process is suggested as a therapeutic target for unwanted aversive memories. Matrix metalloproteinase (MMP)-9 is required for structural synapse remodelling involved in memory consolidation. Inhibiting MMP-9 with doxycycline is suggested to attenuate human threat conditioning. Here, we investigate whether MMP-9 inhibition also interferes with threat memory re-consolidation. N=78 male and female human participants learned the association between two visual conditioned stimuli (CS+) and a 50% chance of an unconditioned nociceptive stimulus (US), and between CS- and the absence of US. On day 7, one CS+ was reminded without reinforcement 3.5 hours after ingesting either 200 mg doxycycline, or placebo. On day 14, retention of CS memory was assessed under extinction, by fear-potentiated startle. Contrary to our expectations, we observed a greater CS+/CS- difference in participants who were reminded under doxycycline, compared to placebo. Participants who were reminded under placebo showed extinction learning during the retention test, which was not observed in the doxycycline group. There was no difference between the reminded and the non-reminded CS+ in either group. In contrast, during re-learning after the retention test, CS+/CS- difference was more pronounced in the placebo than the doxycycline group. 
To summarize, a single dose of doxycycline appeared to have no specific impact on re-consolidation, but to globally impair extinction learning, and threat re-learning, after drug clearance. |
Mareike Bacha-Trams; Elisa Ryyppo; Enrico Glerean; Mikko Sams; Iiro P Jaaskelainen Social perspective-taking shapes brain hemodynamic activity and eye movements during movie viewing Journal Article Social Cognitive and Affective Neuroscience, 15 (2), pp. 175–191, 2020. @article{BachaTrams2020, title = {Social perspective-taking shapes brain hemodynamic activity and eye movements during movie viewing}, author = {Mareike Bacha-Trams and Elisa Ryyppo and Enrico Glerean and Mikko Sams and Iiro P Jaaskelainen}, doi = {10.1093/scan/nsaa033}, year = {2020}, date = {2020-01-01}, journal = {Social Cognitive and Affective Neuroscience}, volume = {15}, number = {2}, pages = {175--191}, abstract = {Putting oneself into the shoes of others is an important aspect of social cognition. We measured brain hemodynamic activity and eye-gaze patterns while participants were viewing a shortened version of the movie 'My Sister's Keeper' from two perspectives: That of a potential organ donor, who violates moral norms by refusing to donate her kidney, and that of a potential organ recipient, who suffers in pain. Inter-subject correlation (ISC) of brain activity was significantly higher during the potential organ donor's perspective in dorsolateral and inferior prefrontal, lateral and inferior occipital, and inferior-anterior temporal areas. In the reverse contrast, stronger ISC was observed in superior temporal, posterior frontal and anterior parietal areas. Eye-gaze analysis showed a higher proportion of fixations on the potential organ recipient during both perspectives.
Taken together, these results suggest that during social perspective-taking different brain areas can be flexibly recruited depending on the nature of the perspective that is taken.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Putting oneself into the shoes of others is an important aspect of social cognition. We measured brain hemodynamic activity and eye-gaze patterns while participants were viewing a shortened version of the movie 'My Sister's Keeper' from two perspectives: That of a potential organ donor, who violates moral norms by refusing to donate her kidney, and that of a potential organ recipient, who suffers in pain. Inter-subject correlation (ISC) of brain activity was significantly higher during the potential organ donor's perspective in dorsolateral and inferior prefrontal, lateral and inferior occipital, and inferior-anterior temporal areas. In the reverse contrast, stronger ISC was observed in superior temporal, posterior frontal and anterior parietal areas. Eye-gaze analysis showed a higher proportion of fixations on the potential organ recipient during both perspectives. Taken together, these results suggest that during social perspective-taking different brain areas can be flexibly recruited depending on the nature of the perspective that is taken. |
Cathleen Bache; Anne Springer; Hannes Noack; Waltraud Stadler; Franziska Kopp; Ulman Lindenberger; Markus Werkle-Bergner 10-month-old infants are sensitive to the time course of perceived actions: Eye-tracking and EEG evidence Journal Article Frontiers in Psychology, 8 , pp. 1–18, 2017. @article{Bache2017, title = {10-month-old infants are sensitive to the time course of perceived actions: Eye-tracking and EEG evidence}, author = {Cathleen Bache and Anne Springer and Hannes Noack and Waltraud Stadler and Franziska Kopp and Ulman Lindenberger and Markus Werkle-Bergner}, doi = {10.3389/fpsyg.2017.01170}, year = {2017}, date = {2017-01-01}, journal = {Frontiers in Psychology}, volume = {8}, pages = {1--18}, abstract = {Research has shown that infants are able to track a moving target efficiently – even if it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet, it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed the video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movement and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action's timing may hence be explained within the internal real-time simulation account of action observation. 
Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within the infants' current motor repertoire.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Research has shown that infants are able to track a moving target efficiently – even if it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet, it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed the video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movement and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action's timing may hence be explained within the internal real-time simulation account of action observation. Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within the infants' current motor repertoire. |
Theda Backen; Stefan Treue; Julio C Martinez-Trujillo Encoding of spatial attention by primate prefrontal cortex neuronal ensembles Journal Article eNeuro, 5 (1), pp. 1–19, 2018. @article{Backen2018, title = {Encoding of spatial attention by primate prefrontal cortex neuronal ensembles}, author = {Theda Backen and Stefan Treue and Julio C Martinez-Trujillo}, doi = {10.1523/eneuro.0372-16.2017}, year = {2018}, date = {2018-01-01}, journal = {eNeuro}, volume = {5}, number = {1}, pages = {1--19}, abstract = {Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared to the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and non-spatial features of visual stimuli than individual neurons.
They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. Significance Statement: Single neurons in the primate lateral prefrontal cortex (LPFC) are known to encode the spatial location of attended stimuli as well as other visual features. Here, we investigate how these single neuron coding properties translate into how ensembles of neurons encode information. Our results show that LPFC neuronal ensembles encode both the allocation of attention and the direction of motion of moving stimuli with higher efficiency than single units. Furthermore, relatively small ensembles reach the same decoding accuracy as the full ensembles. Our findings indicate that information coding by neuronal ensembles within the LPFC depends on complex network properties that cannot be solely estimated from coding properties of individual neurons.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction.
To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared to the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and non-spatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. Significance Statement: Single neurons in the primate lateral prefrontal cortex (LPFC) are known to encode the spatial location of attended stimuli as well as other visual features. Here, we investigate how these single neuron coding properties translate into how ensembles of neurons encode information. Our results show that LPFC neuronal ensembles encode both the allocation of attention and the direction of motion of moving stimuli with higher efficiency than single units. Furthermore, relatively small ensembles reach the same decoding accuracy as the full ensembles. Our findings indicate that information coding by neuronal ensembles within the LPFC depends on complex network properties that cannot be solely estimated from coding properties of individual neurons. |
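The "best ensemble" method in the Backen, Treue & Martinez-Trujillo abstract is a greedy forward selection: at each step, add the unit that most improves the ensemble's cross-validated decoding accuracy. The sketch below illustrates the idea on synthetic data only; the nearest-centroid linear read-out, the trial counts, and the separation of the informative units are all assumptions for illustration, not the authors' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed, not the paper's recordings): trial-by-unit firing
# rates for two attention conditions; the first 5 units carry signal.
n_trials, n_units = 200, 30
y = rng.integers(0, 2, n_trials)              # attended-hemifield label per trial
X = rng.normal(0.0, 1.0, (n_trials, n_units))
X[:, :5] += y[:, None] * 1.0                  # informative units shift with the label

def accuracy(X_sub, y, n_folds=5):
    """Cross-validated accuracy of a nearest-centroid linear read-out."""
    idx = np.arange(len(y))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        mu0 = X_sub[train][y[train] == 0].mean(axis=0)
        mu1 = X_sub[train][y[train] == 1].mean(axis=0)
        d0 = ((X_sub[test] - mu0) ** 2).sum(axis=1)
        d1 = ((X_sub[test] - mu1) ** 2).sum(axis=1)
        correct += ((d1 < d0) == y[test]).sum()
    return correct / len(y)

# Greedy "best ensemble" building: at each step, try every remaining unit
# and keep the one that most improves the ensemble's decoding accuracy.
chosen, remaining = [], list(range(n_units))
while remaining:
    scores = [accuracy(X[:, chosen + [u]], y) for u in remaining]
    best = remaining[int(np.argmax(scores))]
    if chosen and max(scores) <= accuracy(X[:, chosen], y):
        break                                  # no unit improves the ensemble
    chosen.append(best)
    remaining.remove(best)

print("selected units:", chosen, "accuracy:", accuracy(X[:, chosen], y))
```

The contrast with the "best single units" method is that the latter ranks units once by their individual accuracy and adds them in that fixed order, ignoring redundancy between units; the greedy ensemble method accounts for it, which is why it tends to reach the same accuracy with fewer units.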
Stephen P Badham; Claire V Hutchinson Characterising eye movement dysfunction in myalgic encephalomyelitis/ chronic fatigue syndrome Journal Article Graefe's Archive for Clinical and Experimental Ophthalmology, 251 (12), pp. 2769–2776, 2013. @article{Badham2013, title = {Characterising eye movement dysfunction in myalgic encephalomyelitis/ chronic fatigue syndrome}, author = {Stephen P Badham and Claire V Hutchinson}, doi = {10.1007/s00417-013-2431-3}, year = {2013}, date = {2013-01-01}, journal = {Graefe's Archive for Clinical and Experimental Ophthalmology}, volume = {251}, number = {12}, pages = {2769--2776}, abstract = {BACKGROUND: People who suffer from myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) often report that their eye movements are sluggish and that they have difficulties tracking moving objects. However, descriptions of these visual problems are based solely on patients' self-reports of their subjective visual experiences, and there is a distinct lack of empirical evidence to objectively verify their claims. This paper presents the first experimental research to objectively examine eye movements in those suffering from ME/CFS. METHODS: Patients were assessed for ME/CFS symptoms and were compared to age, gender, and education matched controls for their ability to generate saccades and smooth pursuit eye movements. RESULTS: Patients and controls exhibited similar error rates and saccade latencies (response times) on prosaccade and antisaccade tasks. Patients showed relatively intact ability to accurately fixate the target (prosaccades), but were impaired when required to focus accurately in a specific position opposite the target (antisaccades). Patients were most markedly impaired when required to direct their gaze as closely as possible to a smoothly moving target (smooth pursuit). 
CONCLUSIONS: It is hypothesised that the effects of ME/CFS can be overcome briefly for completion of saccades, but that continuous pursuit activity (accurately tracking a moving object), even for a short time period, highlights dysfunctional eye movement behaviour in ME/CFS patients. Future smooth pursuit research may elucidate and improve diagnosis of ME/CFS.}, keywords = {}, pubstate = {published}, tppubtype = {article} } BACKGROUND: People who suffer from myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) often report that their eye movements are sluggish and that they have difficulties tracking moving objects. However, descriptions of these visual problems are based solely on patients' self-reports of their subjective visual experiences, and there is a distinct lack of empirical evidence to objectively verify their claims. This paper presents the first experimental research to objectively examine eye movements in those suffering from ME/CFS. METHODS: Patients were assessed for ME/CFS symptoms and were compared to age, gender, and education matched controls for their ability to generate saccades and smooth pursuit eye movements. RESULTS: Patients and controls exhibited similar error rates and saccade latencies (response times) on prosaccade and antisaccade tasks. Patients showed relatively intact ability to accurately fixate the target (prosaccades), but were impaired when required to focus accurately in a specific position opposite the target (antisaccades). Patients were most markedly impaired when required to direct their gaze as closely as possible to a smoothly moving target (smooth pursuit). CONCLUSIONS: It is hypothesised that the effects of ME/CFS can be overcome briefly for completion of saccades, but that continuous pursuit activity (accurately tracking a moving object), even for a short time period, highlights dysfunctional eye movement behaviour in ME/CFS patients. Future smooth pursuit research may elucidate and improve diagnosis of ME/CFS. |
Jeremy B Badler; Philippe Lefevre; Marcus Missal Causality attribution biases oculomotor responses Journal Article Journal of Neuroscience, 30 (31), pp. 10517–10525, 2010. @article{Badler2010, title = {Causality attribution biases oculomotor responses}, author = {Jeremy B Badler and Philippe Lefevre and Marcus Missal}, doi = {10.1523/JNEUROSCI.1733-10.2010}, year = {2010}, date = {2010-01-01}, journal = {Journal of Neuroscience}, volume = {30}, number = {31}, pages = {10517--10525}, abstract = {When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. 
The results show that causal context has a strong influence on predictive movements.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When viewing one object move after being struck by another, humans perceive that the action of the first object "caused" the motion of the second, not that the two events occurred independently. Although established as a perceptual and linguistic concept, it is not yet known whether the notion of causality exists as a fundamental, preattentional "Gestalt" that can influence predictive motor processes. Therefore, eye movements of human observers were measured while viewing a display in which a launcher impacted a tool to trigger the motion of a second "reaction" target. The reaction target could move either in the direction predicted by transfer of momentum after the collision ("causal") or in a different direction ("noncausal"), with equal probability. Control trials were also performed with identical target motion, either with a 100 ms time delay between the collision and reactive motion, or without the interposed tool. Subjects made significantly more predictive movements (smooth pursuit and saccades) in the causal direction during standard trials, and smooth pursuit latencies were also shorter overall. These trends were reduced or absent in control trials. In addition, pursuit latencies in the noncausal direction were longer during standard trials than during control trials. The results show that causal context has a strong influence on predictive movements. |
Yasaman Bagherzadeh; Daniel Baldauf; Dimitrios Pantazis; Robert Desimone Alpha synchrony and the neurofeedback control of spatial attention Journal Article Neuron, 105 , pp. 1–11, 2020. @article{Bagherzadeh2020, title = {Alpha synchrony and the neurofeedback control of spatial attention}, author = {Yasaman Bagherzadeh and Daniel Baldauf and Dimitrios Pantazis and Robert Desimone}, doi = {10.1016/j.neuron.2019.11.001}, year = {2020}, date = {2020-01-01}, journal = {Neuron}, volume = {105}, pages = {1--11}, publisher = {Elsevier Inc.}, abstract = {Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. 
To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony. |
Brett Bahle; Mark Mills; Michael D Dodd Human classifier: Observers can deduce task solely from eye movements Journal Article Attention, Perception, and Psychophysics, 79 (5), pp. 1415–1425, 2017. @article{Bahle2017, title = {Human classifier: Observers can deduce task solely from eye movements}, author = {Brett Bahle and Mark Mills and Michael D Dodd}, doi = {10.3758/s13414-017-1324-7}, year = {2017}, date = {2017-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {79}, number = {5}, pages = {1415--1425}, abstract = {Computer classifiers have been successful at classifying various tasks using eye movement statistics. However, the question of human classification of task from eye movements has rarely been studied. Across two experiments, we examined whether humans could classify task based solely on the eye movements of other individuals. In Experiment 1, human classifiers were shown one of three sets of eye movements: Fixations, which were displayed as blue circles, with larger circles meaning longer fixation durations; Scanpaths, which were displayed as yellow arrows; and Videos, in which a neon green dot moved around the screen. There was an additional Scene manipulation in which eye movement properties were displayed either on the original scene where the task (Search, Memory, or Rating) was performed or on a black background in which no scene information was available. Experiment 2 used similar methods but only displayed Fixations and Videos with the same Scene manipulation. The results of both experiments showed successful classification of Search. Interestingly, Search was best classified in the absence of the original scene, particularly in the Fixation condition. Memory also was classified above chance with the strongest classification occurring with Videos in the presence of the scene. 
Additional analyses on the pattern of correct responses in these two conditions demonstrated which eye movement properties successful classifiers were using. These findings demonstrate conditions under which humans can extract information from eye movement characteristics in addition to providing insight into the relative success/failure of previous computer classifiers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Computer classifiers have been successful at classifying various tasks using eye movement statistics. However, the question of human classification of task from eye movements has rarely been studied. Across two experiments, we examined whether humans could classify task based solely on the eye movements of other individuals. In Experiment 1, human classifiers were shown one of three sets of eye movements: Fixations, which were displayed as blue circles, with larger circles meaning longer fixation durations; Scanpaths, which were displayed as yellow arrows; and Videos, in which a neon green dot moved around the screen. There was an additional Scene manipulation in which eye movement properties were displayed either on the original scene where the task (Search, Memory, or Rating) was performed or on a black background in which no scene information was available. Experiment 2 used similar methods but only displayed Fixations and Videos with the same Scene manipulation. The results of both experiments showed successful classification of Search. Interestingly, Search was best classified in the absence of the original scene, particularly in the Fixation condition. Memory also was classified above chance with the strongest classification occurring with Videos in the presence of the scene. Additional analyses on the pattern of correct responses in these two conditions demonstrated which eye movement properties successful classifiers were using. 
These findings demonstrate conditions under which humans can extract information from eye movement characteristics in addition to providing insight into the relative success/failure of previous computer classifiers. |
Brett Bahle; Valerie M Beck; Andrew Hollingworth The architecture of interaction between visual working memory and visual attention Journal Article Journal of Experimental Psychology: Human Perception and Performance, 44 (7), pp. 992–1011, 2018. @article{Bahle2018, title = {The architecture of interaction between visual working memory and visual attention}, author = {Brett Bahle and Valerie M Beck and Andrew Hollingworth}, doi = {10.1037/xhp0000509}, year = {2018}, date = {2018-01-01}, journal = {Journal of Experimental Psychology: Human Perception and Performance}, volume = {44}, number = {7}, pages = {992--1011}, abstract = {In five experiments, we examined whether a task-irrelevant item in visual working memory (VWM) interacts with perceptual selection when VWM must also be used to maintain a template representation of a search target. This question is critical to distinguishing between competing theories specifying the architecture of interaction between VWM and attention. The single-item template hypothesis (SIT) posits that only a single item in VWM can be maintained in a state that interacts with attention. Thus, the secondary item should be inert with respect to attentional guidance. The multiple-item template hypothesis (MIT) posits that multiple items can be maintained in a state that interacts with attention; thus, both the target representation and the secondary item should be capable of guiding selection. This question has been addressed previously in attention capture studies, but the results have been ambiguous. Here, we modified these earlier paradigms to optimize sensitivity to capture. 
Capture by a distractor matching the secondary item in VWM was observed consistently across multiple types of search task (abstract arrays and natural scenes), multiple dependent measures (search RT and oculomotor capture), multiple memory dimensions (color and shape), and multiple search stimulus dimensions (color, shape, common objects), providing strong support for the MIT.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In five experiments, we examined whether a task-irrelevant item in visual working memory (VWM) interacts with perceptual selection when VWM must also be used to maintain a template representation of a search target. This question is critical to distinguishing between competing theories specifying the architecture of interaction between VWM and attention. The single-item template hypothesis (SIT) posits that only a single item in VWM can be maintained in a state that interacts with attention. Thus, the secondary item should be inert with respect to attentional guidance. The multiple-item template hypothesis (MIT) posits that multiple items can be maintained in a state that interacts with attention; thus, both the target representation and the secondary item should be capable of guiding selection. This question has been addressed previously in attention capture studies, but the results have been ambiguous. Here, we modified these earlier paradigms to optimize sensitivity to capture. Capture by a distractor matching the secondary item in VWM was observed consistently across multiple types of search task (abstract arrays and natural scenes), multiple dependent measures (search RT and oculomotor capture), multiple memory dimensions (color and shape), and multiple search stimulus dimensions (color, shape, common objects), providing strong support for the MIT. |
Brett Bahle; Andrew Hollingworth Contrasting episodic and template-based guidance during search through natural scenes Journal Article Journal of Experimental Psychology: Human Perception and Performance, 45 (4), pp. 523–536, 2019. @article{Bahle2019, title = {Contrasting episodic and template-based guidance during search through natural scenes}, author = {Brett Bahle and Andrew Hollingworth}, doi = {10.1037/xhp0000624.supp}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Human Perception and Performance}, volume = {45}, number = {4}, pages = {523--536}, abstract = {Visual search through natural scenes can be guided by knowledge of where a target object has been observed previously (episodic guidance) and knowledge of that object's visual properties (template guidance). In the present experiments, we compared the relative contributions of these two sources of guidance. Episodic guidance was implemented in a contextual cuing task: participants searched multiple times through a set of scenes for a target letter that appeared in a consistent location within each scene. Template guidance was implemented by the color match between a critical distractor in each scene and a secondary visual working memory (VWM) load. There were four main findings. First, search time decreased with increasing scene repetition; episodic memory guided search. Second, the critical distractor was fixated more frequently on match compared with mismatch trials, consistent with automatic template guidance. Third, the VWM-match effect persisted in blocks with strong episodic guidance. Finally, VWM-match effects were observed from the first saccade during search, whereas episodic guidance to the target developed only later in the trial. The results support a view of natural search in which template-based mechanisms operate early during search in a manner that is not strongly constrained by scene-based forms of guidance, such as episodic knowledge. 
Public Significance Statement Real-world searches are guided by knowledge of where a target object has been observed previously and knowledge of that object's visual features. The present study investigates the interaction between these two sources of guidance during search. By better understanding how these searches are performed, vital tasks in the real world that rely on similar sources of knowledge (e.g., a baggage screener looking for dangerous items or a radiologist looking for tumors) can be potentially improved.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual search through natural scenes can be guided by knowledge of where a target object has been observed previously (episodic guidance) and knowledge of that object's visual properties (template guidance). In the present experiments, we compared the relative contributions of these two sources of guidance. Episodic guidance was implemented in a contextual cuing task: participants searched multiple times through a set of scenes for a target letter that appeared in a consistent location within each scene. Template guidance was implemented by the color match between a critical distractor in each scene and a secondary visual working memory (VWM) load. There were four main findings. First, search time decreased with increasing scene repetition; episodic memory guided search. Second, the critical distractor was fixated more frequently on match compared with mismatch trials, consistent with automatic template guidance. Third, the VWM-match effect persisted in blocks with strong episodic guidance. Finally, VWM-match effects were observed from the first saccade during search, whereas episodic guidance to the target developed only later in the trial. The results support a view of natural search in which template-based mechanisms operate early during search in a manner that is not strongly constrained by scene-based forms of guidance, such as episodic knowledge. 
Public Significance Statement Real-world searches are guided by knowledge of where a target object has been observed previously and knowledge of that object's visual features. The present study investigates the interaction between these two sources of guidance during search. By better understanding how these searches are performed, vital tasks in the real world that rely on similar sources of knowledge (e.g., a baggage screener looking for dangerous items or a radiologist looking for tumors) can be potentially improved. |
Zahra Bahmani; Mohammad Reza Daliri; Yaser Merrikhi; Kelsey Clark; Behrad Noudoost Working memory enhances cortical representations via spatially specific coordination of spike times Journal Article Neuron, 97 (4), pp. 967–979, 2018. @article{Bahmani2018, title = {Working memory enhances cortical representations via spatially specific coordination of spike times}, author = {Zahra Bahmani and Mohammad Reza Daliri and Yaser Merrikhi and Kelsey Clark and Behrad Noudoost}, doi = {10.1016/j.neuron.2018.01.012}, year = {2018}, date = {2018-01-01}, journal = {Neuron}, volume = {97}, number = {4}, pages = {967--979}, publisher = {Elsevier Inc.}, abstract = {The online maintenance and manipulation of information in working memory (WM) is essential for guiding behavior based on our goals. Understanding how WM alters sensory processing in pursuit of different behavioral objectives is therefore crucial to establish the neural basis of our goal-directed behavior. Here we show that, in the middle temporal (MT) area of rhesus monkeys, the power of the local field potentials in the αβ band (8–25 Hz) increases, reflecting the remembered location and the animal's performance. Moreover, the content of WM determines how coherently MT sites oscillate and how synchronized spikes are relative to these oscillations. These changes in spike timing are not only sufficient to carry sensory and memory information, they can also account for WM-induced sensory enhancement. These results provide a mechanistic-level understanding of how WM alters sensory processing by coordinating the timing of spikes across the neuronal population, enhancing the sensory representation of WM targets.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The online maintenance and manipulation of information in working memory (WM) is essential for guiding behavior based on our goals. 
Understanding how WM alters sensory processing in pursuit of different behavioral objectives is therefore crucial to establish the neural basis of our goal-directed behavior. Here we show that, in the middle temporal (MT) area of rhesus monkeys, the power of the local field potentials in the αβ band (8–25 Hz) increases, reflecting the remembered location and the animal's performance. Moreover, the content of WM determines how coherently MT sites oscillate and how synchronized spikes are relative to these oscillations. These changes in spike timing are not only sufficient to carry sensory and memory information, they can also account for WM-induced sensory enhancement. These results provide a mechanistic-level understanding of how WM alters sensory processing by coordinating the timing of spikes across the neuronal population, enhancing the sensory representation of WM targets. |
Julia Bahnmueller; Stefan Huber; Hans-Christoph Nuerk; Silke M Göbel; Korbinian Moeller Processing multi-digit numbers: a translingual eye-tracking study Journal Article Psychological Research, 80 (3), pp. 422–433, 2016. @article{Bahnmueller2016, title = {Processing multi-digit numbers: a translingual eye-tracking study}, author = {Julia Bahnmueller and Stefan Huber and Hans-Christoph Nuerk and Silke M Göbel and Korbinian Moeller}, doi = {10.1007/s00426-015-0729-y}, year = {2016}, date = {2016-01-01}, journal = {Psychological Research}, volume = {80}, number = {3}, pages = {422--433}, abstract = {The present study aimed at investigating the underlying cognitive processes and language specificities of three-digit number processing. More specifically, it was intended to clarify whether the single digits of three-digit numbers are processed in parallel and/or sequentially and whether processing strategies are influenced by the inversion of number words with respect to the Arabic digits [e.g., 43: dreiundvierzig (“three and forty”)] and/or by differences in reading behavior of the respective first language. Therefore, English- and German-speaking adults had to complete a three-digit number comparison task while their eye-fixation behavior was recorded. Replicating previous results, reliable hundred-decade-compatibility effects (e.g., 742_896: hundred-decade compatible because 7 < 8 and 4 < 9; 362_517: hundred-decade incompatible because 3 < 5 but 6 > 1) for English- as well as hundred-unit-compatibility effects for English- and German-speaking participants were observed, indicating parallel processing strategies. While no indices of partial sequential processing were found for the English-speaking group, about half of the German-speaking participants showed an inverse hundred-decade-compatibility effect accompanied by longer inspection time on the hundred digit indicating additional sequential processes. 
Thereby, the present data revealed that in transition from two- to higher multi-digit numbers, the homogeneity of underlying processing strategies varies between language groups. The regular German orthography (allowing for letter-by-letter reading) and its associated more sequential reading behavior may have promoted sequential processing strategies in multi-digit number processing. Furthermore, these results indicated that the inversion of number words alone is not sufficient to explain all observed language differences in three-digit number processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study aimed at investigating the underlying cognitive processes and language specificities of three-digit number processing. More specifically, it was intended to clarify whether the single digits of three-digit numbers are processed in parallel and/or sequentially and whether processing strategies are influenced by the inversion of number words with respect to the Arabic digits [e.g., 43: dreiundvierzig (“three and forty”)] and/or by differences in reading behavior of the respective first language. Therefore, English- and German-speaking adults had to complete a three-digit number comparison task while their eye-fixation behavior was recorded. Replicating previous results, reliable hundred-decade-compatibility effects (e.g., 742_896: hundred-decade compatible because 7 < 8 and 4 < 9; 362_517: hundred-decade incompatible because 3 < 5 but 6 > 1) for English- as well as hundred-unit-compatibility effects for English- and German-speaking participants were observed, indicating parallel processing strategies. While no indices of partial sequential processing were found for the English-speaking group, about half of the German-speaking participants showed an inverse hundred-decade-compatibility effect accompanied by longer inspection time on the hundred digit indicating additional sequential processes. 
Thereby, the present data revealed that in transition from two- to higher multi-digit numbers, the homogeneity of underlying processing strategies varies between language groups. The regular German orthography (allowing for letter-by-letter reading) and its associated more sequential reading behavior may have promoted sequential processing strategies in multi-digit number processing. Furthermore, these results indicated that the inversion of number words alone is not sufficient to explain all observed language differences in three-digit number processing. |