Rick A Adams; Markus Bauer; Dimitris Pinotsis; Karl J Friston. Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG. NeuroImage, 132, pp. 175–189, 2016. doi:10.1016/j.neuroimage.2016.02.055
This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision – inferred by our behavioural DCM – correlate with the increase in gain in V1, across subjects. This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia.

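A minimal sketch of the "precision as inverse variance" idea this abstract builds on: in a Gaussian belief update, the relative precisions of the prior and the sensory sample determine how far the belief moves toward the sample. This is a generic textbook computation with made-up numbers, not the paper's DCM.

```python
# Illustrative only: precision-weighted Gaussian belief update.
def precision_weighted_update(prior_mean, prior_var, sample, sensory_var):
    """Posterior mean/variance for a Gaussian prior combined with a Gaussian likelihood."""
    prior_prec = 1.0 / prior_var       # precision = inverse variance
    sensory_prec = 1.0 / sensory_var
    post_prec = prior_prec + sensory_prec
    post_mean = (prior_prec * prior_mean + sensory_prec * sample) / post_prec
    return post_mean, 1.0 / post_prec

# Higher sensory precision pulls the belief more strongly toward the sample.
print(precision_weighted_update(0.0, 1.0, 1.0, sensory_var=1.0))   # balanced weighting
print(precision_weighted_update(0.0, 1.0, 1.0, sensory_var=0.1))   # sensory evidence dominates
```
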
Rick A Adams; Daniel Bush; Fanfan Zheng; Sofie S Meyer; Raphael Kaplan; Stelios Orfanos; Tiago Reis Marques; Oliver D Howes; Neil Burgess. Impaired theta phase coupling underlies frontotemporal dysconnectivity in schizophrenia. Brain, 143(3), pp. 1261–1277, 2020. doi:10.1093/brain/awaa035
Frontotemporal dysconnectivity is a key pathology in schizophrenia. The specific nature of this dysconnectivity is unknown, but animal models imply dysfunctional theta phase coupling between hippocampus and medial prefrontal cortex (mPFC). We tested this hypothesis by examining neural dynamics in 18 participants with a schizophrenia diagnosis, both medicated and unmedicated, and 26 age-, sex- and IQ-matched control subjects. All participants completed two tasks known to elicit hippocampal-prefrontal theta coupling: a spatial memory task (during magnetoencephalography) and a memory integration task. In addition, an overlapping group of 33 schizophrenia and 29 control subjects underwent PET to measure the availability of GABAARs expressing the α5 subunit (concentrated on hippocampal somatostatin interneurons). We demonstrate that, during memory recall in the spatial memory task, theta power increases in left medial temporal lobe (mTL) are impaired in schizophrenia, as is theta phase coupling between mPFC and mTL. Importantly, the latter cannot be explained by theta power changes, head movement, antipsychotics, cannabis use, or IQ, and is not found in other frequency bands. Moreover, mPFC-mTL theta coupling correlated strongly with performance in controls, but not in subjects with schizophrenia, who were mildly impaired at the spatial memory task and no better than chance on the memory integration task. Finally, mTL regions showing reduced phase coupling in schizophrenia magnetoencephalography participants overlapped substantially with areas of diminished α5-GABAAR availability in the wider schizophrenia PET sample. These results indicate that mPFC-mTL dysconnectivity in schizophrenia is due to a loss of theta phase coupling, and imply that α5-GABAARs (and the cells that express them) have a role in this process.

Kivilcim Afacan-Seref; Natalie A Steinemann; Annabelle Blangero; Simon P Kelly. Dynamic interplay of value and sensory information in high-speed decision making. Current Biology, 28(5), pp. 795–802, 2018. doi:10.1016/j.cub.2018.01.071
In dynamic environments, split-second sensorimotor decisions must be prioritized according to potential payoffs to maximize overall rewards. The impact of relative value on deliberative perceptual judgments has been examined extensively [1–6], but relatively little is known about value-biasing mechanisms in the common situation where physical evidence is strong but the time to act is severely limited. In prominent decision models, a noisy but statistically stationary representation of sensory evidence is integrated over time to an action-triggering bound, and value biases are implemented by starting the integrator closer to the more valuable bound. Here, we show significant departures from this account for humans making rapid sensory-instructed action choices. Behavior was best explained by a simple model in which the evidence representation—and hence, rate of accumulation—is itself biased by value and is non-stationary, increasing over the short decision time frame. Because the value bias initially dominates, the model uniquely predicts a dynamic "turn-around" effect on low-value cues, where the accumulator first launches toward the incorrect action but is then re-routed to the correct one. This was clearly exhibited in electrophysiological signals reflecting motor preparation and evidence accumulation. Finally, we construct an extended model that implements this dynamic effect through plausible sensory neural response modulations and demonstrate the correspondence between decision signal dynamics simulated from a behavioral fit of that model and the empirical decision signals. Our findings suggest that value and sensory information can exert simultaneous and dynamically countervailing influences on the trajectory of the accumulation-to-bound process, driving rapid, sensory-guided actions.

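A toy simulation of the two accounts contrasted in this abstract: a value bias entering as a shifted starting point versus a value-biased, non-stationary drift that produces an initial excursion toward the wrong action (the "turn-around"). This is an illustrative sketch with arbitrary parameters, not the authors' fitted model.

```python
# Illustrative only: average accumulation-to-bound trajectories under two biasing schemes.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, noise_sd = 0.001, 400, 0.5        # 400 ms decision window

def mean_trajectory(start, drift_fn, n_trials=500):
    """Average noisy accumulator trajectories (decision bound omitted for brevity)."""
    x = np.zeros((n_trials, n_steps))
    x[:, 0] = start
    for t in range(1, n_steps):
        drift = drift_fn(t * dt)
        x[:, t] = x[:, t - 1] + drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(n_trials)
    return x.mean(axis=0)

# (a) conventional account: value shifts the starting point, drift is stationary.
start_bias = mean_trajectory(start=0.2, drift_fn=lambda t: 2.0)

# (b) account described in the abstract: an early value-driven drift toward the
#     (here incorrect) high-value action is overtaken by ramping sensory evidence.
drift_bias = mean_trajectory(start=0.0, drift_fn=lambda t: -1.5 + 12.0 * t)

print(round(start_bias[0], 3), round(start_bias[-1], 3))   # biased launch, steady climb
print(round(drift_bias[125], 3), round(drift_bias[-1], 3)) # negative early, positive late ("turn-around")
```
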
Luis Aguado; Karisa B Parkington; Teresa Dieguez-Risco; José A Hinojosa; Roxane J Itier. Joint modulation of facial expression processing by contextual congruency and task demands. Brain Sciences, 9, pp. 1–20, 2019. doi:10.3390/brainsci9050116
Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.

C J Aine; H J Bockholt; J R Bustillo; J M Cañive; A Caprihan; C Gasparovic; F M Hanlon; J M Houck; R E Jung; J Lauriello; J Liu; A R Mayer; N I Perrone-Bizzozero; S Posse; Julia M Stephen; J A Turner; V P Clark; Vince D Calhoun. Multimodal neuroimaging in schizophrenia: Description and dissemination. Neuroinformatics, 15(4), pp. 343–364, 2017. doi:10.1007/s12021-017-9338-9
In this paper we describe an open-access collection of multimodal neuroimaging data in schizophrenia for release to the community. Data were acquired from approximately 100 patients with schizophrenia and 100 age-matched controls during rest as well as several task activation paradigms targeting a hierarchy of cognitive constructs. Neuroimaging data include structural MRI, functional MRI, diffusion MRI, MR spectroscopic imaging, and magnetoencephalography. For three of the hypothesis-driven projects, task activation paradigms were acquired on subsets of ~200 volunteers which examined a range of sensory and cognitive processes (e.g., auditory sensory gating, auditory/visual multisensory integration, visual transverse patterning). Neuropsychological data were also acquired and genetic material via saliva samples were collected from most of the participants and have been typed for both genome-wide polymorphism data as well as genome-wide methylation data. Some results are also presented from the individual studies as well as from our data-driven multimodal analyses (e.g., multimodal examinations of network structure and network dynamics and multitask fMRI data analysis across projects). All data will be released through the Mind Research Network's collaborative informatics and neuroimaging suite (COINS).

Micah Allen; Darya Frank; Samuel D Schwarzkopf; Francesca Fardo; Joel S Winston; Tobias U Hauser; Geraint Rees. Unexpected arousal modulates the influence of sensory noise on confidence. eLife, 5, pp. 1–17, 2016. doi:10.7554/eLife.18103
Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states.

Máté Aller; Uta Noppeney. To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian causal inference. PLoS Biology, 17(4), e3000210, 2019. doi:10.1371/journal.pbio.3000210
To form a percept of the environment, the brain needs to solve the binding problem—inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference; but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain's uncertainty about the world's causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation, which is crucial for effective interactions with the environment.

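A schematic sketch of the Bayesian causal inference computation referred to in this abstract: reliability-weighted fusion versus segregation of an auditory and a visual location sample, arbitrated by the posterior probability of a common cause. This is a simplified, generic formulation with made-up numbers, not the authors' fitted model.

```python
# Illustrative only: model-averaged auditory location estimate under Bayesian causal inference.
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def bci_auditory_estimate(x_a, x_v, var_a, var_v, var_prior, p_common=0.5):
    prec_a, prec_v, prec_p = 1 / var_a, 1 / var_v, 1 / var_prior
    # Fusion: precision-weighted average of both samples (zero-mean spatial prior).
    fused = (prec_a * x_a + prec_v * x_v) / (prec_a + prec_v + prec_p)
    # Segregation: audition combined with the prior alone.
    segregated = (prec_a * x_a) / (prec_a + prec_p)
    # Posterior probability of a common cause (simplified likelihoods).
    like_c1 = gauss(x_a - x_v, 0.0, var_a + var_v)
    like_c2 = gauss(x_a, 0.0, var_a + var_prior) * gauss(x_v, 0.0, var_v + var_prior)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Model averaging: weight fusion and segregation by the causal posterior.
    return post_c1 * fused + (1 - post_c1) * segregated

print(bci_auditory_estimate(x_a=3.0, x_v=2.5, var_a=4.0, var_v=1.0, var_prior=100.0))   # nearby signals: mostly fused
print(bci_auditory_estimate(x_a=3.0, x_v=-6.0, var_a=4.0, var_v=1.0, var_prior=100.0))  # discrepant signals: mostly segregated
```
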
Roy Amit; Dekel Abeles; Marisa Carrasco; Shlomit Yuval-Greenberg. Oculomotor inhibition reflects temporal expectations. NeuroImage, 184, pp. 279–292, 2019. doi:10.1016/j.neuroimage.2018.09.026
The accurate extraction of signals out of noisy environments is a major challenge of the perceptual system. Forming temporal expectations and continuously matching them with perceptual input can facilitate this process. In humans, temporal expectations are typically assessed using behavioral measures, which provide only retrospective but no real-time estimates during target anticipation, or by using electrophysiological measures, which require extensive preprocessing and are difficult to interpret. Here we show a new correlate of temporal expectations based on oculomotor behavior. Observers performed an orientation-discrimination task on a central grating target, while their gaze position and EEG were monitored. In each trial, a cue preceded the target by a varying interval ("foreperiod"). In separate blocks, the cue was either predictive or non-predictive regarding the timing of the target. Results showed that saccades and blinks were inhibited more prior to an anticipated regular target than a less-anticipated irregular one. This consistent oculomotor inhibition effect enabled a trial-by-trial classification according to interval-regularity. Additionally, in the regular condition the slope of saccade-rate and drift were shallower for longer than shorter foreperiods, indicating their adjustment according to temporal expectations. Comparing the sensitivity of this oculomotor marker with those of other common predictability markers (e.g. alpha-suppression) showed that it is a sensitive marker for cue-related anticipation. In contrast, temporal changes in conditional probabilities (hazard-rate) modulated alpha-suppression more than cue-related anticipation. We conclude that pre-target oculomotor inhibition is a correlate of temporal predictions induced by cue-target associations, whereas alpha-suppression is more sensitive to conditional probabilities across time.

Lucía Amoruso; Agustín Ibáñez; Bruno Fonseca; Sebastián Gadea; Lucas Sedeño; Mariano Sigman; Adolfo M García; Ricardo Fraiman; Daniel Fraiman. Variability in functional brain networks predicts expertise during action observation. NeuroImage, 146, pp. 690–700, 2017. doi:10.1016/j.neuroimage.2016.09.041
Observing an action performed by another individual activates, in the observer, similar circuits as those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners significantly differed in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research.

Jamila Andoh; Reiko Matsushita; Robert J Zatorre. Asymmetric interhemispheric transfer in the auditory network: Evidence from TMS, resting-state fMRI, and diffusion imaging. Journal of Neuroscience, 35(43), pp. 14602–14611, 2015. doi:10.1523/JNEUROSCI.2333-15.2015
Hemispheric asymmetries in human auditory cortical function and structure are still highly debated. Brain stimulation approaches can complement correlational techniques by uncovering causal influences. Previous studies have shown asymmetrical effects of transcranial magnetic stimulation (TMS) on task performance, but it is unclear whether these effects are task-specific or reflect intrinsic network properties. To test how modulation of auditory cortex (AC) influences functional networks and whether this influence is asymmetrical, the present study measured resting-state fMRI connectivity networks in 17 healthy volunteers before and immediately after TMS (continuous theta burst stimulation) to the left or right AC, and the vertex as a control. We also examined the relationship between TMS-induced interhemispheric signal propagation and anatomical properties of callosal auditory fibers as measured with diffusion-weighted MRI. We found that TMS to the right AC, but not the left, resulted in widespread connectivity decreases in auditory- and motor-related networks in the resting state. Individual differences in the degree of change in functional connectivity between auditory cortices after TMS applied over the right AC were negatively related to the volume of callosal auditory fibers. The findings show that TMS-induced network modulation occurs, even in the absence of an explicit task, and that the magnitude of the effect differs across individuals as a function of callosal structure, supporting a role for the corpus callosum in mediating functional asymmetry. The findings support theoretical models emphasizing hemispheric differences in network organization and are of practical significance in showing that brain stimulation studies need to take network-level effects into account.

Ayelet Arazi; Gil Gonen-Yaacovi; Ilan Dinstein. The magnitude of trial-by-trial neural variability is reproducible over time and across tasks in humans. eNeuro, 4(6), ENEURO.0292-17.2017, 2017. doi:10.1523/ENEURO.0292-17.2017
Numerous studies have shown that neural activity in sensory cortices is remarkably variable over time and across trials even when subjects are presented with an identical repeating stimulus or task. This trial-by-trial neural variability is relatively large in the prestimulus period and considerably smaller (quenched) following stimulus presentation. Previous studies have suggested that the magnitude of neural variability affects behavior such that perceptual performance is better on trials and in individuals where variability quenching is larger. To what degree are neural variability magnitudes of individual subjects flexible or static? Here, we used EEG recordings from adult humans to demonstrate that neural variability magnitudes in visual cortex are remarkably consistent across different tasks and recording sessions. While magnitudes of neural variability differed dramatically across individual subjects, they were surprisingly stable across four tasks with different stimuli, temporal structures, and attentional/cognitive demands as well as across experimental sessions separated by one year. These experiments reveal that, in adults, neural variability magnitudes are mostly solidified individual characteristics that change little with task or time, and are likely to predispose individual subjects to exhibit distinct behavioral capabilities.

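A small sketch of the trial-by-trial variability measure described in this abstract: variance across trials at each time point, and its reduction ("quenching") after stimulus onset. It uses synthetic data and is not the authors' analysis pipeline.

```python
# Illustrative only: trial-by-trial variability and a post/pre "quenching" index on fake data.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times, onset = 100, 600, 300                 # samples; "stimulus" at index 300

# Fake single-channel epochs: trial-to-trial spread shrinks after stimulus onset.
pre = rng.normal(0.0, 2.0, size=(n_trials, onset))
post = rng.normal(1.0, 1.2, size=(n_trials, n_times - onset))
epochs = np.concatenate([pre, post], axis=1)

var_across_trials = epochs.var(axis=0)                    # one variance value per time point
quenching = var_across_trials[onset:].mean() / var_across_trials[:onset].mean()
print(f"relative post/pre variability: {quenching:.2f}")  # values below 1 indicate quenching
```
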
Ayelet Arazi; Yaffa Yeshurun; Ilan Dinstein. Neural variability is quenched by attention. Journal of Neuroscience, 39(30), pp. 5975–5985, 2019. doi:10.1523/JNEUROSCI.0355-19.2019
Attention can be subdivided into several components, including alertness and spatial attention. It is believed that the behavioral benefits of attention, such as increased accuracy and faster reaction times, are generated by an increase in neural activity and a decrease in neural variability, which enhance the signal-to-noise ratio of task-relevant neural populations. However, empirical evidence regarding attention-related changes in neural variability in humans is extremely rare. Here we used EEG to demonstrate that trial-by-trial neural variability was reduced by visual cues that modulated alertness and spatial attention. Reductions in neural variability were specific to the visual system and larger in the contralateral hemisphere of the attended visual field. Subjects with higher initial levels of neural variability and larger decreases in variability exhibited greater behavioral benefits from attentional cues. These findings demonstrate that both alertness and spatial attention modulate neural variability and highlight the importance of reducing/quenching neural variability for attaining the behavioral benefits of attention.

Carmel R Auerbach-Asch; Oded Bein; Leon Y Deouell. Face selective neural activity: Comparisons between fixed and free viewing. Brain Topography, 33(3), pp. 336–354, 2020. doi:10.1007/s10548-020-00764-7
Event Related Potentials (ERPs) are widely used to study category-selective EEG responses to visual stimuli, such as the face-selective N170 component. Typically, this is done by flashing stimuli at the point of static gaze fixation. While allowing for good experimental control, these paradigms ignore the dynamic role of eye-movements in natural vision. Fixation-related potentials (FRPs), obtained using simultaneous EEG and eye-tracking, overcome this limitation. Various studies have used FRPs to study processes such as lexical processing, target detection and attention allocation. The goal of this study was to carefully compare face-sensitive activity time-locked to an abrupt stimulus onset at fixation, with that time-locked to a self-generated fixation on a stimulus. Twelve participants participated in three experimental conditions: Free-viewing (FRPs), Cued-viewing (FRPs) and Control (ERPs). We used a multiple regression approach to disentangle overlapping activity components. Our results show that the N170 face-effect is evident for the first fixation on a stimulus, whether it follows a self-generated saccade or stimulus appearance at fixation point. The N170 face-effect has similar topography across viewing conditions, but there were major differences within each stimulus category. We ascribe these differences to an overlap of the fixation-related lambda response and the N170. We tested the plausibility of this account using dipole simulations. Finally, the N170 exhibits category-specific adaptation in free viewing. This study establishes the comparability of the free-viewing N170 face-effect with the classic event-related effect, while highlighting the importance of accounting for eye-movement related effects.

Ryszard Auksztulewicz; Karl J Friston. Attentional enhancement of auditory mismatch responses: A DCM/MEG study. Cerebral Cortex, 25(11), pp. 4273–4283, 2015. doi:10.1093/cercor/bhu323
Despite similar behavioral effects, attention and expectation influence evoked responses differently: Attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While MMN was explained by recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision.

Ryszard Auksztulewicz; Nicholas E Myers; Jan W Schnupp; Anna C Nobre. Rhythmic temporal expectation boosts neural activity by increasing neural gain. Journal of Neuroscience, 39(49), pp. 9806–9817, 2019. doi:10.1523/JNEUROSCI.0925-19.2019
Temporal orienting improves sensory processing, akin to other top-down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top-down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points.

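A sketch of the gain-versus-sharpness distinction this abstract draws: a tuning curve parameterised by a multiplicative gain and a separate sharpness (width) term. Generic Gaussian tuning with illustrative parameter names and values, not the authors' estimation procedure.

```python
# Illustrative only: gain scales the whole curve, sharpness narrows it.
import numpy as np

def tuning_curve(freq_offset, gain, sharpness, baseline=0.0):
    """Response as a function of distance from the preferred frequency (arbitrary units)."""
    width = 1.0 / sharpness                        # sharper tuning = narrower curve
    return baseline + gain * np.exp(-0.5 * (freq_offset / width) ** 2)

offsets = np.linspace(-3, 3, 7)
print(tuning_curve(offsets, gain=1.0, sharpness=1.0))   # reference curve
print(tuning_curve(offsets, gain=1.5, sharpness=1.0))   # gain increase only (uniform scaling)
print(tuning_curve(offsets, gain=1.0, sharpness=2.0))   # sharpened tuning (narrowing)
```
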
Mariana Babo-Rebelo; Craig G Richter; Catherine Tallon-Baudry. Neural responses to heartbeats in the default network encode the self in spontaneous thoughts. Journal of Neuroscience, 36(30), pp. 7829–7840, 2016. doi:10.1523/JNEUROSCI.0262-16.2016
The default network (DN) has been consistently associated with self-related cognition, but also with bodily state monitoring and autonomic regulation. We hypothesized that these two seemingly disparate functional roles of the DN are functionally coupled, in line with theories proposing that selfhood is grounded in the neural monitoring of internal organs, such as the heart. We measured with magnetoencephalography neural responses evoked by heartbeats while human participants freely mind-wandered. When interrupted by a visual stimulus at random intervals, participants scored the self-relatedness of the interrupted thought. They evaluated their involvement as the first-person perspective subject or agent in the thought ("I"), and on another scale to what degree they were thinking about themselves ("Me"). During the interrupted thought, neural responses to heartbeats in two regions of the DN, the ventral precuneus and the ventromedial prefrontal cortex, covaried, respectively, with the "I" and the "Me" dimensions of the self, even at the single-trial level. No covariation between self-relatedness and peripheral autonomic measures (heart rate, heart rate variability, pupil diameter, electrodermal activity, respiration rate, and phase) or alpha power was observed. Our results reveal a direct link between selfhood and neural responses to heartbeats in the DN and thus directly support theories grounding selfhood in the neural monitoring of visceral inputs. More generally, the tight functional coupling between self-related processing and cardiac monitoring observed here implies that, even in the absence of measured changes in peripheral bodily measures, physiological and cognitive functions have to be considered jointly in the DN.

Dominik R Bach; Nicholas Furl; Gareth Barnes; Raymond J Dolan Sustained magnetic responses in temporal cortex reflect instantaneous significance of approaching and receding sounds Journal Article PLoS ONE, 10 (7), pp. e0134060, 2015. @article{Bach2015, title = {Sustained magnetic responses in temporal cortex reflect instantaneous significance of approaching and receding sounds}, author = {Dominik R Bach and Nicholas Furl and Gareth Barnes and Raymond J Dolan}, doi = {10.1371/journal.pone.0134060}, year = {2015}, date = {2015-01-01}, journal = {PLoS ONE}, volume = {10}, number = {7}, pages = {e0134060}, abstract = {Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate in humans a dynamic encoding of perceiving approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources where the signal unfolded in a manner reminiscent of evidence accumulation. This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant, where our results highlight a crucial role of cortical areas.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate in humans a dynamic encoding of perceiving approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources where the signal unfolded in a manner reminiscent of evidence accumulation. 
This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant, where our results highlight a crucial role of cortical areas. |
Cathleen Bache; Anne Springer; Hannes Noack; Waltraud Stadler; Franziska Kopp; Ulman Lindenberger; Markus Werkle-Bergner 10-month-old infants are sensitive to the time course of perceived actions: Eye-tracking and EEG evidence Journal Article Frontiers in Psychology, 8 , pp. 1–18, 2017. @article{Bache2017, title = {10-month-old infants are sensitive to the time course of perceived actions: Eye-tracking and EEG evidence}, author = {Cathleen Bache and Anne Springer and Hannes Noack and Waltraud Stadler and Franziska Kopp and Ulman Lindenberger and Markus Werkle-Bergner}, doi = {10.3389/fpsyg.2017.01170}, year = {2017}, date = {2017-01-01}, journal = {Frontiers in Psychology}, volume = {8}, pages = {1--18}, abstract = {Research has shown that infants are able to track a moving target efficiently – even if it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet, it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed the video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movement and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action's timing may hence be explained within the internal real-time simulation account of action observation. Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within the infants' current motor repertoire.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Research has shown that infants are able to track a moving target efficiently – even if it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet, it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed the video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movement and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action's timing may hence be explained within the internal real-time simulation account of action observation. Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within the infants' current motor repertoire. |
Yasaman Bagherzadeh; Daniel Baldauf; Dimitrios Pantazis; Robert Desimone Alpha synchrony and the neurofeedback control of spatial attention Journal Article Neuron, 105 , pp. 1–11, 2020. @article{Bagherzadeh2020, title = {Alpha synchrony and the neurofeedback control of spatial attention}, author = {Yasaman Bagherzadeh and Daniel Baldauf and Dimitrios Pantazis and Robert Desimone}, doi = {10.1016/j.neuron.2019.11.001}, year = {2020}, date = {2020-01-01}, journal = {Neuron}, volume = {105}, pages = {1--11}, publisher = {Elsevier Inc.}, abstract = {Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Decreases in alpha synchronization are correlated with enhanced attention, whereas alpha increases are correlated with inattention. However, correlation is not causality, and synchronization may be a byproduct of attention rather than a cause. To test for a causal role of alpha synchrony in attention, we used MEG neurofeedback to train subjects to manipulate the ratio of alpha power over the left versus right parietal cortex. We found that a comparable alpha asymmetry developed over the visual cortex. The alpha training led to corresponding asymmetrical changes in visually evoked responses to probes presented in the two hemifields during training. Thus, reduced alpha was associated with enhanced sensory processing. Testing after training showed a persistent bias in attention in the expected directions. The results support the proposal that alpha synchrony plays a causal role in modulating attention and visual processing, and alpha training could be used for testing hypotheses about synchrony. |
Iske Bakker-Marshall; Atsuko Takashima; Jan-Mathijs Schoffelen; Janet G van Hell; Gabriele Janzen; James M McQueen Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation Journal Article Journal of Cognitive Neuroscience, 30 (5), pp. 621–633, 2018. @article{BakkerMarshall2018, title = {Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation}, author = {Iske Bakker-Marshall and Atsuko Takashima and Jan-Mathijs Schoffelen and Janet G van Hell and Gabriele Janzen and James M McQueen}, doi = {10.1162/jocn}, year = {2018}, date = {2018-01-01}, journal = {Journal of Cognitive Neuroscience}, volume = {30}, number = {5}, pages = {621--633}, abstract = {Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronizationmay enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. 
Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG. |
Daniel Baldauf; Robert Desimone Neural mechanisms of object-based attention Journal Article Science, 344 (6182), pp. 424–427, 2014. @article{Baldauf2014, title = {Neural mechanisms of object-based attention}, author = {Daniel Baldauf and Robert Desimone}, doi = {10.1126/science.1247003}, year = {2014}, date = {2014-01-01}, journal = {Science}, volume = {344}, number = {6182}, pages = {424--427}, abstract = {How we attend to objects and their features that cannot be separated by location is not understood. We presented two temporally and spatially overlapping streams of objects, faces versus houses, and used magnetoencephalography and functional magnetic resonance imaging to separate neuronal responses to attended and unattended objects. Attention to faces versus houses enhanced the sensory responses in the fusiform face area (FFA) and parahippocampal place area (PPA), respectively. The increases in sensory responses were accompanied by induced gamma synchrony between the inferior frontal junction, IFJ, and either FFA or PPA, depending on which object was attended. The IFJ appeared to be the driver of the synchrony, as gamma phases were advanced by 20 ms in IFJ compared to FFA or PPA. Thus, the IFJ may direct the flow of visual processing during object-based attention, at least in part through coupled oscillations with specialized areas such as FFA and PPA.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How we attend to objects and their features that cannot be separated by location is not understood. We presented two temporally and spatially overlapping streams of objects, faces versus houses, and used magnetoencephalography and functional magnetic resonance imaging to separate neuronal responses to attended and unattended objects. Attention to faces versus houses enhanced the sensory responses in the fusiform face area (FFA) and parahippocampal place area (PPA), respectively. The increases in sensory responses were accompanied by induced gamma synchrony between the inferior frontal junction, IFJ, and either FFA or PPA, depending on which object was attended. The IFJ appeared to be the driver of the synchrony, as gamma phases were advanced by 20 ms in IFJ compared to FFA or PPA. Thus, the IFJ may direct the flow of visual processing during object-based attention, at least in part through coupled oscillations with specialized areas such as FFA and PPA. |
Snigdha Banerjee; Adam C Snyder; Sophie Molholm; John J Foxe Oscillatory alpha-band mechanisms and the deployment of spatial attention to anticipated auditory and visual target locations: Supramodal or sensory-specific control mechanisms? Journal Article Journal of Neuroscience, 31 (27), pp. 9923–9932, 2011. @article{Banerjee2011, title = {Oscillatory alpha-band mechanisms and the deployment of spatial attention to anticipated auditory and visual target locations: Supramodal or sensory-specific control mechanisms?}, author = {Snigdha Banerjee and Adam C Snyder and Sophie Molholm and John J Foxe}, doi = {10.1523/JNEUROSCI.4660-10.2011}, year = {2011}, date = {2011-01-01}, journal = {Journal of Neuroscience}, volume = {31}, number = {27}, pages = {9923--9932}, abstract = {Oscillatory alpha-band activity (8-15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear whether similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Furthermore, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task and whether it had the same underlying generator configuration as in a purely visuospatial task. If common to both sensory systems, this would provide strong support for "supramodal" attention theory. Alternately, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory spatial task. Crucially, there were clear differences in scalp topographies of this alpha activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. The data support an "interactivity account," whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Oscillatory alpha-band activity (8-15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear whether similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Furthermore, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. 
Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task and whether it had the same underlying generator configuration as in a purely visuospatial task. If common to both sensory systems, this would provide strong support for "supramodal" attention theory. Alternately, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory spatial task. Crucially, there were clear differences in scalp topographies of this alpha activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. The data support an "interactivity account," whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention. |
Snigdha Banerjee; Hans Peter Frey; Sophie Molholm; John J Foxe Interests shape how adolescents pay attention: The interaction of motivation and top-down attentional processes in biasing sensory activations to anticipated events Journal Article European Journal of Neuroscience, 41 (6), pp. 818–834, 2015. @article{Banerjee2015, title = {Interests shape how adolescents pay attention: The interaction of motivation and top-down attentional processes in biasing sensory activations to anticipated events}, author = {Snigdha Banerjee and Hans Peter Frey and Sophie Molholm and John J Foxe}, doi = {10.1111/ejn.12810}, year = {2015}, date = {2015-01-01}, journal = {European Journal of Neuroscience}, volume = {41}, number = {6}, pages = {818--834}, abstract = {The voluntary allocation of attention to environmental inputs is a crucial mechanism of healthy cognitive functioning, and is probably influenced by an observer's level of interest in a stimulus. For example, an individual who is passionate about soccer but bored by botany will obviously be more attentive at a soccer match than an orchid show. The influence of monetary rewards on attention has been examined, but the impact of more common motivating factors (i.e. the level of interest in the materials under observation) remains unclear, especially during development. Here, stimulus sets were designed based on survey measures of the level of interest of adolescent participants in several item classes. High-density electroencephalography was recorded during a cued spatial attention task in which stimuli of high or low interest were presented in separate blocks. The motivational impact on performance of a spatial attention task was assessed, along with event-related potential measures of anticipatory top-down attention. As predicted, performance was improved for the spatial target detection of high interest items. Further, the impact of motivation was observed in parieto-occipital processes associated with anticipatory top-down spatial attention. The anticipatory activity over these regions was also increased for high vs. low interest stimuli, irrespective of the direction of spatial attention. The results also showed stronger anticipatory attentional and motivational modulations over the right vs. left parieto-occipital cortex. These data suggest that motivation enhances top-down attentional processes, and can independently shape activations in sensory regions in anticipation of events. They also suggest that attentional functions across hemispheres may not fully mature until late adolescence.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The voluntary allocation of attention to environmental inputs is a crucial mechanism of healthy cognitive functioning, and is probably influenced by an observer's level of interest in a stimulus. For example, an individual who is passionate about soccer but bored by botany will obviously be more attentive at a soccer match than an orchid show. The influence of monetary rewards on attention has been examined, but the impact of more common motivating factors (i.e. the level of interest in the materials under observation) remains unclear, especially during development. Here, stimulus sets were designed based on survey measures of the level of interest of adolescent participants in several item classes. High-density electroencephalography was recorded during a cued spatial attention task in which stimuli of high or low interest were presented in separate blocks. The motivational impact on performance of a spatial attention task was assessed, along with event-related potential measures of anticipatory top-down attention. 
As predicted, performance was improved for the spatial target detection of high interest items. Further, the impact of motivation was observed in parieto-occipital processes associated with anticipatory top-down spatial attention. The anticipatory activity over these regions was also increased for high vs. low interest stimuli, irrespective of the direction of spatial attention. The results also showed stronger anticipatory attentional and motivational modulations over the right vs. left parieto-occipital cortex. These data suggest that motivation enhances top-down attentional processes, and can independently shape activations in sensory regions in anticipation of events. They also suggest that attentional functions across hemispheres may not fully mature until late adolescence. |
Mareike Bayer; Valentina Rossi; Naomi Vanlessen; Annika Grass; Annekathrin Schacht; Gilles Pourtois Independent effects of motivation and spatial attention in the human visual cortex Journal Article Social Cognitive and Affective Neuroscience, 12 (1), pp. 146–156, 2017. @article{Bayer2017a, title = {Independent effects of motivation and spatial attention in the human visual cortex}, author = {Mareike Bayer and Valentina Rossi and Naomi Vanlessen and Annika Grass and Annekathrin Schacht and Gilles Pourtois}, doi = {10.1093/scan/nsw162}, year = {2017}, date = {2017-01-01}, journal = {Social Cognitive and Affective Neuroscience}, volume = {12}, number = {1}, pages = {146--156}, abstract = {Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex. |
Mareike Bayer; Katja Ruthmann; Annekathrin Schacht The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses Journal Article Social Cognitive and Affective Neuroscience, 12 (9), pp. 1470–1479, 2017. @article{Bayer2017b, title = {The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses}, author = {Mareike Bayer and Katja Ruthmann and Annekathrin Schacht}, doi = {10.1093/scan/nsx075}, year = {2017}, date = {2017-01-01}, journal = {Social Cognitive and Affective Neuroscience}, volume = {12}, number = {9}, pages = {1470--1479}, abstract = {Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants' significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants' significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. |
Brett C Bays; Kristina M Visscher; Christophe C Le Dantec; Aaron R Seitz Alpha-band EEG activity in perceptual learning Journal Article Journal of Vision, 15 (10), pp. 1–12, 2015. @article{Bays2015, title = {Alpha-band EEG activity in perceptual learning}, author = {Brett C Bays and Kristina M Visscher and Christophe C {Le Dantec} and Aaron R Seitz}, doi = {10.1167/15.10.7}, year = {2015}, date = {2015-01-01}, journal = {Journal of Vision}, volume = {15}, number = {10}, pages = {1--12}, abstract = {In studies of perceptual learning (PL), subjects are typically highly trained across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding what sources of brain plasticity underlie these PL-based learning improvements. Here we investigate the hypothesis that PL, among other mechanisms, leads to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation following training. However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and are consistent with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on differential effort of the individual at the time of stimulus processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In studies of perceptual learning (PL), subjects are typically highly trained across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding what sources of brain plasticity underlie these PL-based learning improvements. Here we investigate the hypothesis that PL, among other mechanisms, leads to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation following training. 
However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and are consistent with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on differential effort of the individual at the time of stimulus processing. |
Jeffrey S Bedwell; Christopher C Spencer; Chi C Chan; Pamela D Butler; Pejman Sehatpour; Joseph Schmidt The P1 visual-evoked potential, red light, and transdiagnostic psychiatric symptoms Journal Article Brain Research, 1687 , pp. 144–154, 2018. @article{Bedwell2018, title = {The P1 visual-evoked potential, red light, and transdiagnostic psychiatric symptoms}, author = {Jeffrey S Bedwell and Christopher C Spencer and Chi C Chan and Pamela D Butler and Pejman Sehatpour and Joseph Schmidt}, doi = {10.1016/j.brainres.2018.03.002}, year = {2018}, date = {2018-01-01}, journal = {Brain Research}, volume = {1687}, pages = {144--154}, publisher = {Elsevier B.V.}, abstract = {A reduced P1 visual-evoked potential amplitude has been reported across several psychiatric disorders, including schizophrenia-spectrum, bipolar, and depressive disorders. In addition, a difference in P1 amplitude change to a red background compared to its opponent color, green, has been found in schizophrenia-spectrum samples. The current study examined whether specific psychiatric symptoms that related to these P1 abnormalities in earlier studies would be replicated when using a broad transdiagnostic sample. The final sample consisted of 135 participants: 26 with bipolar disorders, 25 with schizophrenia-spectrum disorders, 19 with unipolar depression, 62 with no current psychiatric disorder, and 3 with disorders in other categories. Low (8%) and high (64%) contrast check arrays were presented on gray, green, and red background conditions during electroencephalogram, while an eye tracker monitored visual fixation on the stimuli. Linear regressions across the entire sample (N = 135) found that greater severity of both clinician-rated and self-reported delusions/magical thinking correlated with a reduced P1 amplitude on the low contrast gray (neutral) background condition. In addition, across the entire sample, higher self-reported constricted affect was associated with a larger decrease in P1 amplitude (averaged across contrast conditions) to the red, compared to green, background. All relationships remained statistically significant after covarying for diagnostic class, suggesting that they are relatively transdiagnostic in nature. These findings indicate that early visual processing abnormalities may be more directly related to specific transdiagnostic symptoms such as delusions and constricted affect rather than specific psychiatric diagnoses or broad symptom factor scales.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A reduced P1 visual-evoked potential amplitude has been reported across several psychiatric disorders, including schizophrenia-spectrum, bipolar, and depressive disorders. In addition, a difference in P1 amplitude change to a red background compared to its opponent color, green, has been found in schizophrenia-spectrum samples. The current study examined whether specific psychiatric symptoms that related to these P1 abnormalities in earlier studies would be replicated when using a broad transdiagnostic sample. The final sample consisted of 135 participants: 26 with bipolar disorders, 25 with schizophrenia-spectrum disorders, 19 with unipolar depression, 62 with no current psychiatric disorder, and 3 with disorders in other categories. Low (8%) and high (64%) contrast check arrays were presented on gray, green, and red background conditions during electroencephalogram, while an eye tracker monitored visual fixation on the stimuli. 
Linear regressions across the entire sample (N = 135) found that greater severity of both clinician-rated and self-reported delusions/magical thinking correlated with a reduced P1 amplitude on the low contrast gray (neutral) background condition. In addition, across the entire sample, higher self-reported constricted affect was associated with a larger decrease in P1 amplitude (averaged across contrast conditions) to the red, compared to green, background. All relationships remained statistically significant after covarying for diagnostic class, suggesting that they are relatively transdiagnostic in nature. These findings indicate that early visual processing abnormalities may be more directly related to specific transdiagnostic symptoms such as delusions and constricted affect rather than specific psychiatric diagnoses or broad symptom factor scales. |
Christian Bellebaum; Klaus-Peter Hoffmann; Irene Daum Post-saccadic updating of visual space in the posterior parietal cortex in humans Journal Article Behavioural Brain Research, 163 (2), pp. 194–203, 2005. @article{Bellebaum2005a, title = {Post-saccadic updating of visual space in the posterior parietal cortex in humans}, author = {Christian Bellebaum and Klaus-Peter Hoffmann and Irene Daum}, doi = {10.1016/j.bbr.2005.05.007}, year = {2005}, date = {2005-01-01}, journal = {Behavioural Brain Research}, volume = {163}, number = {2}, pages = {194--203}, abstract = {Updating of visual space takes place in the posterior parietal cortex to guarantee spatial constancy across eye movements. However, the timing of updating with respect to saccadic eye movements remains a matter of debate. In the present study, event-related potentials (ERPs) were recorded in 15 volunteers during a saccadic double-step task to elucidate the time course of the updating process. In the experimental condition updating of visual space was required, because both saccade targets had already disappeared before the first saccade was executed. A similar task without updating requirements served as control condition. ERP analysis revealed a significantly larger slow positive wave in the retino-spatial dissonance condition compared to the control condition, starting between 150 and 200 ms after first saccade onset. Source analysis showed an asymmetry with respect to the direction of the first saccade. Whereas the source was restricted to the right PPC in trials with leftward first saccades, left and right PPC were involved in rightward trials. The results of the present study suggest that updating of visual space in a saccadic double-step task occurs not earlier than 150 ms after the onset of the first saccade. We conclude that extraretinal information about the first saccade is integrated with motor information about the second saccade in the inter-saccade interval.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Updating of visual space takes place in the posterior parietal cortex to guarantee spatial constancy across eye movements. However, the timing of updating with respect to saccadic eye movements remains a matter of debate. In the present study, event-related potentials (ERPs) were recorded in 15 volunteers during a saccadic double-step task to elucidate the time course of the updating process. In the experimental condition updating of visual space was required, because both saccade targets had already disappeared before the first saccade was executed. A similar task without updating requirements served as control condition. ERP analysis revealed a significantly larger slow positive wave in the retino-spatial dissonance condition compared to the control condition, starting between 150 and 200 ms after first saccade onset. Source analysis showed an asymmetry with respect to the direction of the first saccade. Whereas the source was restricted to the right PPC in trials with leftward first saccades, left and right PPC were involved in rightward trials. The results of the present study suggest that updating of visual space in a saccadic double-step task occurs not earlier than 150 ms after the onset of the first saccade. We conclude that extraretinal information about the first saccade is integrated with motor information about the second saccade in the inter-saccade interval. |
Sonya Bells; Jérémie Lefebvre; Giulia Longoni; Sridar Narayanan; Douglas L Arnold; Eleun Ann Yeh; Donald J Mabbott White matter plasticity and maturation in human cognition Journal Article Glia, 67 (11), pp. 2020–2037, 2019. @article{Bells2019, title = {White matter plasticity and maturation in human cognition}, author = {Sonya Bells and Jérémie Lefebvre and Giulia Longoni and Sridar Narayanan and Douglas L Arnold and Eleun Ann Yeh and Donald J Mabbott}, doi = {10.1002/glia.23661}, year = {2019}, date = {2019-01-01}, journal = {Glia}, volume = {67}, number = {11}, pages = {2020--2037}, publisher = {John Wiley and Sons Inc.}, abstract = {White matter plasticity likely plays a critical role in supporting cognitive development. However, few studies have used the imaging methods specific to white matter tissue structure or experimental designs sensitive to change in white matter necessary to elucidate these relations. Here we briefly review novel imaging approaches that provide more specific information regarding white matter microstructure. Furthermore, we highlight recent studies that provide greater clarity regarding the relations between changes in white matter and cognition maturation in both healthy children and adolescents and those with white matter insult. Finally, we examine the hypothesis that white matter is linked to cognitive function via its impact on neural synchronization. We test this hypothesis in a population of children and adolescents with recurrent demyelinating syndromes. Specifically, we evaluate group differences in white matter microstructure within the optic radiation; and neural phase synchrony in visual cortex during a visual task between 25 patients and 28 typically developing age-matched controls. Children and adolescents with demyelinating syndromes show evidence of myelin and axonal compromise and this compromise predicts reduced phase synchrony during a visual task compared to typically developing controls. We investigate one plausible mechanism at play in this relationship using a computational model of gamma generation in early visual cortical areas. Overall, our findings show a fundamental connection between white matter microstructure and neural synchronization that may be critical for cognitive processing. In the future, longitudinal or interventional studies can build upon our knowledge of these exciting relations between white matter, neural communication, and cognition.}, keywords = {}, pubstate = {published}, tppubtype = {article} } White matter plasticity likely plays a critical role in supporting cognitive development. However, few studies have used the imaging methods specific to white matter tissue structure or experimental designs sensitive to change in white matter necessary to elucidate these relations. Here we briefly review novel imaging approaches that provide more specific information regarding white matter microstructure. Furthermore, we highlight recent studies that provide greater clarity regarding the relations between changes in white matter and cognition maturation in both healthy children and adolescents and those with white matter insult. Finally, we examine the hypothesis that white matter is linked to cognitive function via its impact on neural synchronization. We test this hypothesis in a population of children and adolescents with recurrent demyelinating syndromes. 
Specifically, we evaluate group differences in white matter microstructure within the optic radiation; and neural phase synchrony in visual cortex during a visual task between 25 patients and 28 typically developing age-matched controls. Children and adolescents with demyelinating syndromes show evidence of myelin and axonal compromise and this compromise predicts reduced phase synchrony during a visual task compared to typically developing controls. We investigate one plausible mechanism at play in this relationship using a computational model of gamma generation in early visual cortical areas. Overall, our findings show a fundamental connection between white matter microstructure and neural synchronization that may be critical for cognitive processing. In the future, longitudinal or interventional studies can build upon our knowledge of these exciting relations between white matter, neural communication, and cognition. |
Sonya Bells; Silvia L Isabella; Donald C Brien; Brian C Coe; Douglas P Munoz; Donald J Mabbott; Douglas O Cheyne Mapping neural dynamics underlying saccade preparation and execution and their relation to reaction time and direction errors Journal Article Human Brain Mapping, 41 (7), pp. 1934–1949, 2020. @article{Bells2020, title = {Mapping neural dynamics underlying saccade preparation and execution and their relation to reaction time and direction errors}, author = {Sonya Bells and Silvia L Isabella and Donald C Brien and Brian C Coe and Douglas P Munoz and Donald J Mabbott and Douglas O Cheyne}, doi = {10.1002/hbm.24922}, year = {2020}, date = {2020-01-01}, journal = {Human Brain Mapping}, volume = {41}, number = {7}, pages = {1934--1949}, abstract = {Our ability to control and inhibit automatic behaviors is crucial for negotiating complex environments, all of which require rapid communication between sensory, motor, and cognitive networks. Here, we measured neuromagnetic brain activity to investigate the neural timing of cortical areas needed for inhibitory control, while 14 healthy young adults performed an interleaved prosaccade (look at a peripheral visual stimulus) and antisaccade (look away from stimulus) task. Analysis of how neural activity relates to saccade reaction time (SRT) and occurrence of direction errors (look at stimulus on antisaccade trials) provides insight into inhibitory control. Neuromagnetic source activity was used to extract stimulus-aligned and saccade-aligned activity to examine temporal differences between prosaccade and antisaccade trials in brain regions associated with saccade control. For stimulus-aligned antisaccade trials, a longer SRT was associated with delayed onset of neural activity within the ipsilateral parietal eye field (PEF) and bilateral frontal eye field (FEF). Saccade-aligned activity demonstrated peak activation 10ms before saccade-onset within the contralateral PEF for prosaccade trials and within the bilateral FEF for antisaccade trials. In addition, failure to inhibit prosaccades on anti-saccade trials was associated with increased activity prior to saccade onset within the FEF contralateral to the peripheral stimulus. This work on dynamic activity adds to our knowledge that direction errors were due, at least in part, to a failure to inhibit automatic prosaccades. These findings provide novel evidence in humans regarding the temporal dynamics within oculomotor areas needed for saccade programming and the role frontal brain regions have on top-down inhibitory control.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Our ability to control and inhibit automatic behaviors is crucial for negotiating complex environments, all of which require rapid communication between sensory, motor, and cognitive networks. Here, we measured neuromagnetic brain activity to investigate the neural timing of cortical areas needed for inhibitory control, while 14 healthy young adults performed an interleaved prosaccade (look at a peripheral visual stimulus) and antisaccade (look away from stimulus) task. Analysis of how neural activity relates to saccade reaction time (SRT) and occurrence of direction errors (look at stimulus on antisaccade trials) provides insight into inhibitory control. Neuromagnetic source activity was used to extract stimulus-aligned and saccade-aligned activity to examine temporal differences between prosaccade and antisaccade trials in brain regions associated with saccade control. 
For stimulus-aligned antisaccade trials, a longer SRT was associated with delayed onset of neural activity within the ipsilateral parietal eye field (PEF) and bilateral frontal eye field (FEF). Saccade-aligned activity demonstrated peak activation 10 ms before saccade onset within the contralateral PEF for prosaccade trials and within the bilateral FEF for antisaccade trials. In addition, failure to inhibit prosaccades on anti-saccade trials was associated with increased activity prior to saccade onset within the FEF contralateral to the peripheral stimulus. This work on dynamic activity adds to our knowledge that direction errors were due, at least in part, to a failure to inhibit automatic prosaccades. These findings provide novel evidence in humans regarding the temporal dynamics within oculomotor areas needed for saccade programming and the role frontal brain regions have on top-down inhibitory control. |
Daniel Belyusar; Adam C Snyder; Hans Peter Frey; Mark R Harwood; Josh Wallman; John J Foxe Oscillatory alpha-band suppression mechanisms during the rapid attentional shifts required to perform an anti-saccade task Journal Article NeuroImage, 65 , pp. 395–407, 2013. @article{Belyusar2013, title = {Oscillatory alpha-band suppression mechanisms during the rapid attentional shifts required to perform an anti-saccade task}, author = {Daniel Belyusar and Adam C Snyder and Hans Peter Frey and Mark R Harwood and Josh Wallman and John J Foxe}, doi = {10.1016/j.neuroimage.2012.09.061}, year = {2013}, date = {2013-01-01}, journal = {NeuroImage}, volume = {65}, pages = {395--407}, abstract = {Neuroimaging has demonstrated anatomical overlap between covert and overt attention systems, although behavioral and electrophysiological studies have suggested that the two systems do not rely on entirely identical circuits or mechanisms. In a parallel line of research, topographically-specific modulations of alpha-band power (~8-14 Hz) have been consistently correlated with anticipatory states during tasks requiring covert attention shifts. These tasks, however, typically employ cue-target-interval paradigms where attentional processes are examined across relatively protracted periods of time and not at the rapid timescales implicated during overt attention tasks. The anti-saccade task, where one must first covertly attend for a peripheral target, before executing a rapid overt attention shift (i.e. a saccade) to the opposite side of space, is particularly well-suited for examining the rapid dynamics of overt attentional deployments. Here, we asked whether alpha-band oscillatory mechanisms would also be associated with these very rapid overt shifts, potentially representing a common neural mechanism across overt and covert attention systems. High-density electroencephalography in conjunction with infra-red eye-tracking was recorded while participants engaged in both pro- and anti-saccade task blocks. Alpha power, time-locked to saccade onset, showed three distinct phases of significantly lateralized topographic shifts, all occurring within a period of less than 1 s, closely reflecting the temporal dynamics of anti-saccade performance. Only two such phases were observed during the pro-saccade task. These data point to substantially more rapid temporal dynamics of alpha-band suppressive mechanisms than previously established, and implicate oscillatory alpha-band activity as a common mechanism across both overt and covert attentional deployments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Neuroimaging has demonstrated anatomical overlap between covert and overt attention systems, although behavioral and electrophysiological studies have suggested that the two systems do not rely on entirely identical circuits or mechanisms. In a parallel line of research, topographically-specific modulations of alpha-band power (~8-14 Hz) have been consistently correlated with anticipatory states during tasks requiring covert attention shifts. These tasks, however, typically employ cue-target-interval paradigms where attentional processes are examined across relatively protracted periods of time and not at the rapid timescales implicated during overt attention tasks. The anti-saccade task, where one must first covertly attend for a peripheral target, before executing a rapid overt attention shift (i.e. 
a saccade) to the opposite side of space, is particularly well-suited for examining the rapid dynamics of overt attentional deployments. Here, we asked whether alpha-band oscillatory mechanisms would also be associated with these very rapid overt shifts, potentially representing a common neural mechanism across overt and covert attention systems. High-density electroencephalography in conjunction with infra-red eye-tracking was recorded while participants engaged in both pro- and anti-saccade task blocks. Alpha power, time-locked to saccade onset, showed three distinct phases of significantly lateralized topographic shifts, all occurring within a period of less than 1 s, closely reflecting the temporal dynamics of anti-saccade performance. Only two such phases were observed during the pro-saccade task. These data point to substantially more rapid temporal dynamics of alpha-band suppressive mechanisms than previously established, and implicate oscillatory alpha-band activity as a common mechanism across both overt and covert attentional deployments. |
Lysianne Beynel; Alan Chauvin; Nathalie Guyader; Sylvain Harquel; Thierry Bougerol; Christian Marendaz; David Szekely What saccadic eye movements tell us about TMS-induced neuromodulation of the DLPFC and mood changes: A pilot study in bipolar disorders Journal Article Frontiers in Integrative Neuroscience, 8 , pp. 1–8, 2014. @article{Beynel2014, title = {What saccadic eye movements tell us about TMS-induced neuromodulation of the DLPFC and mood changes: A pilot study in bipolar disorders}, author = {Lysianne Beynel and Alan Chauvin and Nathalie Guyader and Sylvain Harquel and Thierry Bougerol and Christian Marendaz and David Szekely}, doi = {10.3389/fnint.2014.00065}, year = {2014}, date = {2014-01-01}, journal = {Frontiers in Integrative Neuroscience}, volume = {8}, pages = {1--8}, abstract = {The study assumed that the antisaccade (AS) task is a relevant psychophysical tool to assess (i) short-term neuromodulation of the dorsolateral prefrontal cortex (DLPFC) induced by intermittent theta burst stimulation (iTBS); and (ii) mood change occurring during the course of the treatment. Saccadic inhibition is known to strongly involve the DLPFC, whose neuromodulation with iTBS requires less stimulation time and lower stimulation intensity, as well as results in longer aftereffects than the conventional repetitive transcranial magnetic stimulation (rTMS). Active or sham iTBS was applied every day for 3 weeks over the left DLPFC of 12 drug-resistant bipolar depressed patients. To assess the iTBS-induced short-term neuromodulation, the saccadic task was performed just before (S1) and just after (S2) the iTBS session, the first day of each week. Mood was evaluated through Montgomery and Asberg Depression Rating Scale (MADRS) scores and the difference in scores between the beginning and the end of treatment was correlated with AS performance change between these two periods. As expected, only patients from the active group improved their performance from S1 to S2 and mood improvement was significantly correlated with AS performance improvement. In addition, the AS task also discriminated depressive bipolar patients from healthy control subjects. Therefore, the AS task could be a relevant and useful tool for clinicians to assess if the Transcranial magnetic stimulation (TMS)-induced short-term neuromodulation of the DLPFC occurs as well as a “trait vs. state” objective marker of depressive mood disorder.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The study assumed that the antisaccade (AS) task is a relevant psychophysical tool to assess (i) short-term neuromodulation of the dorsolateral prefrontal cortex (DLPFC) induced by intermittent theta burst stimulation (iTBS); and (ii) mood change occurring during the course of the treatment. Saccadic inhibition is known to strongly involve the DLPFC, whose neuromodulation with iTBS requires less stimulation time and lower stimulation intensity, as well as results in longer aftereffects than the conventional repetitive transcranial magnetic stimulation (rTMS). Active or sham iTBS was applied every day for 3 weeks over the left DLPFC of 12 drug-resistant bipolar depressed patients. To assess the iTBS-induced short-term neuromodulation, the saccadic task was performed just before (S1) and just after (S2) the iTBS session, the first day of each week. 
Mood was evaluated through Montgomery and Asberg Depression Rating Scale (MADRS) scores and the difference in scores between the beginning and the end of treatment was correlated with AS performance change between these two periods. As expected, only patients from the active group improved their performance from S1 to S2 and mood improvement was significantly correlated with AS performance improvement. In addition, the AS task also discriminated depressive bipolar patients from healthy control subjects. Therefore, the AS task could be a relevant and useful tool for clinicians to assess if the Transcranial magnetic stimulation (TMS)-induced short-term neuromodulation of the DLPFC occurs as well as a “trait vs. state” objective marker of depressive mood disorder. |
Nicholas S Bland; Jason B Mattingley; Martin V Sale Gamma coherence mediates interhemispheric integration during multiple object tracking Journal Article Journal of Neurophysiology, 123 (5), pp. 1630–1644, 2020. @article{Bland2020, title = {Gamma coherence mediates interhemispheric integration during multiple object tracking}, author = {Nicholas S Bland and Jason B Mattingley and Martin V Sale}, doi = {10.1152/jn.00755.2019}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neurophysiology}, volume = {123}, number = {5}, pages = {1630--1644}, abstract = {Our ability to track the paths of multiple visual objects moving between the hemifields requires effective integration of information between the two cerebral hemispheres. Coherent neural oscillations in the gamma band (35-70 Hz) are hypothesized to drive this information transfer. Here we manipulated the need for interhemispheric integration using a novel multiple object tracking (MOT) task in which stimuli either moved between the two visual hemifields, requiring interhemispheric integration, or moved within separate visual hemifields. We used electroencephalography (EEG) to measure interhemispheric coherence during the task. Human observers (21 women; 20 men) were poorer at tracking objects between versus within hemifields, reflecting a cost of interhemispheric integration. Critically, gamma coherence was greater in trials requiring interhemispheric integration, particularly between sensors over parieto-occipital areas. In approximately half of the participants, the observed cost of integration was associated with a failure of the cerebral hemispheres to become coherent in the gamma band. Moreover, individual differences in this integration cost correlated with endogenous gamma coherence at these same sensors, although with generally opposing relationships for the real and imaginary part of coherence. The real part (capturing synchronization with a near-zero phase lag) benefited between-hemifield tracking; imaginary coherence was detrimental. Finally, instantaneous phase coherence over the tracking period uniquely predicted between-hemifield tracking performance, suggesting that effective integration benefits from sustained interhemispheric synchronization. Our results show that gamma coherence mediates interhemispheric integration during MOT and add to a growing body of work demonstrating that coherence drives communication across cortically distributed neural networks. NEW & NOTEWORTHY Using a multiple object tracking paradigm, we were able to manipulate the need for interhemispheric integration on a per-trial basis, while also having an objective measure of integration efficacy (i.e., tracking performance). We show that tracking performance reflects a cost of integration, which correlates with individual differences in interhemispheric EEG coherence. Gamma coherence appears to uniquely benefit between-hemifield tracking, predicting performance both across participants and across trials.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Our ability to track the paths of multiple visual objects moving between the hemifields requires effective integration of information between the two cerebral hemispheres. Coherent neural oscillations in the gamma band (35-70 Hz) are hypothesized to drive this information transfer. 
Here we manipulated the need for interhemispheric integration using a novel multiple object tracking (MOT) task in which stimuli either moved between the two visual hemifields, requiring interhemispheric integration, or moved within separate visual hemifields. We used electroencephalography (EEG) to measure interhemispheric coherence during the task. Human observers (21 women; 20 men) were poorer at tracking objects between versus within hemifields, reflecting a cost of interhemispheric integration. Critically, gamma coherence was greater in trials requiring interhemispheric integration, particularly between sensors over parieto-occipital areas. In approximately half of the participants, the observed cost of integration was associated with a failure of the cerebral hemispheres to become coherent in the gamma band. Moreover, individual differences in this integration cost correlated with endogenous gamma coherence at these same sensors, although with generally opposing relationships for the real and imaginary part of coherence. The real part (capturing synchronization with a near-zero phase lag) benefited between-hemifield tracking; imaginary coherence was detrimental. Finally, instantaneous phase coherence over the tracking period uniquely predicted between-hemifield tracking performance, suggesting that effective integration benefits from sustained interhemispheric synchronization. Our results show that gamma coherence mediates interhemispheric integration during MOT and add to a growing body of work demonstrating that coherence drives communication across cortically distributed neural networks. NEW & NOTEWORTHY Using a multiple object tracking paradigm, we were able to manipulate the need for interhemispheric integration on a per-trial basis, while also having an objective measure of integration efficacy (i.e., tracking performance). We show that tracking performance reflects a cost of integration, which correlates with individual differences in interhemispheric EEG coherence. Gamma coherence appears to uniquely benefit between-hemifield tracking, predicting performance both across participants and across trials. |
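As a rough illustration of the coherence measures referred to in the entry above, the sketch below estimates the complex coherency between two EEG channels and averages its real and imaginary parts over a 35–70 Hz gamma band. This is a generic sketch on toy data, not the authors' method; the variable names, trial structure, and Welch parameters are assumptions.

```python
# Minimal sketch: real vs. imaginary parts of coherency between two EEG sensors
# (e.g., one over each hemisphere). Hypothetical inputs: x, y are arrays of shape
# (n_trials, n_samples); fs is the sampling rate in Hz.
import numpy as np
from scipy.signal import csd, welch

def complex_coherency(x, y, fs, nperseg=256):
    # Cross- and auto-spectra, averaged across trials before normalization.
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg, axis=-1)
    _, Sxx = welch(x, fs=fs, nperseg=nperseg, axis=-1)
    _, Syy = welch(y, fs=fs, nperseg=nperseg, axis=-1)
    coh = Sxy.mean(0) / np.sqrt(Sxx.mean(0) * Syy.mean(0))   # complex coherency
    return f, coh

rng = np.random.default_rng(0)
fs = 500
x = rng.standard_normal((40, 2 * fs))        # 40 trials, 2 s each (toy data)
y = 0.5 * x + rng.standard_normal(x.shape)   # partially coupled, zero-lag signal

f, coh = complex_coherency(x, y, fs)
gamma = (f >= 35) & (f <= 70)
print("gamma real coherence:", coh.real[gamma].mean())
print("gamma imaginary coherence:", coh.imag[gamma].mean())
```

In this toy example the coupling is instantaneous, so the real part dominates; a consistent time lag between the signals would instead push coherency into the imaginary part, which is the distinction the abstract draws on.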
Annabelle Blangero; Simon P Kelly Neural signature of value-based sensorimotor prioritization in humans Journal Article Journal of Neuroscience, 37 (44), pp. 10725–10737, 2017. @article{Blangero2017, title = {Neural signature of value-based sensorimotor prioritization in humans}, author = {Annabelle Blangero and Simon P Kelly}, doi = {10.1523/JNEUROSCI.1164-17.2017}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neuroscience}, volume = {37}, number = {44}, pages = {10725--10737}, abstract = {In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus-action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750-800 ms before as peripheral "targets" that specified the stimulus-action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus-action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750-800 ms before as peripheral "targets" that specified the stimulus-action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. 
This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization. |
Indu P Bodala; Junhua Li; Nitish V Thakor; Hasan Al-Nashash EEG and eye tracking demonstrate vigilance enhancement with challenge integration Journal Article Frontiers in Human Neuroscience, 10 , pp. 1–12, 2016. @article{Bodala2016, title = {EEG and eye tracking demonstrate vigilance enhancement with challenge integration}, author = {Indu P Bodala and Junhua Li and Nitish V Thakor and Hasan Al-Nashash}, doi = {10.3389/fnhum.2016.00273}, year = {2016}, date = {2016-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {10}, pages = {1--12}, abstract = {Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility to enhance vigilance or sustained attention using ‘challenge integration’, a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression over the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement, along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge-integrated phase (RT mean = 3.65 ± 1.4 secs) compared to the initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 secs). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Maintaining vigilance is possibly the first requirement for surveillance tasks where personnel are faced with monotonous yet intensive monitoring tasks. Decrement in vigilance in such situations could result in dangerous consequences such as accidents, loss of life and system failure. In this paper, we investigate the possibility to enhance vigilance or sustained attention using ‘challenge integration’, a strategy that integrates a primary task with challenging stimuli. A primary surveillance task (identifying an intruder in a simulated factory environment) and a challenge stimulus (periods of rain obscuring the surveillance scene) were employed to test the changes in vigilance levels. The effect of integrating challenging events (resulting from artificially simulated rain) into the task was compared to the initial monotonous phase. EEG and eye tracking data were collected and analyzed for n = 12 subjects. Frontal midline theta power and the frontal theta to parietal alpha power ratio, which are used as measures of engagement and attention allocation, show an increase due to challenge integration (p < 0.05 in each case). Relative delta band power of EEG also shows statistically significant suppression over the frontoparietal and occipital cortices due to challenge integration (p < 0.05). Saccade amplitude, saccade velocity and blink rate obtained from eye tracking data exhibit statistically significant changes during the challenge phase of the experiment (p < 0.05 in each case). From the correlation analysis between the statistically significant measures of eye tracking and EEG, we infer that saccade amplitude and saccade velocity decrease with vigilance decrement, along with frontal midline theta and the frontal theta to parietal alpha ratio. Conversely, blink rate and relative delta power increase with vigilance decrement. However, these measures exhibit a reverse trend when the challenge stimulus appears in the task, suggesting vigilance enhancement. Moreover, the mean reaction time is lower for the challenge-integrated phase (RT mean = 3.65 ± 1.4 secs) compared to the initial monotonous phase without challenge (RT mean = 4.6 ± 2.7 secs). Our work shows that vigilance level, as assessed by the response of these vital signs, is enhanced by challenge integration. |
Louisa Bogaerts; Craig G Richter; Ayelet N Landau; Ram Frost Beta-band activity is a signature of statistical learning Journal Article Journal of Neuroscience, 40 (39), pp. 7523–7530, 2020. @article{Bogaerts2020, title = {Beta-band activity is a signature of statistical learning}, author = {Louisa Bogaerts and Craig G Richter and Ayelet N Landau and Ram Frost}, doi = {10.1523/JNEUROSCI.0771-20.2020}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neuroscience}, volume = {40}, number = {39}, pages = {7523--7530}, abstract = {Through statistical learning (SL), cognitive systems may discover the underlying regularities in the environment. Testing human adults (n = 35, 21 females), we document, in the context of a classical visual SL task, divergent rhythmic EEG activity in the interstimulus delay periods within patterns versus between patterns (i.e., pattern transitions). Our findings reveal increased oscillatory activity in the beta band (∼20 Hz) at triplet transitions that indexes learning: It emerges with increased pattern repetitions; and importantly, it is highly correlated with behavioral learning outcomes. These findings hold the promise of converging on an online measure of learning regularities and provide important theoretical insights regarding the mechanisms of SL and prediction.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Through statistical learning (SL), cognitive systems may discover the underlying regularities in the environment. Testing human adults (n = 35, 21 females), we document, in the context of a classical visual SL task, divergent rhythmic EEG activity in the interstimulus delay periods within patterns versus between patterns (i.e., pattern transitions). Our findings reveal increased oscillatory activity in the beta band (∼20 Hz) at triplet transitions that indexes learning: It emerges with increased pattern repetitions; and importantly, it is highly correlated with behavioral learning outcomes. These findings hold the promise of converging on an online measure of learning regularities and provide important theoretical insights regarding the mechanisms of SL and prediction. |
Adam Borowicz Using a multichannel Wiener filter to remove eye-blink artifacts from EEG data Journal Article Biomedical Signal Processing and Control, 45 , pp. 246–255, 2018. @article{Borowicz2018, title = {Using a multichannel Wiener filter to remove eye-blink artifacts from EEG data}, author = {Adam Borowicz}, doi = {10.1016/j.bspc.2018.05.012}, year = {2018}, date = {2018-01-01}, journal = {Biomedical Signal Processing and Control}, volume = {45}, pages = {246--255}, publisher = {Elsevier Ltd}, abstract = {This paper presents a novel method for removing ocular artifacts from EEG recordings. The proposed approach is based on time-domain linear filtering. Instead of directly estimating the artifact-free signal, we propose to obtain the eye-blink signal first, using a multichannel Wiener filter (MWF) and a small subset of the frontal electrodes, so that extra EOG sensors are unnecessary. Then, the estimate of the eye-blink signal is subtracted from the noisy EEG signal in accordance with principles of regression analysis. We have performed numerical simulations so as to compare our approach to the independent component analysis (ICA) that is commonly used in EEG enhancement. Our experiments show that the MWF-based approach can perform better than the ICA in terms of eye-blink cancellation and signal distortions. Besides that, the proposed approach is conceptually simpler and better suited to real-time applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This paper presents a novel method for removing ocular artifacts from EEG recordings. The proposed approach is based on time-domain linear filtering. Instead of directly estimating the artifact-free signal, we propose to obtain the eye-blink signal first, using a multichannel Wiener filter (MWF) and a small subset of the frontal electrodes, so that extra EOG sensors are unnecessary. Then, the estimate of the eye-blink signal is subtracted from the noisy EEG signal in accordance with principles of regression analysis. We have performed numerical simulations so as to compare our approach to the independent component analysis (ICA) that is commonly used in EEG enhancement. Our experiments show that the MWF-based approach can perform better than the ICA in terms of eye-blink cancellation and signal distortions. Besides that, the proposed approach is conceptually simpler and better suited to real-time applications. |
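The entry above describes estimating the eye-blink signal with a multichannel Wiener filter (MWF) and then subtracting that estimate from the EEG. A minimal, generic MWF sketch of that idea is given below, assuming blink segments have already been marked (for example by thresholding frontal channels); it is not the paper's implementation, and the variable names and toy data are hypothetical.

```python
# Minimal multichannel Wiener filter (MWF) sketch for eye-blink removal, in the
# spirit of the approach described above (not the paper's exact algorithm).
# Assumptions: eeg is (n_channels, n_samples); blink_mask flags samples belonging
# to blink segments (e.g., detected by thresholding frontal channels).
import numpy as np

def mwf_remove_blinks(eeg, blink_mask):
    Ry = np.cov(eeg[:, blink_mask])    # covariance during blinks (brain signal + artifact)
    Rx = np.cov(eeg[:, ~blink_mask])   # covariance of blink-free data (brain signal only)
    Rd = Ry - Rx                       # estimated artifact covariance
    W = np.linalg.solve(Ry, Rd)        # Wiener filter: W = Ry^{-1} (Ry - Rx)
    artifact = W.T @ eeg               # estimated blink contribution in every channel
    return eeg - artifact              # regression-style subtraction of the blink estimate

# Toy example: 8 channels, 10 s at 250 Hz, with a blink-like burst on "frontal" channels.
rng = np.random.default_rng(1)
fs, n_ch = 250, 8
eeg = rng.standard_normal((n_ch, 10 * fs))
blink = np.zeros(10 * fs)
blink[500:600] = np.hanning(100) * 20
eeg[:2] += blink
blink_mask = np.abs(blink) > 1
clean = mwf_remove_blinks(eeg, blink_mask)
print(np.abs(clean[:2, 500:600]).mean(), "<", np.abs(eeg[:2, 500:600]).mean())
```

Note that practical MWF implementations additionally enforce a positive semidefinite artifact covariance and may use rank reduction; the sketch omits these refinements for brevity.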
S E Bosch; Sebastiaan F W Neggers; Stefan Van der Stigchel The role of the frontal eye fields in oculomotor competition: Image-guided TMS enhances contralateral target selection Journal Article Cerebral Cortex, 23 (4), pp. 824–832, 2013. @article{Bosch2013, title = {The role of the frontal eye fields in oculomotor competition: Image-guided TMS enhances contralateral target selection}, author = {S E Bosch and Sebastiaan F W Neggers and Stefan {Van der Stigchel}}, doi = {10.1093/cercor/bhs075}, year = {2013}, date = {2013-01-01}, journal = {Cerebral Cortex}, volume = {23}, number = {4}, pages = {824--832}, abstract = {In order to execute a correct eye movement to a target in a search display, a saccade program toward the target element must be activated, while saccade programs toward distracting elements must be inhibited. The aim of the present study was to elucidate the role of the frontal eye fields (FEFs) in oculomotor competition. Functional magnetic resonance imaging-guided single-pulse transcranial magnetic stimulation (TMS) was administered over either the left FEF, the right FEF, or the vertex (control site) at 3 time intervals after target presentation, while subjects performed an oculomotor capture task. When TMS was applied over the FEF contralateral to the visual field where a target was presented, there was less interference of an ipsilateral distractor compared with FEF stimulation ipsilateral to the target's visual field or TMS over vertex. Furthermore, TMS over the FEFs decreased latencies of saccades to the contralateral visual field, irrespective of whether the saccade was directed to the target or to the distractor. These findings show that single-pulse TMS over the FEFs enhances the selection of a target in the contralateral visual field and decreases saccade latencies to the contralateral visual field.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In order to execute a correct eye movement to a target in a search display, a saccade program toward the target element must be activated, while saccade programs toward distracting elements must be inhibited. The aim of the present study was to elucidate the role of the frontal eye fields (FEFs) in oculomotor competition. Functional magnetic resonance imaging-guided single-pulse transcranial magnetic stimulation (TMS) was administered over either the left FEF, the right FEF, or the vertex (control site) at 3 time intervals after target presentation, while subjects performed an oculomotor capture task. When TMS was applied over the FEF contralateral to the visual field where a target was presented, there was less interference of an ipsilateral distractor compared with FEF stimulation ipsilateral to the target's visual field or TMS over vertex. Furthermore, TMS over the FEFs decreased latencies of saccades to the contralateral visual field, irrespective of whether the saccade was directed to the target or to the distractor. These findings show that single-pulse TMS over the FEFs enhances the selection of a target in the contralateral visual field and decreases saccade latencies to the contralateral visual field. |
Mathieu Bourguignon; Martijn Baart; Efthymia C Kapnoula; Nicola Molinaro Lip-reading enables the brain to synthesize auditory features of unknown silent speech Journal Article Journal of Neuroscience, 40 (5), pp. 1053–1065, 2020. @article{Bourguignon2020, title = {Lip-reading enables the brain to synthesize auditory features of unknown silent speech}, author = {Mathieu Bourguignon and Martijn Baart and Efthymia C Kapnoula and Nicola Molinaro}, doi = {10.1523/JNEUROSCI.1101-19.2019}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neuroscience}, volume = {40}, number = {5}, pages = {1053--1065}, abstract = {Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere, compared with ~20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere, compared with ~20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing. |
Emanuel N van den Broeke; D M Hartgerink; J Butler; Julien Lambert; André Mouraux Central sensitization increases the pupil dilation elicited by mechanical pinprick stimulation Journal Article Journal of Neurophysiology, 53 (9), pp. 1689–1699, 2013. @article{Broeke2013, title = {Central sensitization increases the pupil dilation elicited by mechanical pinprick stimulation}, author = {Emanuel N van den Broeke and D M Hartgerink and J Butler and Julien Lambert and André Mouraux}, doi = {10.1017/CBO9781107415324.004}, year = {2013}, date = {2013-01-01}, journal = {Journal of Neurophysiology}, volume = {53}, number = {9}, pages = {1689--1699}, abstract = {High frequency electrical stimulation (HFS) of skin nociceptors triggers central sensitization (CS), manifested as increased pinprick sensitivity of the skin surrounding the site of HFS. Our aim was to assess the effect of CS on pinprick‐evoked pupil dilation responses (PDRs) and pinprick‐evoked brain potentials (PEPs). We hypothesized that the increase in the positive wave of PEPs following HFS would result from an enhanced pinprick‐evoked phasic response of the locus coeruleus‐noradrenergic system (LC‐NS), indicated by enhanced PDRs. In fourteen healthy volunteers, 64 and 96 mN pinprick stimuli were delivered to the left and right forearms, before and twenty minutes after applying HFS to one of the two forearms. Both PEPs and pinprick‐evoked PDRs were recorded. After HFS, pinprick stimuli were perceived as more intense at the HFS-treated arm compared to baseline and the control site, and this increase was similar for both stimulation intensities. Importantly, the pinprick‐evoked PDR was also increased and the increase was stronger for 64 as compared to 96 mN stimulation. This is in line with our previous results showing a stronger increase of the PEP positivity at 64 vs. 96 mN stimulation and suggests that the increase in PEP positivity observed in previous studies could relate, at least in part, to enhanced LC‐NS activity. However, there was no increase of the PEP positivity in the present study, indicating that enhanced LC‐NS activity is not the only determinant of the HFS‐induced enhancement of PEPs. Altogether, our results indicate that PDRs are more sensitive for detecting CS than PEPs.}, keywords = {}, pubstate = {published}, tppubtype = {article} } High frequency electrical stimulation (HFS) of skin nociceptors triggers central sensitization (CS), manifested as increased pinprick sensitivity of the skin surrounding the site of HFS. Our aim was to assess the effect of CS on pinprick‐evoked pupil dilation responses (PDRs) and pinprick‐evoked brain potentials (PEPs). We hypothesized that the increase in the positive wave of PEPs following HFS would result from an enhanced pinprick‐evoked phasic response of the locus coeruleus‐noradrenergic system (LC‐NS), indicated by enhanced PDRs. In fourteen healthy volunteers, 64 and 96 mN pinprick stimuli were delivered to the left and right forearms, before and twenty minutes after applying HFS to one of the two forearms. Both PEPs and pinprick‐evoked PDRs were recorded. After HFS, pinprick stimuli were perceived as more intense at the HFS-treated arm compared to baseline and the control site, and this increase was similar for both stimulation intensities. Importantly, the pinprick‐evoked PDR was also increased and the increase was stronger for 64 as compared to 96 mN stimulation. This is in line with our previous results showing a stronger increase of the PEP positivity at 64 vs. 96 mN stimulation and suggests that the increase in PEP positivity observed in previous studies could relate, at least in part, to enhanced LC‐NS activity. However, there was no increase of the PEP positivity in the present study, indicating that enhanced LC‐NS activity is not the only determinant of the HFS‐induced enhancement of PEPs. Altogether, our results indicate that PDRs are more sensitive for detecting CS than PEPs. |
Méadhbh B Brosnan; Kristina Sabaroedin; Tim Silk; Sila Genc; Daniel P Newman; Gerard M Loughnane; Alex Fornito; Redmond G O'Connell; Mark A Bellgrove Evidence accumulation during perceptual decisions in humans varies as a function of dorsal frontoparietal organization Journal Article Nature Human Behaviour, 4 (8), pp. 844–855, 2020. @article{Brosnan2020, title = {Evidence accumulation during perceptual decisions in humans varies as a function of dorsal frontoparietal organization}, author = {Méadhbh B Brosnan and Kristina Sabaroedin and Tim Silk and Sila Genc and Daniel P Newman and Gerard M Loughnane and Alex Fornito and Redmond G O'Connell and Mark A Bellgrove}, doi = {10.1038/s41562-020-0863-4}, year = {2020}, date = {2020-01-01}, journal = {Nature Human Behaviour}, volume = {4}, number = {8}, pages = {844--855}, publisher = {Springer US}, abstract = {Animal neurophysiological studies have identified neural signals within dorsal frontoparietal areas that trace a perceptual decision by accumulating sensory evidence over time and trigger action upon reaching a threshold. Although analogous accumulation-to-bound signals are identifiable on extracranial human electroencephalography, their cortical origins remain unknown. Here neural metrics of human evidence accumulation, predictive of the speed of perceptual reports, were isolated using electroencephalography and related to dorsal frontoparietal network (dFPN) connectivity using diffusion and resting-state functional magnetic resonance imaging. The build-up rate of evidence accumulation mediated the relationship between the white matter macrostructure of dFPN pathways and the efficiency of perceptual reports. This association between steeper build-up rates of evidence accumulation and the dFPN was recapitulated in the resting-state networks. Stronger connectivity between dFPN regions is thus associated with faster evidence accumulation and speeded perceptual decisions. Our findings identify an integrated network for perceptual decisions that may be targeted for neurorehabilitation in cognitive disorders.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Animal neurophysiological studies have identified neural signals within dorsal frontoparietal areas that trace a perceptual decision by accumulating sensory evidence over time and trigger action upon reaching a threshold. Although analogous accumulation-to-bound signals are identifiable on extracranial human electroencephalography, their cortical origins remain unknown. Here neural metrics of human evidence accumulation, predictive of the speed of perceptual reports, were isolated using electroencephalography and related to dorsal frontoparietal network (dFPN) connectivity using diffusion and resting-state functional magnetic resonance imaging. The build-up rate of evidence accumulation mediated the relationship between the white matter macrostructure of dFPN pathways and the efficiency of perceptual reports. This association between steeper build-up rates of evidence accumulation and the dFPN was recapitulated in the resting-state networks. Stronger connectivity between dFPN regions is thus associated with faster evidence accumulation and speeded perceptual decisions. Our findings identify an integrated network for perceptual decisions that may be targeted for neurorehabilitation in cognitive disorders. |
Harriet R Brown; Karl J Friston The functional anatomy of attention: A DCM study Journal Article Frontiers in Human Neuroscience, 7 (December), 2013. @article{Brown2013, title = {The functional anatomy of attention: A DCM study}, author = {Harriet R Brown and Karl J Friston}, doi = {10.3389/fnhum.2013.00784}, year = {2013}, date = {2013-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {7}, number = {December}, abstract = {Recent formulations of attention—in terms of predictive coding—associate attentional gain with the expected precision of sensory information. Formal models of the Posner paradigm suggest that validity effects can be explained in a principled (Bayes optimal) fashion in terms of a cue-dependent setting of precision or gain on the sensory channels reporting anticipated target locations, which is updated selectively by invalid targets. This normative model is equipped with a biologically plausible process theory in the form of predictive coding, where precision is encoded by the gain of superficial pyramidal cells reporting prediction error. We used dynamic causal modeling to assess the evidence in magnetoencephalographic responses for cue-dependent and top-down updating of superficial pyramidal cell gain. Bayesian model comparison suggested that it is almost certain that differences in superficial pyramidal cells gain—and its top-down modulation—contribute to observed responses; and we could be more than 80% certain that anticipatory effects on post-synaptic gain are limited to visual (extrastriate) sources. These empirical results speak to the role of attention in optimizing perceptual inference and its formulation in terms of predictive coding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent formulations of attention—in terms of predictive coding—associate attentional gain with the expected precision of sensory information. Formal models of the Posner paradigm suggest that validity effects can be explained in a principled (Bayes optimal) fashion in terms of a cue-dependent setting of precision or gain on the sensory channels reporting anticipated target locations, which is updated selectively by invalid targets. This normative model is equipped with a biologically plausible process theory in the form of predictive coding, where precision is encoded by the gain of superficial pyramidal cells reporting prediction error. We used dynamic causal modeling to assess the evidence in magnetoencephalographic responses for cue-dependent and top-down updating of superficial pyramidal cell gain. Bayesian model comparison suggested that it is almost certain that differences in superficial pyramidal cells gain—and its top-down modulation—contribute to observed responses; and we could be more than 80% certain that anticipatory effects on post-synaptic gain are limited to visual (extrastriate) sources. These empirical results speak to the role of attention in optimizing perceptual inference and its formulation in terms of predictive coding. |
Maximilian Bruchmann; Sebastian Schindler; Thomas Straube The spatial frequency spectrum of fearful faces modulates early and mid-latency ERPs but not the N170 Journal Article Psychophysiology, 57 , pp. 1–13, 2020. @article{Bruchmann2020, title = {The spatial frequency spectrum of fearful faces modulates early and mid-latency ERPs but not the N170}, author = {Maximilian Bruchmann and Sebastian Schindler and Thomas Straube}, doi = {10.1111/psyp.13597}, year = {2020}, date = {2020-01-01}, journal = {Psychophysiology}, volume = {57}, pages = {1--13}, abstract = {Prioritized processing of fearful compared to neutral faces is reflected in behavioral advantages such as lower detection thresholds, but also in enhanced early and late event-related potentials (ERPs). Behavioral advantages have recently been associated with the spatial frequency spectrum of fearful faces, better fitting the human contrast sensitivity function than the spectrum of neutral faces. However, it is unclear whether and to which extent early and late ERP differences are due to low-level spatial frequency spectrum information or high-level representations of the facial expression. In this pre-registered EEG study (N = 38), the effects of fearful-specific spatial frequencies on event-related ERPs were investigated by presenting faces with fearful and neutral expressions whose spatial frequency spectra were manipulated so as to contain either the average power spectra of neutral, fearful, or both expressions combined. We found an enlarged N170 to fearful versus neutral faces, not interacting with spatial frequency. Interactions of emotional expression and spatial frequencies were observed for the P1 and Early Posterior Negativity (EPN). For both components, larger emotion differences were observed when the spectrum contained neutral as opposed to fearful frequencies. Importantly, for the EPN, fearful and neutral expressions did not differ anymore when inserting fearful frequencies into neutral expressions, whereas typical emotion differences were found when faces contained average or neutral frequencies. Our findings show that N170 emotional modulations are unaffected by expression-specific spatial frequencies. However, expression-specific spatial frequencies alter early and mid-latency ERPs. Most notably, the EPN to neutral expressions is boosted by adding fearful spectra—but not vice versa.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Prioritized processing of fearful compared to neutral faces is reflected in behavioral advantages such as lower detection thresholds, but also in enhanced early and late event-related potentials (ERPs). Behavioral advantages have recently been associated with the spatial frequency spectrum of fearful faces, better fitting the human contrast sensitivity function than the spectrum of neutral faces. However, it is unclear whether and to which extent early and late ERP differences are due to low-level spatial frequency spectrum information or high-level representations of the facial expression. In this pre-registered EEG study (N = 38), the effects of fearful-specific spatial frequencies on event-related ERPs were investigated by presenting faces with fearful and neutral expressions whose spatial frequency spectra were manipulated so as to contain either the average power spectra of neutral, fearful, or both expressions combined. We found an enlarged N170 to fearful versus neutral faces, not interacting with spatial frequency. 
Interactions of emotional expression and spatial frequencies were observed for the P1 and Early Posterior Negativity (EPN). For both components, larger emotion differences were observed when the spectrum contained neutral as opposed to fearful frequencies. Importantly, for the EPN, fearful and neutral expressions did not differ anymore when inserting fearful frequencies into neutral expressions, whereas typical emotion differences were found when faces contained average or neutral frequencies. Our findings show that N170 emotional modulations are unaffected by expression-specific spatial frequencies. However, expression-specific spatial frequencies alter early and mid-latency ERPs. Most notably, the EPN to neutral expressions is boosted by adding fearful spectra—but not vice versa. |
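The spatial-frequency manipulation described in the entry above amounts to giving a face image the average amplitude (power) spectrum of another expression category while preserving its own phase spectrum. The sketch below shows one plausible way to do this with a 2-D FFT; it is illustrative only, uses random arrays in place of the calibrated face stimuli, and is not the authors' stimulus-generation code.

```python
# Minimal sketch of the spatial-frequency manipulation described above: impose the
# average amplitude spectrum of one stimulus set onto an image while keeping that
# image's own phase spectrum (illustrative; not the authors' code).
import numpy as np

def impose_amplitude_spectrum(image, target_amplitude):
    spectrum = np.fft.fft2(image)
    phase = np.angle(spectrum)
    hybrid = target_amplitude * np.exp(1j * phase)   # target amplitude, original phase
    return np.real(np.fft.ifft2(hybrid))

# Toy example with random "images"; real stimuli would be luminance-matched face photos.
rng = np.random.default_rng(3)
neutral_set = rng.random((10, 128, 128))
fearful_set = rng.random((10, 128, 128))

# Average amplitude spectrum of the fearful set
fearful_amp = np.abs(np.fft.fft2(fearful_set, axes=(-2, -1))).mean(axis=0)

# A "neutral" face carrying the fearful set's average spatial-frequency content
neutral_with_fearful_spectrum = impose_amplitude_spectrum(neutral_set[0], fearful_amp)
print(neutral_with_fearful_spectrum.shape)
```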
Antimo Buonocore; Olaf Dimigen; David Melcher Post-saccadic face processing is modulated by pre-saccadic preview: Evidence from fixation-related potentials Journal Article Journal of Neuroscience, 40 (11), pp. 2305–2313, 2020. @article{Buonocore2020, title = {Post-saccadic face processing is modulated by pre-saccadic preview: Evidence from fixation-related potentials}, author = {Antimo Buonocore and Olaf Dimigen and David Melcher}, doi = {10.1523/JNEUROSCI.0861-19.2020}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neuroscience}, volume = {40}, number = {11}, pages = {2305--2313}, abstract = {Humans actively sample their environment with saccadic eye movements to bring relevant information into high-acuity foveal vision. Despite being lower in resolution, peripheral information is also available before each saccade. How the pre-saccadic extrafoveal preview of a visual object influences its post-saccadic processing is still an unanswered question. The current study investigated this question by simultaneously recording behavior and fixation-related brain potentials while human subjects made saccades to face stimuli. We manipulated the relationship between pre-saccadic "previews" and post-saccadic images to explicitly isolate the influences of the former. Subjects performed a gender discrimination task on a newly foveated face under three preview conditions: scrambled face, incongruent face (different identity from the foveated face), and congruent face (same identity). As expected, reaction times were faster after a congruent-face preview compared with a scrambled-face preview. Importantly, intact face previews (either incongruent or congruent) resulted in a massive reduction of post-saccadic neural responses. Specifically, we analyzed the classic face-selective N170 component at occipitotemporal electroencephalogram electrodes, which was still present in our experiments with active looking. However, the post-saccadic N170 was strongly attenuated following intact-face previews compared with the scrambled condition. This large and long-lasting decrease in evoked activity is consistent with a trans-saccadic mechanism of prediction that influences category-specific neural processing at the start of a new fixation. These findings constrain theories of visual stability and show that the extrafoveal preview methodology can be a useful tool to investigate its underlying mechanisms.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Humans actively sample their environment with saccadic eye movements to bring relevant information into high-acuity foveal vision. Despite being lower in resolution, peripheral information is also available before each saccade. How the pre-saccadic extrafoveal preview of a visual object influences its post-saccadic processing is still an unanswered question. The current study investigated this question by simultaneously recording behavior and fixation-related brain potentials while human subjects made saccades to face stimuli. We manipulated the relationship between pre-saccadic "previews" and post-saccadic images to explicitly isolate the influences of the former. Subjects performed a gender discrimination task on a newly foveated face under three preview conditions: scrambled face, incongruent face (different identity from the foveated face), and congruent face (same identity). As expected, reaction times were faster after a congruent-face preview compared with a scrambled-face preview. 
Importantly, intact face previews (either incongruent or congruent) resulted in a massive reduction of post-saccadic neural responses. Specifically, we analyzed the classic face-selective N170 component at occipitotemporal electroencephalogram electrodes, which was still present in our experiments with active looking. However, the post-saccadic N170 was strongly attenuated following intact-face previews compared with the scrambled condition. This large and long-lasting decrease in evoked activity is consistent with a trans-saccadic mechanism of prediction that influences category-specific neural processing at the start of a new fixation. These findings constrain theories of visual stability and show that the extrafoveal preview methodology can be a useful tool to investigate its underlying mechanisms. |
Melanie R Burke; R O Coats Dissociation of the rostral and dorsolateral prefrontal cortex during sequence learning in saccades: A TMS investigation Journal Article Experimental Brain Research, 234 (2), pp. 597–604, 2016. @article{Burke2016, title = {Dissociation of the rostral and dorsolateral prefrontal cortex during sequence learning in saccades: A TMS investigation}, author = {Melanie R Burke and R O Coats}, doi = {10.1007/s00221-015-4495-2}, year = {2016}, date = {2016-01-01}, journal = {Experimental Brain Research}, volume = {234}, number = {2}, pages = {597--604}, publisher = {Springer Berlin Heidelberg}, abstract = {This experiment sought to find whether differences exist between the dorsolateral prefrontal cortex (DLPFC) and the medial rostral prefrontal cortex (MRPFC) for performing stimulus-independent and stimulus-oriented tasks, respectively. To find a causal relationship in these areas, we employed the use of trans-cranial magnetic stimulation (TMS). Prefrontal areas were stimulated whilst participants performed random or predictable sequence learning tasks at stimulus onset (1st presentation of the sequence only for both Random and Predictable), or during the inter-sequence interval. Overall, we found that during the predictable task a significant decrease in saccade latency, gain and duration was found when compared to the randomised conditions, as expected and observed previously. However, TMS stimulation in DLPFC during the delay in the predictive sequence learning task reduced this predictive ability by delaying the saccadic onset and generating abnormal reductions in saccadic gains during prediction. In contrast, we found that stimulation during a delay in MRPFC reversed the normal effects on peak velocity of the task with the predictive task revealing higher peak velocity than the randomised task. These findings provide causal evidence for independent functions of DLPFC and MRPFC in performing stimulus-independent processing during sequence learning in saccades.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This experiment sought to find whether differences exist between the dorsolateral prefrontal cortex (DLPFC) and the medial rostral prefrontal cortex (MRPFC) for performing stimulus-independent and stimulus-oriented tasks, respectively. To find a causal relationship in these areas, we employed the use of trans-cranial magnetic stimulation (TMS). Prefrontal areas were stimulated whilst participants performed random or predictable sequence learning tasks at stimulus onset (1st presentation of the sequence only for both Random and Predictable), or during the inter-sequence interval. Overall, we found that during the predictable task a significant decrease in saccade latency, gain and duration was found when compared to the randomised conditions, as expected and observed previously. However, TMS stimulation in DLPFC during the delay in the predictive sequence learning task reduced this predictive ability by delaying the saccadic onset and generating abnormal reductions in saccadic gains during prediction. In contrast, we found that stimulation during a delay in MRPFC reversed the normal effects on peak velocity of the task with the predictive task revealing higher peak velocity than the randomised task. These findings provide causal evidence for independent functions of DLPFC and MRPFC in performing stimulus-independent processing during sequence learning in saccades. |
Florence Campana; Ignacio Rebollo; Anne E Urai; Valentin Wyart; Catherine Tallon-Baudry Conscious vision proceeds from global to local content in goal-directed tasks and spontaneous vision Journal Article Journal of Neuroscience, 36 (19), pp. 5200–5213, 2016. @article{Campana2016, title = {Conscious vision proceeds from global to local content in goal-directed tasks and spontaneous vision}, author = {Florence Campana and Ignacio Rebollo and Anne E Urai and Valentin Wyart and Catherine Tallon-Baudry}, doi = {10.1523/JNEUROSCI.3619-15.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neuroscience}, volume = {36}, number = {19}, pages = {5200--5213}, abstract = {The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). 
In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. |
Almudena Capilla; Pascal Belin; Joachim Gross The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG Journal Article Cerebral Cortex, 23 (6), pp. 1388–1395, 2013. @article{Capilla2013, title = {The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG}, author = {Almudena Capilla and Pascal Belin and Joachim Gross}, doi = {10.1093/cercor/bhs119}, year = {2013}, date = {2013-01-01}, journal = {Cerebral Cortex}, volume = {23}, number = {6}, pages = {1388--1395}, abstract = {Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ∼300-400 ms, or earlier at ∼200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Functional magnetic resonance imaging studies have repeatedly provided evidence for temporal voice areas (TVAs) with particular sensitivity to human voices along bilateral mid/anterior superior temporal sulci and superior temporal gyri (STS/STG). In contrast, electrophysiological studies of the spatio-temporal correlates of cerebral voice processing have yielded contradictory results, finding the earliest correlates either at ∼300-400 ms, or earlier at ∼200 ms ("fronto-temporal positivity to voice", FTPV). These contradictory results are likely the consequence of different stimulus sets and attentional demands. Here, we recorded magnetoencephalography activity while participants listened to diverse types of vocal and non-vocal sounds and performed different tasks varying in attentional demands. Our results confirm the existence of an early voice-preferential magnetic response (FTPVm, the magnetic counterpart of the FTPV) peaking at about 220 ms and distinguishing between vocal and non-vocal sounds as early as 150 ms after stimulus onset. The sources underlying the FTPVm were localized along bilateral mid-STS/STG, largely overlapping with the TVAs. The FTPVm was consistently observed across different stimulus subcategories, including speech and non-speech vocal sounds, and across different tasks. 
These results demonstrate the early, largely automatic recruitment of focal, voice-selective cerebral mechanisms with a time-course comparable to that of face processing. |
Nathan Caruana; Peter de Lissa; Genevieve McArthur The neural time course of evaluating self-initiated joint attention bids Journal Article Brain and Cognition, 98 , pp. 43–52, 2015. @article{Caruana2015a, title = {The neural time course of evaluating self-initiated joint attention bids}, author = {Nathan Caruana and Peter de Lissa and Genevieve McArthur}, doi = {10.1016/j.bandc.2015.06.001}, year = {2015}, date = {2015-01-01}, journal = {Brain and Cognition}, volume = {98}, pages = {43--52}, publisher = {Elsevier Inc.}, abstract = {Background: During interactions with other people, we constantly evaluate the significance of our social partner's gaze shifts in order to coordinate our behaviour with their perspective. In this study, we used event-related potentials (ERPs) to investigate the neural time course of evaluating gaze shifts that signal the success of self-initiated joint attention bids. Method: Nineteen participants were allocated to a "social" condition, in which they played a cooperative game with an anthropomorphic virtual character whom they believed was controlled by a human partner in a nearby laboratory. Participants were required to initiate joint attention towards a target. In response, the virtual partner shifted his gaze congruently towards the target - thus achieving joint attention - or incongruently towards a different location. Another 19 participants completed the same task in a non-social "control" condition, in which arrows, believed to be controlled by a computer program, pointed at a location that was either congruent or incongruent with the participant's target fixation. Results: In the social condition, ERPs to the virtual partner's incongruent gaze shifts evoked significantly larger P350 and P500 peaks compared to congruent gaze shifts. This P350 and P500 morphology was absent in both the congruent and incongruent control conditions. Discussion: These findings are consistent with previous claims that gaze shifts differing in their social significance modulate central-parietal ERPs 350. ms following the onset of the gaze shift. Our control data highlights the social specificity of the observed P350 effect, ruling out explanations pertaining to attention modulation or error detection.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: During interactions with other people, we constantly evaluate the significance of our social partner's gaze shifts in order to coordinate our behaviour with their perspective. In this study, we used event-related potentials (ERPs) to investigate the neural time course of evaluating gaze shifts that signal the success of self-initiated joint attention bids. Method: Nineteen participants were allocated to a "social" condition, in which they played a cooperative game with an anthropomorphic virtual character whom they believed was controlled by a human partner in a nearby laboratory. Participants were required to initiate joint attention towards a target. In response, the virtual partner shifted his gaze congruently towards the target - thus achieving joint attention - or incongruently towards a different location. Another 19 participants completed the same task in a non-social "control" condition, in which arrows, believed to be controlled by a computer program, pointed at a location that was either congruent or incongruent with the participant's target fixation. 
Results: In the social condition, ERPs to the virtual partner's incongruent gaze shifts evoked significantly larger P350 and P500 peaks compared to congruent gaze shifts. This P350 and P500 morphology was absent in both the congruent and incongruent control conditions. Discussion: These findings are consistent with previous claims that gaze shifts differing in their social significance modulate central-parietal ERPs 350 ms following the onset of the gaze shift. Our control data highlights the social specificity of the observed P350 effect, ruling out explanations pertaining to attention modulation or error detection. |
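The peak comparison reported in this abstract is an instance of a standard ERP analysis: extract a peak amplitude within a component window per subject and condition, then test the condition difference. The following sketch illustrates that generic procedure under assumed parameters (a 300-400 ms P350 window, a 500 Hz sampling rate, simulated data); it is not the authors' analysis code.

```python
# Minimal sketch of a P350 peak-amplitude comparison, congruent vs. incongruent.
# Hypothetical data, window, and sampling rate; not the authors' pipeline.
import numpy as np
from scipy.stats import ttest_rel

fs = 500                                # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.8, 1 / fs)        # epoch time axis, 0 = gaze-shift onset
win = (t >= 0.30) & (t <= 0.40)         # assumed P350 search window (300-400 ms)

def p350_peak(erp):
    """Maximum amplitude within the P350 window of a subject-average ERP."""
    return erp[win].max()

rng = np.random.default_rng(1)
n_subjects = 19
erp_congruent = rng.standard_normal((n_subjects, t.size))    # placeholder ERPs
erp_incongruent = rng.standard_normal((n_subjects, t.size))  # placeholder ERPs

peaks_con = np.array([p350_peak(e) for e in erp_congruent])
peaks_inc = np.array([p350_peak(e) for e in erp_incongruent])
t_stat, p_val = ttest_rel(peaks_inc, peaks_con)   # paired comparison across subjects
print(f"P350 incongruent - congruent: t = {t_stat:.2f}, p = {p_val:.3f}")
```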
Nathan Caruana; Peter de Lissa; Genevieve McArthur Beliefs about human agency influence the neural processing of gaze during joint attention Journal Article Social Neuroscience, 12 (2), pp. 194–206, 2017. @article{Caruana2017b, title = {Beliefs about human agency influence the neural processing of gaze during joint attention}, author = {Nathan Caruana and Peter de Lissa and Genevieve McArthur}, doi = {10.1080/17470919.2016.1160953}, year = {2017}, date = {2017-01-01}, journal = {Social Neuroscience}, volume = {12}, number = {2}, pages = {194--206}, publisher = {Routledge}, abstract = {The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition}, keywords = {}, pubstate = {published}, tppubtype = {article} } The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition |
Nathan Caruana; Genevieve McArthur The mind minds minds: The effect of intentional stance on the neural encoding of joint attention Journal Article Cognitive, Affective and Behavioral Neuroscience, 19 (6), pp. 1479–1491, 2019. @article{Caruana2019a, title = {The mind minds minds: The effect of intentional stance on the neural encoding of joint attention}, author = {Nathan Caruana and Genevieve McArthur}, doi = {10.3758/s13415-019-00734-y}, year = {2019}, date = {2019-01-01}, journal = {Cognitive, Affective and Behavioral Neuroscience}, volume = {19}, number = {6}, pages = {1479--1491}, publisher = {Cognitive, Affective, & Behavioral Neuroscience}, abstract = {Recent neuroimaging studies have observed that the neural processing of social cues from a virtual reality character appears to be affected by "intentional stance" (i.e., attributing mental states, agency, and "humanness"). However, this effect could also be explained by individual differences or perceptual effects resulting from the design of these studies. The current study used a new design that measured centro-parietal P250, P350, and N170 event-related potentials (ERPs) in 20 healthy adults while they initiated gaze-related joint attention with a virtual character (“Alan”) in two conditions. In one condition, they were told that Alan was controlled by a human; in the other, they were told that he was controlled by a computer. When participants believed Alan was human, his congruent gaze shifts, which resulted in joint attention, generated significantly larger P250 ERPs than his incongruent gaze shifts. In contrast, his incongruent gaze shifts triggered significantly larger increases in P350 ERPs than his congruent gaze shifts. These findings support previous studies suggesting that intentional stance affects the neural processing of social cues from a virtual character. The outcomes also suggest the use of the P250 and P350 ERPs as objective indices of social engagement during the design of socially approachable robots and virtual agents.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent neuroimaging studies have observed that the neural processing of social cues from a virtual reality character appears to be affected by "intentional stance" (i.e., attributing mental states, agency, and "humanness"). However, this effect could also be explained by individual differences or perceptual effects resulting from the design of these studies. The current study used a new design that measured centro-parietal P250, P350, and N170 event-related potentials (ERPs) in 20 healthy adults while they initiated gaze-related joint attention with a virtual character (“Alan”) in two conditions. In one condition, they were told that Alan was controlled by a human; in the other, they were told that he was controlled by a computer. When participants believed Alan was human, his congruent gaze shifts, which resulted in joint attention, generated significantly larger P250 ERPs than his incongruent gaze shifts. In contrast, his incongruent gaze shifts triggered significantly larger increases in P350 ERPs than his congruent gaze shifts. These findings support previous studies suggesting that intentional stance affects the neural processing of social cues from a virtual character. The outcomes also suggest the use of the P250 and P350 ERPs as objective indices of social engagement during the design of socially approachable robots and virtual agents. |
Marta Castellano; Michael Plöchl; Raul Vicente; Gordon Pipa Neuronal oscillations form parietal/frontal networks during contour integration Journal Article Frontiers in Integrative Neuroscience, 8 , pp. 1–13, 2014. @article{Castellano2014, title = {Neuronal oscillations form parietal/frontal networks during contour integration}, author = {Marta Castellano and Michael Plöchl and Raul Vicente and Gordon Pipa}, doi = {10.3389/fnint.2014.00064}, year = {2014}, date = {2014-01-01}, journal = {Frontiers in Integrative Neuroscience}, volume = {8}, pages = {1--13}, abstract = {The ability to integrate visual features into a global coherent percept that can be further categorized and manipulated is a fundamental ability of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how both cortical and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we will investigate perceptual grouping through the analysis of a contour categorization task, where the local elements that form the contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. The contour formation in our visual stimulus is a dynamic process where, for the first time, visual perception of contours is disentangled from the onset of visual stimulation or from motor preparation, cognitive processes that until now have been behaviorally attached to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seems to be an ongoing phenomenon, categorization of a contour could only be predicted through local oscillatory activity within parietal/frontal sources, which, in turn, would synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning across neural sources that are not category specific. Both long-range networks, i.e., the gamma network that is category specific, and the beta network that is not category specific, are functionally distinct but spatially overlapping. Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to integrate visual features into a global coherent percept that can be further categorized and manipulated is a fundamental ability of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how both cortical and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we will investigate perceptual grouping through the analysis of a contour categorization task, where the local elements that form the contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. 
The contour formation in our visual stimulus is a dynamic process where, for the first time, visual perception of contours is disentangled from the onset of visual stimulation or from motor preparation, cognitive processes that until now have been behaviorally attached to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seems to be an ongoing phenomenon, categorization of a contour could only be predicted through local oscillatory activity within parietal/frontal sources, which, in turn, would synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning across neural sources that are not category specific. Both long-range networks, i.e., the gamma network that is category specific, and the beta network that is not category specific, are functionally distinct but spatially overlapping. Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites. |
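One of the long-range synchrony measures discussed above, phase locking between distant sources within a frequency band, is commonly quantified with the phase-locking value (PLV). The sketch below shows a generic beta-band PLV computation with assumed filter settings and simulated signals; it is only an illustration of the measure, not the authors' implementation.

```python
# Minimal sketch of a phase-locking value (PLV) computation between two source
# time courses in the beta band (13-30 Hz). Illustrative only; filter settings
# and data are hypothetical, not those of Castellano et al. (2014).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(13.0, 30.0), order=4):
    """Phase-locking value between two equal-length signals within a band."""
    b, a = butter(order, band, btype="bandpass", fs=fs)
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 500
rng = np.random.default_rng(2)
parietal = rng.standard_normal(5 * fs)                  # placeholder source signal
frontal = 0.5 * parietal + rng.standard_normal(5 * fs)  # placeholder, partly coupled
print(f"beta-band PLV: {plv(parietal, frontal, fs):.2f}")
```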
Dario Cazzoli; René M Müri; Christopher Kennard; Clive R Rosenthal The role of the right posterior parietal cortex in letter migration between words Journal Article Journal of Cognitive Neuroscience, 27 (2), pp. 377–386, 2015. @article{Cazzoli2015a, title = {The role of the right posterior parietal cortex in letter migration between words}, author = {Dario Cazzoli and René M Müri and Christopher Kennard and Clive R Rosenthal}, year = {2015}, date = {2015-01-01}, journal = {Journal of Cognitive Neuroscience}, volume = {27}, number = {2}, pages = {377--386}, abstract = {When briefly presented with pairs of words, skilled readers can sometimes report words with migrated letters (e.g., they report hunt when presented with the words hint and hurt). This and other letter migration phenomena have been often used to investigate factors that influence reading such as letter position coding. However, the neural basis of letter migration is poorly understood. Previous evidence has implicated the right posterior parietal cortex (PPC) in processing visuospatial attributes and lexical properties during word reading. The aim of this study was to assess this putative role by combining an inhibitory TMS protocol with a letter migration paradigm, which was designed to examine the contributions of visuospatial attributes and lexical factors. Temporary interference with the right PPC led to three specific effects on letter migration. First, the number of letter migrations was significantly increased only in the group with active stimulation (vs. a sham stimulation group or a control group without stimulation), and there was no significant effect on other error types. Second, this effect occurred only when letter migration could result in a meaningful word (migration vs. control context). Third, the effect of active stimulation on the number of letter migrations was lateralized to target words presented on the left. Our study thus demonstrates that the right PPC plays a specific and causal role in the phenomenon of letter migration. The nature of this role cannot be explained solely in terms of visuospatial attention, rather it involves an interplay between visuospatial attentional and word reading-specific factors.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When briefly presented with pairs of words, skilled readers can sometimes report words with migrated letters (e.g., they report hunt when presented with the words hint and hurt). This and other letter migration phenomena have been often used to investigate factors that influence reading such as letter position coding. However, the neural basis of letter migration is poorly understood. Previous evidence has implicated the right posterior parietal cortex (PPC) in processing visuospatial attributes and lexical properties during word reading. The aim of this study was to assess this putative role by combining an inhibitory TMS protocol with a letter migration paradigm, which was designed to examine the contributions of visuospatial attributes and lexical factors. Temporary interference with the right PPC led to three specific effects on letter migration. First, the number of letter migrations was significantly increased only in the group with active stimulation (vs. a sham stimulation group or a control group without stimulation), and there was no significant effect on other error types. Second, this effect occurred only when letter migration could result in a meaningful word (migration vs. control context). 
Third, the effect of active stimulation on the number of letter migrations was lateralized to target words presented on the left. Our study thus demonstrates that the right PPC plays a specific and causal role in the phenomenon of letter migration. The nature of this role cannot be explained solely in terms of visuospatial attention; rather, it involves an interplay between visuospatial attentional and word reading-specific factors. |
Simon Majed Ceh; Sonja Annerer-Walcher; Christof Körner; Christian Rominger; Silvia Erika Kober; Andreas Fink; Mathias Benedek Neurophysiological indicators of internal attention: An electroencephalography–eye-tracking coregistration study Journal Article Brain and Behavior, 10 (10), pp. 1–14, 2020. @article{Ceh2020, title = {Neurophysiological indicators of internal attention: An electroencephalography–eye-tracking coregistration study}, author = {Simon Majed Ceh and Sonja Annerer-Walcher and Christof Körner and Christian Rominger and Silvia Erika Kober and Andreas Fink and Mathias Benedek}, doi = {10.1002/brb3.1790}, year = {2020}, date = {2020-01-01}, journal = {Brain and Behavior}, volume = {10}, number = {10}, pages = {1--14}, abstract = {Introduction: Many goal-directed and spontaneous everyday activities (e.g., planning, mind wandering) rely on an internal focus of attention. Internally directed cognition (IDC) was shown to differ from externally directed cognition in a range of neurophysiological indicators such as electroencephalogram (EEG) alpha activity and eye behavior. Methods: In this EEG–eye-tracking coregistration study, we investigated effects of attention direction on EEG alpha activity and various relevant eye parameters. We used an established paradigm to manipulate internal attention demands in the visual domain within tasks by means of conditional stimulus masking. Results: Consistent with previous research, IDC involved relatively higher EEG alpha activity (lower alpha desynchronization) at posterior cortical sites. Moreover, IDC was characterized by greater pupil diameter (PD), fewer microsaccades, fixations, and saccades. These findings show that internal versus external cognition is associated with robust differences in several indicators at the neural and perceptual level. In a second line of analysis, we explored the intrinsic temporal covariation between EEG alpha activity and eye parameters during rest. This analysis revealed a positive correlation of EEG alpha power with PD especially in bilateral parieto-occipital regions. Conclusion: Together, these findings suggest that EEG alpha activity and PD represent time-sensitive indicators of internal attention demands, which may be involved in a neurophysiological gating mechanism serving to shield internal cognition from irrelevant sensory information.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Introduction: Many goal-directed and spontaneous everyday activities (e.g., planning, mind wandering) rely on an internal focus of attention. Internally directed cognition (IDC) was shown to differ from externally directed cognition in a range of neurophysiological indicators such as electroencephalogram (EEG) alpha activity and eye behavior. Methods: In this EEG–eye-tracking coregistration study, we investigated effects of attention direction on EEG alpha activity and various relevant eye parameters. We used an established paradigm to manipulate internal attention demands in the visual domain within tasks by means of conditional stimulus masking. Results: Consistent with previous research, IDC involved relatively higher EEG alpha activity (lower alpha desynchronization) at posterior cortical sites. Moreover, IDC was characterized by greater pupil diameter (PD), fewer microsaccades, fixations, and saccades. These findings show that internal versus external cognition is associated with robust differences in several indicators at the neural and perceptual level. 
In a second line of analysis, we explored the intrinsic temporal covariation between EEG alpha activity and eye parameters during rest. This analysis revealed a positive correlation of EEG alpha power with PD especially in bilateral parieto-occipital regions. Conclusion: Together, these findings suggest that EEG alpha activity and PD represent time-sensitive indicators of internal attention demands, which may be involved in a neurophysiological gating mechanism serving to shield internal cognition from irrelevant sensory information. |
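The resting-state covariation analysis summarized above pairs an EEG alpha-power time course with a pupil-diameter time course. A generic version of that computation is sketched below with an assumed sampling rate, an 8-12 Hz alpha band, and simulated signals; it is illustrative only and not the authors' pipeline.

```python
# Minimal sketch: correlate an EEG alpha-band power envelope with a pupil-
# diameter time course during rest. Hypothetical signals and parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import pearsonr

fs = 250                               # assumed common sampling rate after resampling
rng = np.random.default_rng(3)
eeg = rng.standard_normal(60 * fs)     # placeholder posterior EEG channel (60 s rest)
pupil = rng.standard_normal(60 * fs)   # placeholder pupil diameter, same rate

# Alpha-band (8-12 Hz) power envelope via band-pass filter + Hilbert transform
b, a = butter(4, (8.0, 12.0), btype="bandpass", fs=fs)
alpha_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

# Smooth both series with a 1-s moving average before correlating
kernel = np.ones(fs) / fs
alpha_smooth = np.convolve(alpha_power, kernel, mode="same")
pupil_smooth = np.convolve(pupil, kernel, mode="same")

r, p = pearsonr(alpha_smooth, pupil_smooth)
print(f"alpha power vs. pupil diameter: r = {r:.2f}, p = {p:.3g}")
```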
Chang-Mao Chao; Philip Tseng; Tzu Yu Hsu; Jia-Han Su; Ovid J L Tzeng; Daisy L Hung; Neil G Muggleton; Chi-Hung Juan Predictability of saccadic behaviors is modified by transcranial magnetic stimulation over human posterior parietal cortex Journal Article Human Brain Mapping, 32 (11), pp. 1961–1972, 2011. @article{Chao2011, title = {Predictability of saccadic behaviors is modified by transcranial magnetic stimulation over human posterior parietal cortex}, author = {Chang-Mao Chao and Philip Tseng and Tzu Yu Hsu and Jia-Han Su and Ovid J L Tzeng and Daisy L Hung and Neil G Muggleton and Chi-Hung Juan}, doi = {10.1002/hbm.21162}, year = {2011}, date = {2011-01-01}, journal = {Human Brain Mapping}, volume = {32}, number = {11}, pages = {1961--1972}, abstract = {Predictability in the visual environment provides a powerful cue for efficient processing of scenes and objects. Recently, studies have suggested that the directionality and magnitude of saccade curvature can be informative as to how the visual system processes predictive information. The present study investigated the role of the right posterior parietal cortex (rPPC) in shaping saccade curvatures in the context of predictive and non-predictive visual cues. We used an orienting paradigm that incorporated manipulation of target location predictability and delivered transcranial magnetic stimulation (TMS) over rPPC. Participants were presented with either an informative or uninformative cue to upcoming target locations. Our results showed that rPPC TMS generally increased saccade latency and saccade error rates. Intriguingly, rPPC TMS increased curvatures away from the distractor only when the target location was unpredictable and decreased saccadic errors towards the distractor. These effects on curvature and accuracy were not present when the target location was predictable. These results dissociate the strong contingency between saccade latency and saccade curvature and also indicate that rPPC plays an important role in allocating and suppressing attention to distractors when the target demands visual disambiguation. Furthermore, the present study suggests that, like the frontal eye fields, rPPC is critically involved in determining saccade curvature and the generation of saccadic behaviors under conditions of differing target predictability.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Predictability in the visual environment provides a powerful cue for efficient processing of scenes and objects. Recently, studies have suggested that the directionality and magnitude of saccade curvature can be informative as to how the visual system processes predictive information. The present study investigated the role of the right posterior parietal cortex (rPPC) in shaping saccade curvatures in the context of predictive and non-predictive visual cues. We used an orienting paradigm that incorporated manipulation of target location predictability and delivered transcranial magnetic stimulation (TMS) over rPPC. Participants were presented with either an informative or uninformative cue to upcoming target locations. Our results showed that rPPC TMS generally increased saccade latency and saccade error rates. Intriguingly, rPPC TMS increased curvatures away from the distractor only when the target location was unpredictable and decreased saccadic errors towards the distractor. These effects on curvature and accuracy were not present when the target location was predictable. 
These results dissociate the strong contingency between saccade latency and saccade curvature and also indicate that rPPC plays an important role in allocating and suppressing attention to distractors when the target demands visual disambiguation. Furthermore, the present study suggests that, like the frontal eye fields, rPPC is critically involved in determining saccade curvature and the generation of saccadic behaviors under conditions of differing target predictability. |
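Saccade curvature, the dependent measure in this study, is commonly quantified as the signed maximum deviation of the trajectory from the straight line joining saccade start and end, normalized by saccade amplitude. The sketch below implements that generic definition on a hypothetical trajectory; the specific curvature metric used by the authors may differ.

```python
# Minimal sketch of one common saccade-curvature metric: signed maximum
# perpendicular deviation from the start-end line, as a fraction of amplitude.
# Illustrative only; not necessarily the metric used by Chao et al. (2011).
import numpy as np

def saccade_curvature(x, y):
    """Signed max deviation from the start-end line, normalised by amplitude."""
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    d = p1 - p0
    amplitude = np.hypot(*d)
    pts = np.column_stack([x, y]) - p0
    # Signed perpendicular distance of each sample from the start-end line
    deviation = (d[0] * pts[:, 1] - d[1] * pts[:, 0]) / amplitude
    return deviation[np.argmax(np.abs(deviation))] / amplitude

# Hypothetical 10-sample trajectory curving away from a distractor
x = np.linspace(0.0, 8.0, 10)
y = np.array([0.0, 0.3, 0.55, 0.7, 0.75, 0.7, 0.55, 0.35, 0.15, 0.0])
print(f"curvature: {saccade_curvature(x, y):+.3f}")
```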
Magdalena Chechlacz; Glyn W Humphreys; Stamatios N Sotiropoulos; Christopher Kennard; Dario Cazzoli Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation Journal Article Journal of Neuroscience, 35 (46), pp. 15353–15368, 2015. @article{Chechlacz2015, title = {Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation}, author = {Magdalena Chechlacz and Glyn W Humphreys and Stamatios N Sotiropoulos and Christopher Kennard and Dario Cazzoli}, doi = {10.1523/JNEUROSCI.2610-15.2015}, year = {2015}, date = {2015-01-01}, journal = {Journal of Neuroscience}, volume = {35}, number = {46}, pages = {15353--15368}, abstract = {Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. 
Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention. |
Jing Chen; Matteo Valsecchi; Karl R Gegenfurtner LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements Journal Article Journal of Neurophysiology, 116 (1), pp. 18–29, 2016. @article{Chen2016d, title = {LRP predicts smooth pursuit eye movement onset during the ocular tracking of self-generated movements}, author = {Jing Chen and Matteo Valsecchi and Karl R Gegenfurtner}, doi = {10.1152/jn.00184.2016}, year = {2016}, date = {2016-01-01}, journal = {Journal of Neurophysiology}, volume = {116}, number = {1}, pages = {18--29}, abstract = {Several studies indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a byproduct of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable both in terms of onset time and in terms of kinematics, pursuit was not identical to the one produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP), and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Several studies indicated that human observers are very efficient at tracking self-generated hand movements with their gaze, yet it is not clear whether this is simply a byproduct of the predictability of self-generated actions or if it results from a deeper coupling of the somatomotor and oculomotor systems. In a first behavioral experiment we compared pursuit performance as observers either followed their own finger or tracked a dot whose motion was externally generated but mimicked their finger motion. We found that even when the dot motion was completely predictable both in terms of onset time and in terms of kinematics, pursuit was not identical to the one produced as the observers tracked their finger, as evidenced by increased rate of catch-up saccades and by the fact that in the initial phase of the movement gaze was lagging behind the dot, whereas it was ahead of the finger. In a second experiment we recorded EEG in the attempt to find a direct link between the finger motor preparation, indexed by the lateralized readiness potential (LRP), and the latency of smooth pursuit. After taking into account finger movement onset variability, we observed larger LRP amplitudes associated with earlier smooth pursuit onset across trials. 
The same held across subjects, where average LRP onset correlated with average eye latency. The evidence from both experiments concurs to indicate that a strong coupling exists between the motor systems leading to eye and finger movements and that simple top-down predictive signals are unlikely to support optimal coordination. |
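The cross-trial analysis described above relates a lateralized readiness potential (LRP) measure to pursuit onset latency. The sketch below shows a deliberately simplified version of that idea: a contralateral-minus-ipsilateral difference over an assumed pre-movement window at two hypothetical motor channels, correlated with latency across simulated trials. It is not the authors' derivation of the LRP.

```python
# Minimal sketch: per-trial LRP estimate (contra minus ipsi, pre-movement mean)
# correlated with smooth-pursuit onset latency. Electrode choice, window and
# data are hypothetical simplifications, not Chen et al.'s (2016) analysis.
import numpy as np
from scipy.stats import pearsonr

fs = 500
t = np.arange(-1.0, 0.2, 1 / fs)        # epoch time axis, 0 = finger-movement onset
pre = (t >= -0.2) & (t < 0.0)           # assumed pre-movement window

rng = np.random.default_rng(4)
n_trials = 150
c3 = rng.standard_normal((n_trials, t.size))          # placeholder left motor channel
c4 = rng.standard_normal((n_trials, t.size))          # placeholder right motor channel
pursuit_latency = rng.uniform(0.05, 0.25, n_trials)   # placeholder pursuit onset latency (s)

# Per-trial LRP amplitude for right-hand movements: contralateral minus ipsilateral
lrp_amp = (c3[:, pre] - c4[:, pre]).mean(axis=1)
r, p = pearsonr(lrp_amp, pursuit_latency)
print(f"LRP amplitude vs. pursuit latency across trials: r = {r:.2f}, p = {p:.3f}")
```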
Jing Chen; Matteo Valsecchi; Karl R Gegenfurtner Enhanced brain responses to color during smooth-pursuit eye movements Journal Article Journal of Neurophysiology, 118 , pp. 749–754, 2017. @article{Chen2017a, title = {Enhanced brain responses to color during smooth-pursuit eye movements}, author = {Jing Chen and Matteo Valsecchi and Karl R Gegenfurtner}, doi = {10.1152/jn.00208.2017}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neurophysiology}, volume = {118}, pages = {749--754}, abstract = {Eye movements alter visual perceptions in a number of ways. During smooth pursuit eye movements, previous studies reported decreased detection threshold for colored stimuli and for high-spatial-frequency luminance stimuli, suggesting a boost in the parvocellular system. The present study investigated the underlying neural mechanism using EEG in human participants. Participants followed a moving target with smooth pursuit eye movements while steady-state visually Evoked potentials (SSVEPs) were elicited by equiluminant red-green flickering gratings in the background. SSVEP responses to color gratings were 18.9% higher during smooth pursuit than during fixation. There was no enhancement of SSVEPs by smooth pursuit when the flickering grating was defined by luminance instead of color. This result provides physiological evidence that the chromatic response in the visual system is boosted by the execution of smooth pursuit eye movements in humans. Since the response improvement is thought to be due to an improved response in the parvocellular system, SSVEPs to equiluminant stimuli could provide a direct test of parvocellular signaling, especially in populations where an explicit behavioral response from the participant is not feasible.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Eye movements alter visual perceptions in a number of ways. During smooth pursuit eye movements, previous studies reported decreased detection threshold for colored stimuli and for high-spatial-frequency luminance stimuli, suggesting a boost in the parvocellular system. The present study investigated the underlying neural mechanism using EEG in human participants. Participants followed a moving target with smooth pursuit eye movements while steady-state visually Evoked potentials (SSVEPs) were elicited by equiluminant red-green flickering gratings in the background. SSVEP responses to color gratings were 18.9% higher during smooth pursuit than during fixation. There was no enhancement of SSVEPs by smooth pursuit when the flickering grating was defined by luminance instead of color. This result provides physiological evidence that the chromatic response in the visual system is boosted by the execution of smooth pursuit eye movements in humans. Since the response improvement is thought to be due to an improved response in the parvocellular system, SSVEPs to equiluminant stimuli could provide a direct test of parvocellular signaling, especially in populations where an explicit behavioral response from the participant is not feasible. |
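The SSVEP comparison reported above amounts to measuring the Fourier amplitude at the flicker frequency in pursuit and fixation epochs and expressing the difference as a percentage. The sketch below shows that generic computation with an assumed flicker frequency, sampling rate, and simulated epochs; it is not the authors' code.

```python
# Minimal sketch of an SSVEP amplitude comparison: Fourier amplitude at the
# flicker frequency during pursuit vs. fixation epochs. Hypothetical data.
import numpy as np

fs = 500            # assumed sampling rate (Hz)
f_flicker = 10.0    # assumed chromatic flicker frequency (Hz)

def ssvep_amplitude(epochs, fs, f_target):
    """Mean amplitude spectrum across epochs, evaluated at the target frequency."""
    n = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    amp = np.abs(np.fft.rfft(epochs, axis=-1)) * 2 / n
    idx = np.argmin(np.abs(freqs - f_target))
    return amp[:, idx].mean()

rng = np.random.default_rng(5)
n_epochs, n_samples = 80, 2 * fs                 # 2-s epochs
pursuit = rng.standard_normal((n_epochs, n_samples))    # placeholder epochs
fixation = rng.standard_normal((n_epochs, n_samples))   # placeholder epochs

a_pursuit = ssvep_amplitude(pursuit, fs, f_flicker)
a_fixation = ssvep_amplitude(fixation, fs, f_flicker)
print(f"SSVEP change during pursuit: {100 * (a_pursuit / a_fixation - 1):+.1f}%")
```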
Jing Chen; Matteo Valsecchi; Karl R Gegenfurtner Saccadic suppression measured by steady-state visual evoked potentials Journal Article Journal of Neurophysiology, 122 (1), pp. 251–258, 2019. @article{Chen2019e, title = {Saccadic suppression measured by steady-state visual evoked potentials}, author = {Jing Chen and Matteo Valsecchi and Karl R Gegenfurtner}, doi = {10.1152/jn.00712.2018}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neurophysiology}, volume = {122}, number = {1}, pages = {251--258}, abstract = {Visual sensitivity is severely impaired during the execution of saccadic eye movements. This phenomenon has been extensively characterized in human psychophysics and nonhuman primate single-neuron studies, but a physiological characterization in humans is less established. Here, we used a method based on steadystate visually evoked potential (SSVEP), an oscillatory brain response to periodic visual stimulation, to examine how saccades affect visual sensitivity. Observers made horizontal saccades back and forth, while horizontal black-and-white gratings flickered at 5-30 Hz in the background. We analyzed EEG epochs with a length of 0.3 s either centered at saccade onset (saccade epochs) or centered at fixations half a second before the saccade (fixation epochs). Compared with fixation epochs, saccade epochs showed a broadband power increase, which most likely resulted from saccade-related EEG activity. The execution of saccades, however, led to an average reduction of 57% in the SSVEP amplitude at the stimulation frequency. This result provides additional evidence for an active saccadic suppression in the early visual cortex in humans. Compared with previous functional MRI and EEG studies, an advantage of this approach lies in its capability to trace the temporal dynamics of neural activity throughout the time course of a saccade. In contrast to previous electrophysiological studies in nonhuman primates, we did not find any evidence for postsaccadic enhancement, even though simulation results show that our method would have been able to detect it. We conclude that SSVEP is a useful technique to investigate the neural correlates of visual perception during saccadic eye movements in humans.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual sensitivity is severely impaired during the execution of saccadic eye movements. This phenomenon has been extensively characterized in human psychophysics and nonhuman primate single-neuron studies, but a physiological characterization in humans is less established. Here, we used a method based on steadystate visually evoked potential (SSVEP), an oscillatory brain response to periodic visual stimulation, to examine how saccades affect visual sensitivity. Observers made horizontal saccades back and forth, while horizontal black-and-white gratings flickered at 5-30 Hz in the background. We analyzed EEG epochs with a length of 0.3 s either centered at saccade onset (saccade epochs) or centered at fixations half a second before the saccade (fixation epochs). Compared with fixation epochs, saccade epochs showed a broadband power increase, which most likely resulted from saccade-related EEG activity. The execution of saccades, however, led to an average reduction of 57% in the SSVEP amplitude at the stimulation frequency. This result provides additional evidence for an active saccadic suppression in the early visual cortex in humans. 
Compared with previous functional MRI and EEG studies, an advantage of this approach lies in its capability to trace the temporal dynamics of neural activity throughout the time course of a saccade. In contrast to previous electrophysiological studies in nonhuman primates, we did not find any evidence for postsaccadic enhancement, even though simulation results show that our method would have been able to detect it. We conclude that SSVEP is a useful technique to investigate the neural correlates of visual perception during saccadic eye movements in humans. |
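The epoching scheme described above (0.3-s windows centred on saccade onsets versus on fixation samples half a second earlier) can be illustrated with a few lines of array indexing. The sketch below uses simulated continuous EEG, hypothetical saccade onset times, and an assumed stimulation frequency; it is not the authors' analysis code.

```python
# Minimal sketch: cut 0.3-s epochs around saccade onsets and around fixation
# samples 0.5 s earlier, then compare SSVEP amplitude at the stimulation
# frequency. All signals, onsets and frequencies are hypothetical placeholders.
import numpy as np

fs = 500
f_stim = 20.0                                # assumed flicker frequency (Hz)
half = int(0.15 * fs)                        # half of the 0.3-s epoch

def amplitude_at(epochs, fs, f_target):
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1 / fs)
    amp = np.abs(np.fft.rfft(epochs, axis=-1))
    return amp[:, np.argmin(np.abs(freqs - f_target))].mean()

rng = np.random.default_rng(6)
eeg = rng.standard_normal(120 * fs)                        # placeholder continuous EEG (120 s)
saccade_onsets = rng.integers(2 * fs, 118 * fs, size=200)  # placeholder onset samples

sac_epochs = np.array([eeg[s - half:s + half] for s in saccade_onsets])
fix_epochs = np.array([eeg[s - fs // 2 - half:s - fs // 2 + half] for s in saccade_onsets])

a_sac = amplitude_at(sac_epochs, fs, f_stim)
a_fix = amplitude_at(fix_epochs, fs, f_stim)
print(f"SSVEP suppression around saccades: {100 * (1 - a_sac / a_fix):.1f}%")
```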
Hui-Yan Chiau; Neil G Muggleton; Chi-Hung Juan Exploring the contributions of the supplementary eye field to subliminal inhibition using double-pulse transcranial magnetic stimulation Journal Article Human Brain Mapping, 38 , pp. 339–351, 2017. @article{Chiau2017, title = {Exploring the contributions of the supplementary eye field to subliminal inhibition using double-pulse transcranial magnetic stimulation}, author = {Hui-Yan Chiau and Neil G Muggleton and Chi-Hung Juan}, doi = {10.1002/hbm.23364}, year = {2017}, date = {2017-01-01}, journal = {Human Brain Mapping}, volume = {38}, pages = {339--351}, abstract = {It is widely accepted that the supplementary eye fields (SEF) are involved in the control of voluntary eye movements. However, recent evidence suggests that SEF may also be important for unconscious and involuntary motor processes. Indeed, Sumner et al. ([2007]: Neuron 54:697-711) showed that patients with micro-lesions of the SEF demonstrated an absence of subliminal inhibition as evoked by masked-prime stimuli. Here, we used double-pulse transcranial magnetic stimulation (TMS) in healthy volunteers to investigate the role of SEF in subliminal priming. We applied double-pulse TMS at two time windows in a masked-prime task: the first during an early phase, 20-70 ms after the onset of the mask but before target presentation, during which subliminal inhibition is present; and the second during a late phase, 20-70 ms after target onset, during which the saccade is being prepared. We found no effect of TMS with the early time window of stimulation, whereas a reduction in the benefit of an incompatible subliminal prime stimulus was found when SEF TMS was applied at the late time window. These findings suggest that there is a role for SEF related to the effects of subliminal primes on eye movements, but the results do not support a role in inhibiting the primed tendency.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It is widely accepted that the supplementary eye fields (SEF) are involved in the control of voluntary eye movements. However, recent evidence suggests that SEF may also be important for unconscious and involuntary motor processes. Indeed, Sumner et al. ([2007]: Neuron 54:697-711) showed that patients with micro-lesions of the SEF demonstrated an absence of subliminal inhibition as evoked by masked-prime stimuli. Here, we used double-pulse transcranial magnetic stimulation (TMS) in healthy volunteers to investigate the role of SEF in subliminal priming. We applied double-pulse TMS at two time windows in a masked-prime task: the first during an early phase, 20-70 ms after the onset of the mask but before target presentation, during which subliminal inhibition is present; and the second during a late phase, 20-70 ms after target onset, during which the saccade is being prepared. We found no effect of TMS with the early time window of stimulation, whereas a reduction in the benefit of an incompatible subliminal prime stimulus was found when SEF TMS was applied at the late time window. These findings suggest that there is a role for SEF related to the effects of subliminal primes on eye movements, but the results do not support a role in inhibiting the primed tendency. |
Hak Soo Choi; Shinjung Kim; Donghoon Lee; Chang Seok Kim; Myung Yung Jeong Synchronized tracking of brain cognitive processing using EEG and vision signals Journal Article Applied Spectroscopy Reviews, 51 (7-9), pp. 592–602, 2016. @article{Choi2016, title = {Synchronized tracking of brain cognitive processing using EEG and vision signals}, author = {Hak Soo Choi and Shinjung Kim and Donghoon Lee and Chang Seok Kim and Myung Yung Jeong}, doi = {10.1080/05704928.2016.1166373}, year = {2016}, date = {2016-01-01}, journal = {Applied Spectroscopy Reviews}, volume = {51}, number = {7-9}, pages = {592--602}, publisher = {Taylor & Francis}, abstract = {Many efforts have been made to understand the neural mechanisms of the human brain. However, visualization of human brain processing has been a main challenge in the field. It is still largely unknown how the human brain allocates attention to target objects while excluding unrelated information in a complex visual environment. Using simultaneous electroencephalogram and eye tracking measurements, in this study, we analyzed two brain regions separately to detect the brain wave activity during visual information processing. We observed an activation difference between sensory (P100) and cognitive (P300) processing, and the behavioral response was improved by providing valid cue-target location information. Furthermore, neural processing was evaluated according to the specific area of brain activation and eye movements during cognitive processing. Our results demonstrate the correlation between behavior performance and visual stimuli and suggest an advantage of combined paradigms for efficient visual information processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many efforts have been made to understand the neural mechanisms of the human brain. However, visualization of human brain processing has been a main challenge in the field. It is still largely unknown how the human brain allocates attention to target objects while excluding unrelated information in a complex visual environment. Using simultaneous electroencephalogram and eye tracking measurements, in this study, we analyzed two brain regions separately to detect the brain wave activity during visual information processing. We observed an activation difference between sensory (P100) and cognitive (P300) processing, and the behavioral response was improved by providing valid cue-target location information. Furthermore, neural processing was evaluated according to the specific area of brain activation and eye movements during cognitive processing. Our results demonstrate the correlation between behavior performance and visual stimuli and suggest an advantage of combined paradigms for efficient visual information processing. |
Hoseok Choi; Seho Lee; Jeyeon Lee; Kyeongran Min; Seokbeen Lim; Jinsick Park; Kyoung ha Ahn; In Young Kim; Kyoung-Min Lee; Dong Pyo Jang Long-term evaluation and feasibility study of the insulated screw electrode for ECoG recording Journal Article Journal of Neuroscience Methods, 308 , pp. 261–268, 2018. @article{Choi2018, title = {Long-term evaluation and feasibility study of the insulated screw electrode for ECoG recording}, author = {Hoseok Choi and Seho Lee and Jeyeon Lee and Kyeongran Min and Seokbeen Lim and Jinsick Park and Kyoung ha Ahn and In Young Kim and Kyoung-Min Lee and Dong Pyo Jang}, doi = {10.1016/j.jneumeth.2018.06.027}, year = {2018}, date = {2018-01-01}, journal = {Journal of Neuroscience Methods}, volume = {308}, pages = {261--268}, publisher = {Elsevier}, abstract = {Background: A screw-shaped electrode can offer a compromise between signal quality and invasiveness. However, the standard screw electrode can be vulnerable to electrical noise while directly contact with the skull or skin, and the feasibility and stability for chronic implantation in primate have not been fully evaluated. New Method: We designed a novel screw electrocorticogram (ECoG) electrode composed of three parts: recording electrode, insulator, and nut. The recording electrode was made of titanium with high biocompatibility and high electrical conductivity. Zirconia is used for insulator and nut to prevent electrical noise. Result: In computer simulations, the screw ECoG with insulator showed a significantly higher performance in signal acquisition compared to the condition without insulator. In a non-human primate, using screw ECoG, clear visual-evoked potential (VEP) waveforms were obtained, VEP components were reliably maintained, and the electrode's impedance was stable during the whole evaluation period. Moreover, it showed higher SNR and wider frequency band compared to the electroencephalogram (EEG). We also observed the screw ECoG has a higher sensitivity that captures different responses on various stimuli than the EEG. Comparison: The screw ECoG showed reliable electrical characteristic and biocompatibility for three months, that shows great promise for chronic implants. These results contrasted with previous reports that general screw electrode was only applicable for acute applications. Conclusion: The suggested electrode can offer whole-brain monitoring with high signal quality and minimal invasiveness. The screw ECoG can be used to provide more in-depth understanding, not only relationship between functional networks and cognitive behavior, but also pathomechanisms in brain diseases.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: A screw-shaped electrode can offer a compromise between signal quality and invasiveness. However, the standard screw electrode can be vulnerable to electrical noise while directly contact with the skull or skin, and the feasibility and stability for chronic implantation in primate have not been fully evaluated. New Method: We designed a novel screw electrocorticogram (ECoG) electrode composed of three parts: recording electrode, insulator, and nut. The recording electrode was made of titanium with high biocompatibility and high electrical conductivity. Zirconia is used for insulator and nut to prevent electrical noise. Result: In computer simulations, the screw ECoG with insulator showed a significantly higher performance in signal acquisition compared to the condition without insulator. 
In a non-human primate, using the screw ECoG, clear visual-evoked potential (VEP) waveforms were obtained, VEP components were reliably maintained, and the electrode's impedance was stable during the whole evaluation period. Moreover, it showed a higher SNR and a wider frequency band compared to the electroencephalogram (EEG). We also observed that the screw ECoG has a higher sensitivity than the EEG, capturing different responses to various stimuli. Comparison: The screw ECoG showed reliable electrical characteristics and biocompatibility for three months, which shows great promise for chronic implants. These results contrasted with previous reports that the standard screw electrode was only applicable for acute applications. Conclusion: The suggested electrode can offer whole-brain monitoring with high signal quality and minimal invasiveness. The screw ECoG can be used to provide a more in-depth understanding not only of the relationship between functional networks and cognitive behavior, but also of pathomechanisms in brain diseases. |
Rajib Chowdhury; A F M Saifuddin Saif Efficient method to improve human brain sensor activities using proposed neuroheadset device embedded with sensors: A comprehensive study Journal Article International Journal of Software Engineering and Computer Systems, 53 (1), pp. 52–56, 2019. @article{Chowdhury2019, title = {Efficient method to improve human brain sensor activities using proposed neuroheadset device embedded with sensors: A comprehensive study}, author = {Rajib Chowdhury and A F M {Saifuddin Saif}}, year = {2019}, date = {2019-01-01}, journal = {International Journal of Software Engineering and Computer Systems}, volume = {53}, number = {1}, pages = {52--56}, abstract = {The main purpose of this research is to investigate the human brain sensor activities related prior researches towards the needs of an efficient method to improve the human brain sensor activities. Human brain activities mainly measured by brain signal acquired from the brain sensor electrodes positioned on several parts of the brain cortex. Although previous researches investigated human brain activities in various aspects, the improvement of the human brain sensor activities is still unsolved. In today's world, it is very crucial need for improving the sensor activities of the human brain using that human brain improved signal externally. This research demonstrated a comprehensive critical analysis of human brain activities related prior researches to claim for an efficient method integrated with proposed neuroheadset device. This research presented a comprehensive review in various aspects like previous methods, existing frameworks analysis and existing results analysis with the discussion to establish an efficient method for acquiring human brain signal, improving the acquired signal and developing the sensor activities of the human brain using that human brain improved signal. Demonstrated critical review has expected for constituting an efficient method to improve the performance of maneuverability, visualization, subliminal activities and so forth on human brain activities.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The main purpose of this research is to investigate the human brain sensor activities related prior researches towards the needs of an efficient method to improve the human brain sensor activities. Human brain activities mainly measured by brain signal acquired from the brain sensor electrodes positioned on several parts of the brain cortex. Although previous researches investigated human brain activities in various aspects, the improvement of the human brain sensor activities is still unsolved. In today's world, it is very crucial need for improving the sensor activities of the human brain using that human brain improved signal externally. This research demonstrated a comprehensive critical analysis of human brain activities related prior researches to claim for an efficient method integrated with proposed neuroheadset device. This research presented a comprehensive review in various aspects like previous methods, existing frameworks analysis and existing results analysis with the discussion to establish an efficient method for acquiring human brain signal, improving the acquired signal and developing the sensor activities of the human brain using that human brain improved signal. 
The critical review presented here is expected to inform an efficient method for improving maneuverability, visualization, subliminal activity, and other aspects of human brain function. |
Evy Cleeren; Cindy Casteels; Karolien Goffin; Peter Janssen; Wim Van Paesschen Ictal perfusion changes associated with seizure progression in the amygdala kindling model in the rhesus monkey Journal Article Epilepsia, 56 (9), pp. 1366–1375, 2015. @article{Cleeren2015, title = {Ictal perfusion changes associated with seizure progression in the amygdala kindling model in the rhesus monkey}, author = {Evy Cleeren and Cindy Casteels and Karolien Goffin and Peter Janssen and Wim {Van Paesschen}}, doi = {10.1111/epi.13077}, year = {2015}, date = {2015-01-01}, journal = {Epilepsia}, volume = {56}, number = {9}, pages = {1366--1375}, abstract = {OBJECTIVE: Amygdala kindling is a widely used animal model for studying mesial temporal lobe epileptogenesis. In the macaque monkey, electrical amygdala kindling develops slowly and provides an opportunity for investigating ictal perfusion changes during epileptogenesis. METHODS: Two rhesus monkeys were electrically kindled through chronically implanted electrodes in the right amygdala over a period of 16 and 17 months. Ictal perfusion single photon emission computed tomography (SPECT) imaging was performed during each of the four predefined clinical stages. RESULTS: Afterdischarge duration increased slowly over 477 days for monkey K and 515 days for monkey S (18 ± 8 s in stage I; 52 ± 13 s in stage IV). During this time, the animals progressed through four clinical stages ranging from interrupting ongoing behavior to bilateral convulsions. Ictal SPECT perfusion imaging showed well-localized but widely distributed regions of hyperperfusion and hypoperfusion, in both cortical and subcortical structures, at every seizure stage. A large portion of the ictal network was involved in the early stages of epileptogenesis and subsequently expanded over time as seizure severity evolved. SIGNIFICANCE: Our data indicate that the different mesial temporal lobe seizure types occur within a common network affecting several parts of the brain, and that seizure severity may be determined by seizure-induced epileptogenesis within a bihemispheric network that is implicated from the start of the process.}, keywords = {}, pubstate = {published}, tppubtype = {article} } OBJECTIVE: Amygdala kindling is a widely used animal model for studying mesial temporal lobe epileptogenesis. In the macaque monkey, electrical amygdala kindling develops slowly and provides an opportunity for investigating ictal perfusion changes during epileptogenesis. METHODS: Two rhesus monkeys were electrically kindled through chronically implanted electrodes in the right amygdala over a period of 16 and 17 months. Ictal perfusion single photon emission computed tomography (SPECT) imaging was performed during each of the four predefined clinical stages. RESULTS: Afterdischarge duration increased slowly over 477 days for monkey K and 515 days for monkey S (18 ± 8 s in stage I; 52 ± 13 s in stage IV). During this time, the animals progressed through four clinical stages ranging from interrupting ongoing behavior to bilateral convulsions. Ictal SPECT perfusion imaging showed well-localized but widely distributed regions of hyperperfusion and hypoperfusion, in both cortical and subcortical structures, at every seizure stage. A large portion of the ictal network was involved in the early stages of epileptogenesis and subsequently expanded over time as seizure severity evolved. 
SIGNIFICANCE: Our data indicate that the different mesial temporal lobe seizure types occur within a common network affecting several parts of the brain, and that seizure severity may be determined by seizure-induced epileptogenesis within a bihemispheric network that is implicated from the start of the process. |
Moreno I Coco; Antje Nuthmann; Olaf Dimigen Fixation-related brain potentials during semantic integration of object–scene information Journal Article Journal of Cognitive Neuroscience, 32 (4), pp. 571–589, 2019. @article{Coco2019, title = {Fixation-related brain potentials during semantic integration of object–scene information}, author = {Moreno I Coco and Antje Nuthmann and Olaf Dimigen}, doi = {10.1162/jocn_a_01504}, year = {2019}, date = {2019-11-01}, journal = {Journal of Cognitive Neuroscience}, volume = {32}, number = {4}, pages = {571--589}, publisher = {MIT Press - Journals}, abstract = {In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. 
In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing. |
Brian A Coffman; Piyadasa Kodituwakku; Elizabeth L Kodituwakku; Lucinda Romero; Nirupama Muniswamy Sharadamma; David Stone; Julia M Stephen Primary visual response (M100) delays in adolescents with FASD as measured with MEG Journal Article Human Brain Mapping, 34 (11), pp. 2852–2862, 2013. @article{Coffman2013, title = {Primary visual response (M100) delays in adolescents with FASD as measured with MEG}, author = {Brian A Coffman and Piyadasa Kodituwakku and Elizabeth L Kodituwakku and Lucinda Romero and Nirupama Muniswamy Sharadamma and David Stone and Julia M Stephen}, doi = {10.1002/hbm.22110}, year = {2013}, date = {2013-01-01}, journal = {Human Brain Mapping}, volume = {34}, number = {11}, pages = {2852--2862}, abstract = {Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latency of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. 
Source modeling was performed on visual responses to the central and peripheral stimuli, and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latency of the M100 response was delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. |
Eleanor J Cole; Nick E Barraclough; Peter G Enticott Investigating mirror system (MS) activity in adults with ASD when inferring others' intentions using both TMS and EEG Journal Article Journal of Autism and Developmental Disorders, 48 (7), pp. 2350–2367, 2018. @article{Cole2018, title = {Investigating mirror system (MS) activity in adults with ASD when inferring others' intentions using both TMS and EEG}, author = {Eleanor J Cole and Nick E Barraclough and Peter G Enticott}, doi = {10.1007/s10803-018-3492-2}, year = {2018}, date = {2018-07-01}, journal = {Journal of Autism and Developmental Disorders}, volume = {48}, number = {7}, pages = {2350--2367}, publisher = {Springer US}, abstract = {ASD is associated with mentalizing deficits that may correspond with atypical mirror system (MS) activation. We investigated MS activity in adults with and without ASD when inferring others' intentions using TMS-induced motor evoked potentials (MEPs) and mu suppression measured by EEG. Autistic traits were measured for all participants. Our EEG data show, high levels of autistic traits predicted reduced right mu (8–10 Hz) suppression when mentalizing. Higher left mu (8–10 Hz) suppression was associated with superior mentalizing performances. Eye-tracking and TMS data showed no differences associated with autistic traits. Our data suggest ASD is associated with reduced right MS activity when mentalizing, TMS-induced MEPs and mu suppression measure different aspects of MS functioning and the MS is directly involved in inferring intentions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } ASD is associated with mentalizing deficits that may correspond with atypical mirror system (MS) activation. We investigated MS activity in adults with and without ASD when inferring others' intentions using TMS-induced motor evoked potentials (MEPs) and mu suppression measured by EEG. Autistic traits were measured for all participants. Our EEG data show, high levels of autistic traits predicted reduced right mu (8–10 Hz) suppression when mentalizing. Higher left mu (8–10 Hz) suppression was associated with superior mentalizing performances. Eye-tracking and TMS data showed no differences associated with autistic traits. Our data suggest ASD is associated with reduced right MS activity when mentalizing, TMS-induced MEPs and mu suppression measure different aspects of MS functioning and the MS is directly involved in inferring intentions. |
Thérèse Collins Visual target selection and motor planning define attentional enhancement at perceptual processing stages Journal Article Frontiers in Human Neuroscience, 4 , pp. 1–10, 2010. @article{Collins2010, title = {Visual target selection and motor planning define attentional enhancement at perceptual processing stages}, author = {Thér{è}se Collins}, doi = {10.3389/neuro.09.014.2010}, year = {2010}, date = {2010-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {4}, pages = {1--10}, abstract = {Extracting information from the visual field can be achieved by covertly orienting attention to different regions, or by making saccades to bring areas of interest onto the fovea. While much research has shown a link between covert attention and saccade preparation, the nature of that link remains a matter of dispute. Covert presaccadic orienting could result from target selection or from planning a motor act toward an object. We examined the contribution of visual target selection and motor preparation to attentional orienting in humans by dissociating these two habitually aligned processes with saccadic adaptation. Adaptation introduces a discrepancy between the visual target evoking a saccade and the motor metrics of that saccade, which, unbeknownst to the participant, brings the eyes to a different spatial location. We examined attentional orienting by recording event-related potentials (ERPs) to task-irrelevant visual probes flashed during saccade preparation at four equidistant locations including the visual target location and the upcoming motor endpoint. ERPs as early as 130-170 ms post-probe were modulated by attention at both the visual target and motor endpoint locations. These results indicate that both target selection and motor preparation determine the focus of spatial attention, resulting in enhanced processing of stimuli at early visual-perceptual stages.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Extracting information from the visual field can be achieved by covertly orienting attention to different regions, or by making saccades to bring areas of interest onto the fovea. While much research has shown a link between covert attention and saccade preparation, the nature of that link remains a matter of dispute. Covert presaccadic orienting could result from target selection or from planning a motor act toward an object. We examined the contribution of visual target selection and motor preparation to attentional orienting in humans by dissociating these two habitually aligned processes with saccadic adaptation. Adaptation introduces a discrepancy between the visual target evoking a saccade and the motor metrics of that saccade, which, unbeknownst to the participant, brings the eyes to a different spatial location. We examined attentional orienting by recording event-related potentials (ERPs) to task-irrelevant visual probes flashed during saccade preparation at four equidistant locations including the visual target location and the upcoming motor endpoint. ERPs as early as 130-170 ms post-probe were modulated by attention at both the visual target and motor endpoint locations. These results indicate that both target selection and motor preparation determine the focus of spatial attention, resulting in enhanced processing of stimuli at early visual-perceptual stages. |
Thérèse Collins; Pierre O Jacquet TMS over posterior parietal cortex disrupts trans-saccadic visual stability Journal Article Brain Stimulation, 11 (2), pp. 390–399, 2018. @article{Collins2018, title = {TMS over posterior parietal cortex disrupts trans-saccadic visual stability}, author = {Thér{è}se Collins and Pierre O Jacquet}, doi = {10.1016/j.brs.2017.11.019}, year = {2018}, date = {2018-01-01}, journal = {Brain Stimulation}, volume = {11}, number = {2}, pages = {390--399}, abstract = {Background: Saccadic eye movements change the retinal location of visual objects, but we do not experience the visual world as constantly moving, we perceive it as seamless and stable. This visual stability may be achieved by an internal or efference copy of each saccade that, combined with the retinal information, allows the visual system to cancel out or ignore the self-caused retinal motion. Objective: The current study investigated the underlying brain mechanisms responsible for visual stability in humans with online transcranial magnetic stimulation (TMS). Methods: We used two classic tasks that measure efference copy: the double-step task and the in-flight displacement task. The double-step task requires subjects to make two memory-guided saccades, the second of which depends on an accurate internal copy of the first. The in-flight displacement task requires subjects to report the relative location of a (possibly displaced) target across a saccade. In separate experimental sessions, subjects participated in each task while we delivered online 3-pulse TMS over frontal eye fields (FEF), posterior parietal cortex, or vertex. TMS was contingent on saccade execution. Results: Second saccades were not disrupted in the double-step task, but surprisingly, TMS over FEF modified the metrics of the ongoing saccade. Spatiotopic performance in the in-flight displacement task was altered following TMS over parietal cortex, but not FEF or vertex. Conclusion: These results suggest that TMS disrupted eye-centered position coding in the parietal cortex. Trans-saccadic correspondence, and visual stability, may therefore causally depend on parietal maps.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Saccadic eye movements change the retinal location of visual objects, but we do not experience the visual world as constantly moving, we perceive it as seamless and stable. This visual stability may be achieved by an internal or efference copy of each saccade that, combined with the retinal information, allows the visual system to cancel out or ignore the self-caused retinal motion. Objective: The current study investigated the underlying brain mechanisms responsible for visual stability in humans with online transcranial magnetic stimulation (TMS). Methods: We used two classic tasks that measure efference copy: the double-step task and the in-flight displacement task. The double-step task requires subjects to make two memory-guided saccades, the second of which depends on an accurate internal copy of the first. The in-flight displacement task requires subjects to report the relative location of a (possibly displaced) target across a saccade. In separate experimental sessions, subjects participated in each task while we delivered online 3-pulse TMS over frontal eye fields (FEF), posterior parietal cortex, or vertex. TMS was contingent on saccade execution. 
Results: Second saccades were not disrupted in the double-step task, but surprisingly, TMS over FEF modified the metrics of the ongoing saccade. Spatiotopic performance in the in-flight displacement task was altered following TMS over parietal cortex, but not FEF or vertex. Conclusion: These results suggest that TMS disrupted eye-centered position coding in the parietal cortex. Trans-saccadic correspondence, and visual stability, may therefore causally depend on parietal maps. |
Tim H W Cornelissen; Jona Sassenhagen; Melissa L -H Võ Improving free-viewing fixation-related EEG potentials with continuous-time regression Journal Article Journal of Neuroscience Methods, 313 , pp. 77–94, 2019. @article{Cornelissen2019, title = {Improving free-viewing fixation-related EEG potentials with continuous-time regression}, author = {Tim H W Cornelissen and Jona Sassenhagen and Melissa L -H V{õ}}, doi = {10.1016/j.jneumeth.2018.12.010}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience Methods}, volume = {313}, pages = {77--94}, publisher = {Elsevier}, abstract = {Background: In the analysis of combined ET-EEG data, there are several issues with estimating FRPs by averaging. Neural responses associated with fixations will likely overlap with one another in the EEG recording and neural responses change as a function of eye movement characteristics. Especially in tasks that do not constrain eye movements in any way, these issues can become confounds. New method: Here, we propose the use of regression based estimates as an alternative to averaging. Multiple regression can disentangle different influences on the EEG and correct for overlap. It thereby accounts for potential confounds in a way that averaging cannot. Specifically, we test the applicability of the rERP framework, as proposed by Smith and Kutas (2015b), (2017), or Sassenhagen (2018) to combined eye tracking and EEG data from a visual search and a scene memorization task. Results: Results show that the method successfully estimates eye movement related confounds in real experimental data, so that these potential confounds can be accounted for when estimating experimental effects. Comparison with existing methods: The rERP method successfully corrects for overlapping neural responses in instances where averaging does not. As a consequence, baselining can be applied without risking distortions. By estimating a known experimental effect, we show that rERPs provide an estimate with less variance and more accuracy than averaged FRPs. The method therefore provides a practically feasible and favorable alternative to averaging. Conclusions: We conclude that regression based ERPs provide novel opportunities for estimating fixation related EEG in free-viewing experiments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: In the analysis of combined ET-EEG data, there are several issues with estimating FRPs by averaging. Neural responses associated with fixations will likely overlap with one another in the EEG recording and neural responses change as a function of eye movement characteristics. Especially in tasks that do not constrain eye movements in any way, these issues can become confounds. New method: Here, we propose the use of regression based estimates as an alternative to averaging. Multiple regression can disentangle different influences on the EEG and correct for overlap. It thereby accounts for potential confounds in a way that averaging cannot. Specifically, we test the applicability of the rERP framework, as proposed by Smith and Kutas (2015b), (2017), or Sassenhagen (2018) to combined eye tracking and EEG data from a visual search and a scene memorization task. Results: Results show that the method successfully estimates eye movement related confounds in real experimental data, so that these potential confounds can be accounted for when estimating experimental effects. 
Comparison with existing methods: The rERP method successfully corrects for overlapping neural responses in instances where averaging does not. As a consequence, baselining can be applied without risking distortions. By estimating a known experimental effect, we show that rERPs provide an estimate with less variance and more accuracy than averaged FRPs. The method therefore provides a practically feasible and favorable alternative to averaging. Conclusions: We conclude that regression based ERPs provide novel opportunities for estimating fixation related EEG in free-viewing experiments. |
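As an illustration of the overlap-correction idea behind such regression-based estimates (a minimal sketch, not the authors' rERP pipeline): the continuous EEG is modelled as the sum of an unknown fixation-locked response kernel placed at every fixation onset, encoded in a time-expanded (lagged) design matrix and solved by least squares. The sampling rate, kernel length, simulated inter-fixation intervals, and the single-channel simplification below are illustrative assumptions.

# Minimal sketch of overlap-corrected estimation of a fixation-related response.
# Plain time-locked averaging smears overlapping responses together; the
# least-squares solution of the time-expanded design matrix disentangles them.
import numpy as np

def rerp_estimate(eeg, onsets, kernel_len):
    """eeg: 1-D continuous signal (n_samples,); onsets: event onsets in samples."""
    n = eeg.shape[0]
    X = np.zeros((n, kernel_len))
    for t0 in onsets:
        for lag in range(kernel_len):
            if t0 + lag < n:
                X[t0 + lag, lag] += 1.0        # time-expanded (lagged) design matrix
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return beta                                 # overlap-corrected response kernel

# Toy usage: overlapping responses spaced less than one kernel apart are recovered.
rng = np.random.default_rng(0)
kernel_len = 300                                       # e.g. 600 ms at 500 Hz
true_kernel = np.sin(np.linspace(0.0, np.pi, kernel_len))
onsets = np.cumsum(rng.integers(75, 200, size=200))    # inter-event gaps < kernel length
eeg = np.zeros(onsets[-1] + kernel_len)
for t0 in onsets:
    eeg[t0:t0 + kernel_len] += true_kernel
eeg += 0.5 * rng.standard_normal(eeg.shape)
estimate = rerp_estimate(eeg, onsets, kernel_len)

Because every onset is assumed to contribute the same kernel wherever it falls, the regression attributes overlapping activity to the correct events, which simple time-locked averaging cannot do; this is the property exploited when comparing rERPs with averaged FRPs.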
Joshua D Cosman; Kaleb A Lowe; Wolf Zinke; Geoffrey F Woodman; Jeffrey D Schall Prefrontal control of visual distraction Journal Article Current Biology, 28 (3), pp. 414–420, 2018. @article{Cosman2018, title = {Prefrontal control of visual distraction}, author = {Joshua D Cosman and Kaleb A Lowe and Wolf Zinke and Geoffrey F Woodman and Jeffrey D Schall}, doi = {10.1016/j.cub.2017.12.023}, year = {2018}, date = {2018-02-01}, journal = {Current Biology}, volume = {28}, number = {3}, pages = {414--420}, abstract = {Avoiding distraction by conspicuous but irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured in modulation of both single neurons and event-related electrical potentials (ERPs) on the cranial surface [1, 2]. When irrelevant information is particularly conspicuous, it can distract attention and interfere with the selection of behaviorally relevant information. Such distraction can be minimized via top-down control [3–5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and debated [6–9]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and ERPs over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, when the salient distractor was successfully ignored, but not otherwise, we observed robust suppression of salient distractor representations. Like target selection, the distractor suppression was observed in prefrontal cortex before it appeared over extrastriate cortical areas. Furthermore, all prefrontal neurons that showed suppression of the task-irrelevant distractor also contributed to selecting the target. This suggests a common prefrontal mechanism is responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction, and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Avoiding distraction by conspicuous but irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured in modulation of both single neurons and event-related electrical potentials (ERPs) on the cranial surface [1, 2]. When irrelevant information is particularly conspicuous, it can distract attention and interfere with the selection of behaviorally relevant information. Such distraction can be minimized via top-down control [3–5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and debated [6–9]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and ERPs over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, when the salient distractor was successfully ignored, but not otherwise, we observed robust suppression of salient distractor representations. Like target selection, the distractor suppression was observed in prefrontal cortex before it appeared over extrastriate cortical areas. 
Furthermore, all prefrontal neurons that showed suppression of the task-irrelevant distractor also contributed to selecting the target. This suggests a common prefrontal mechanism is responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction, and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression. |
Michael Dambacher; Reinhold Kliegl Synchronizing timelines: Relations between fixation durations and N400 amplitudes during sentence reading Journal Article Brain Research, 1155 (1), pp. 147–162, 2007. @article{Dambacher2007, title = {Synchronizing timelines: Relations between fixation durations and N400 amplitudes during sentence reading}, author = {Michael Dambacher and Reinhold Kliegl}, doi = {10.1016/j.brainres.2007.04.027}, year = {2007}, date = {2007-01-01}, journal = {Brain Research}, volume = {1155}, number = {1}, pages = {147--162}, abstract = {We examined relations between eye movements (single-fixation durations) and RSVP-based event-related potentials (ERPs; N400s) recorded during reading the same sentences in two independent experiments. Longer fixation durations correlated with larger N400 amplitudes. Word frequency and predictability of the fixated word as well as the predictability of the upcoming word accounted for this covariance in a path-analytic model. Moreover, larger N400 amplitudes entailed longer fixation durations on the next word, a relation accounted for by word frequency. This pattern offers a neurophysiological correlate for the lag-word frequency effect on fixation durations: word processing is reliably expressed not only in fixation durations on currently fixated words, but also in those on subsequently fixated words.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We examined relations between eye movements (single-fixation durations) and RSVP-based event-related potentials (ERPs; N400s) recorded during reading the same sentences in two independent experiments. Longer fixation durations correlated with larger N400 amplitudes. Word frequency and predictability of the fixated word as well as the predictability of the upcoming word accounted for this covariance in a path-analytic model. Moreover, larger N400 amplitudes entailed longer fixation durations on the next word, a relation accounted for by word frequency. This pattern offers a neurophysiological correlate for the lag-word frequency effect on fixation durations: word processing is reliably expressed not only in fixation durations on currently fixated words, but also in those on subsequently fixated words. |
Sangita Dandekar; Claudio M Privitera; Thom Carney; Stanley A Klein Neural saccadic response estimation during natural viewing Sangita Journal Article Journal of Neurophysiology, 107 (4), pp. 1776–1790, 2011. @article{Dandekar2011, title = {Neural saccadic response estimation during natural viewing Sangita}, author = {Sangita Dandekar and Claudio M Privitera and Thom Carney and Stanley A Klein}, doi = {10.1111/j.1469-7998.1981.tb03488.x}, year = {2011}, date = {2011-01-01}, journal = {Journal of Neurophysiology}, volume = {107}, number = {4}, pages = {1776--1790}, abstract = {Studying neural activity during natural viewing conditions is not often attempted. Isolating the neural response of a single saccade is necessary to study neural activity during natural viewing; however, the close temporal spacing of saccades that occurs during natural viewing makes it difficult to determine the response to a single saccade. Herein, a general linear model (GLM) approach is applied to estimate the EEG neural saccadic response for different segments of the saccadic main sequence separately. It is determined that, in visual search conditions, neural responses estimated by conventional event-related averaging are significantly and systematically distorted relative to GLM estimates due to the close temporal spacing of saccades during visual search. Before the GLM is applied, analyses are applied that demonstrate that saccades during visual search with intersaccadic spacings as low as 100-150 ms do not exhibit significant refractory effects. Therefore, saccades displaying different intersaccadic spacings during visual search can be modeled using the same regressor in a GLM. With the use of the GLM approach, neural responses were separately estimated for five different ranges of saccade amplitudes during visual search. Occipital responses time locked to the onsets of saccades during visual search were found to account for, on average, 79 percent of the variance of EEG activity in a window 90-200 ms after the onsets of saccades for all five saccade amplitude ranges that spanned a range of 0.2-6.0 degrees. A GLM approach was also used to examine the lateralized ocular artifacts associated with saccades. Possible extensions of the methods presented here to account for the superposition of microsaccades in event-related EEG studies conducted in nominal fixation conditions are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Studying neural activity during natural viewing conditions is not often attempted. Isolating the neural response of a single saccade is necessary to study neural activity during natural viewing; however, the close temporal spacing of saccades that occurs during natural viewing makes it difficult to determine the response to a single saccade. Herein, a general linear model (GLM) approach is applied to estimate the EEG neural saccadic response for different segments of the saccadic main sequence separately. It is determined that, in visual search conditions, neural responses estimated by conventional event-related averaging are significantly and systematically distorted relative to GLM estimates due to the close temporal spacing of saccades during visual search. Before the GLM is applied, analyses are applied that demonstrate that saccades during visual search with intersaccadic spacings as low as 100-150 ms do not exhibit significant refractory effects. 
Therefore, saccades displaying different intersaccadic spacings during visual search can be modeled using the same regressor in a GLM. With the use of the GLM approach, neural responses were separately estimated for five different ranges of saccade amplitudes during visual search. Occipital responses time locked to the onsets of saccades during visual search were found to account for, on average, 79 percent of the variance of EEG activity in a window 90-200 ms after the onsets of saccades for all five saccade amplitude ranges that spanned a range of 0.2-6.0 degrees. A GLM approach was also used to examine the lateralized ocular artifacts associated with saccades. Possible extensions of the methods presented here to account for the superposition of microsaccades in event-related EEG studies conducted in nominal fixation conditions are discussed. |
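To make the GLM logic concrete, the sketch below shows, under assumed parameters, how saccade-locked responses for several amplitude bins could be estimated jointly from the continuous EEG with one lagged regressor set per bin, and how the variance accounted for in a post-saccadic window (here 90-200 ms) might be quantified. Bin edges, kernel length, function names and the single-channel simplification are illustrative assumptions, not the authors' implementation.

# Sketch: one GLM with a separate lagged regressor set per saccade-amplitude bin,
# so responses to small and large saccades are estimated jointly despite overlap.
import numpy as np

def binned_saccade_glm(eeg, onsets, amplitudes, bin_edges, kernel_len):
    n = eeg.shape[0]
    n_bins = len(bin_edges) - 1
    bins = np.digitize(amplitudes, bin_edges) - 1            # bin index 0 .. n_bins-1
    X = np.zeros((n, n_bins * kernel_len))
    for t0, b in zip(onsets, bins):
        if 0 <= b < n_bins:
            for lag in range(kernel_len):
                if t0 + lag < n:
                    X[t0 + lag, b * kernel_len + lag] += 1.0
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    kernels = beta.reshape(n_bins, kernel_len)               # one response per bin
    return kernels, X @ beta                                 # kernels and fitted signal

def variance_explained(eeg, fitted, onsets, fs, window=(0.090, 0.200)):
    """Fraction of EEG variance accounted for in a post-saccadic window (R squared)."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    idx = np.unique(np.concatenate([np.arange(t + lo, t + hi) for t in onsets]))
    idx = idx[idx < eeg.shape[0]]
    resid = eeg[idx] - fitted[idx]
    return 1.0 - resid.var() / eeg[idx].var()

The same design-matrix machinery could, in principle, accommodate additional regressors for lateralized ocular artifacts or microsaccade onsets, the extensions the abstract mentions.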
Sangita Dandekar; Jian Ding; Claudio M Privitera; Thom Carney; Stanley A Klein The fixation and saccade P3 Journal Article PLoS ONE, 7 (11), pp. e48761, 2012. @article{Dandekar2012, title = {The fixation and saccade P3}, author = {Sangita Dandekar and Jian Ding and Claudio M Privitera and Thom Carney and Stanley A Klein}, doi = {10.1371/journal.pone.0048761}, year = {2012}, date = {2012-01-01}, journal = {PLoS ONE}, volume = {7}, number = {11}, pages = {e48761}, abstract = {Although most instances of object recognition during natural viewing occur in the presence of saccades, the neural correlates of objection recognition have almost exclusively been examined during fixation. Recent studies have indicated that there are post-saccadic modulations of neural activity immediately following eye movement landing; however, whether post-saccadic modulations affect relatively late occurring cognitive components such as the P3 has not been explored. The P3 as conventionally measured at fixation is commonly used in brain computer interfaces, hence characterizing the post-saccadic P3 could aid in the development of improved brain computer interfaces that allow for eye movements. In this study, the P3 observed after saccadic landing was compared to the P3 measured at fixation. No significant differences in P3 start time, temporal persistence, or amplitude were found between fixation and saccade trials. Importantly, sensory neural responses canceled in the target minus distracter comparisons used to identify the P3. Our results indicate that relatively late occurring cognitive neural components such as the P3 are likely less sensitive to post saccadic modulations than sensory neural components and other neural activity occurring shortly after eye movement landing. Furthermore, due to the similarity of the fixation and saccade P3, we conclude that the P3 following saccadic landing could possibly be used as a viable signal in brain computer interfaces allowing for eye movements.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Although most instances of object recognition during natural viewing occur in the presence of saccades, the neural correlates of objection recognition have almost exclusively been examined during fixation. Recent studies have indicated that there are post-saccadic modulations of neural activity immediately following eye movement landing; however, whether post-saccadic modulations affect relatively late occurring cognitive components such as the P3 has not been explored. The P3 as conventionally measured at fixation is commonly used in brain computer interfaces, hence characterizing the post-saccadic P3 could aid in the development of improved brain computer interfaces that allow for eye movements. In this study, the P3 observed after saccadic landing was compared to the P3 measured at fixation. No significant differences in P3 start time, temporal persistence, or amplitude were found between fixation and saccade trials. Importantly, sensory neural responses canceled in the target minus distracter comparisons used to identify the P3. Our results indicate that relatively late occurring cognitive neural components such as the P3 are likely less sensitive to post saccadic modulations than sensory neural components and other neural activity occurring shortly after eye movement landing. 
Furthermore, due to the similarity of the fixation and saccade P3, we conclude that the P3 following saccadic landing could possibly be used as a viable signal in brain computer interfaces allowing for eye movements. |
Antea D'Andrea; Federico Chella; Tom R Marshall; Vittorio Pizzella; Gian Luca Romani; Ole Jensen; Laura Marzetti Alpha and alpha-beta phase synchronization mediate the recruitment of the visuospatial attention network through the Superior Longitudinal Fasciculus Journal Article NeuroImage, 188 , pp. 722–732, 2019. @article{DAndrea2019, title = {Alpha and alpha-beta phase synchronization mediate the recruitment of the visuospatial attention network through the Superior Longitudinal Fasciculus}, author = {Antea D'Andrea and Federico Chella and Tom R Marshall and Vittorio Pizzella and Gian Luca Romani and Ole Jensen and Laura Marzetti}, doi = {10.1016/j.neuroimage.2018.12.056}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {188}, pages = {722--732}, abstract = {It is well known that attentional selection of relevant information relies on local synchronization of alpha band neuronal oscillations in visual cortices for inhibition of distracting inputs. Additionally, evidence for long-range coupling of neuronal oscillations between visual cortices and regions engaged in the anticipation of upcoming stimuli has been more recently provided. Nevertheless, on the one hand the relation between long-range functional coupling and anatomical connections is still to be assessed, and, on the other hand, the specific role of the alpha and beta frequency bands in the different processes underlying visuo-spatial attention still needs further clarification. We address these questions using measures of linear (frequency-specific) and nonlinear (cross-frequency) phase-synchronization in a cohort of 28 healthy subjects using magnetoencephalography. We show that alpha band phase-synchronization is modulated by the orienting of attention according to a parieto-occipital top-down mechanism reflecting behavior, and its hemispheric asymmetry is predicted by volume's asymmetry of specific tracts of the Superior-Longitudinal-Fasciculus. We also show that a network comprising parietal regions and the right putative Frontal-Eye-Field, but not the left, is recruited in the deployment of spatial attention through an alpha-beta cross-frequency coupling. Overall, we demonstrate that the visuospatial attention network features subsystems indexed by characteristic spectral fingerprints, playing different functional roles in the anticipation of upcoming stimuli and with diverse relation to fiber tracts.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It is well known that attentional selection of relevant information relies on local synchronization of alpha band neuronal oscillations in visual cortices for inhibition of distracting inputs. Additionally, evidence for long-range coupling of neuronal oscillations between visual cortices and regions engaged in the anticipation of upcoming stimuli has been more recently provided. Nevertheless, on the one hand the relation between long-range functional coupling and anatomical connections is still to be assessed, and, on the other hand, the specific role of the alpha and beta frequency bands in the different processes underlying visuo-spatial attention still needs further clarification. We address these questions using measures of linear (frequency-specific) and nonlinear (cross-frequency) phase-synchronization in a cohort of 28 healthy subjects using magnetoencephalography. We show that alpha band phase-synchronization is modulated by the orienting of attention according to a parieto-occipital top-down mechanism reflecting behavior, and its hemispheric asymmetry is predicted by volume's asymmetry of specific tracts of the Superior-Longitudinal-Fasciculus.
We also show that a network comprising parietal regions and the right putative Frontal-Eye-Field, but not the left, is recruited in the deployment of spatial attention through an alpha-beta cross-frequency coupling. Overall, we demonstrate that the visuospatial attention network features subsystems indexed by characteristic spectral fingerprints, playing different functional roles in the anticipation of upcoming stimuli and with diverse relation to fiber tracts. |
Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K Engel Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations Journal Article NeuroImage, 224 , pp. 1–17, 2021. @article{Daume2021, title = {Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations}, author = {Jonathan Daume and Peng Wang and Alexander Maye and Dan Zhang and Andreas K Engel}, doi = {10.1016/j.neuroimage.2020.117376}, year = {2021}, date = {2021-01-01}, journal = {NeuroImage}, volume = {224}, pages = {1--17}, publisher = {Elsevier Inc.}, abstract = {The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context. |
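Inter-trial phase consistency of the kind reported here is conventionally computed as the length of the mean resultant vector of single-trial phases, ITPC(t) = |(1/K) sum_k exp(i phi_k(t))|. The sketch below is a minimal single-sensor illustration assuming a 0.5-4 Hz delta band, a Butterworth filter and Hilbert-derived phases; none of these choices is taken from the paper itself.

# Minimal sketch of delta-band inter-trial phase consistency (ITPC).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_itpc(trials, fs, band=(0.5, 4.0), order=3):
    """trials: array (n_trials, n_samples) of single-trial time courses."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=-1)           # zero-phase delta-band filter
    phases = np.angle(hilbert(filtered, axis=-1))        # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phases), axis=0))  # ITPC per time sample

# Toy usage: phase-aligned trials give ITPC near 1, purely random phases near 0.
rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 2.0, 1 / fs)
aligned = np.sin(2 * np.pi * 2 * t) + 0.5 * rng.standard_normal((50, t.size))
itpc = delta_itpc(aligned, fs)

Because ITPC depends only on phase angles, it can increase without any accompanying power increase, which is the dissociation the study relies on to argue for phase resets rather than purely evoked activity.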
Marco Davare; A Zénon; Gilles Pourtois; Michel Desmurget; Etienne Olivier Role of the medial part of the intraparietal sulcus in implementing movement direction Journal Article Cerebral Cortex, 22 (6), pp. 1382–1394, 2012. @article{Davare2012, title = {Role of the medial part of the intraparietal sulcus in implementing movement direction}, author = {Marco Davare and A Zénon and Gilles Pourtois and Michel Desmurget and Etienne Olivier}, doi = {10.1093/cercor/bhr210}, year = {2012}, date = {2012-01-01}, journal = {Cerebral Cortex}, volume = {22}, number = {6}, pages = {1382--1394}, abstract = {The contribution of the posterior parietal cortex (PPC) to visually guided movements has been originally inferred from observations made in patients suffering from optic ataxia. Subsequent electrophysiological studies in monkeys and functional imaging data in humans have corroborated the key role played by the PPC in sensorimotor transformations underlying goal-directed movements, although the exact contribution of this structure remains debated. Here, we used transcranial magnetic stimulation (TMS) to interfere transiently with the function of the left or right medial part of the intraparietal sulcus (mIPS) in healthy volunteers performing visually guided movements with the right hand. We found that a "virtual lesion" of either mIPS increased the scattering in initial movement direction (DIR), leading to longer trajectory and prolonged movement time, but only when TMS was delivered 100-160 ms before movement onset and for movements directed toward contralateral targets. Control experiments showed that deficits in DIR consequent to mIPS virtual lesions resulted from an inappropriate implementation of the motor command underlying the forthcoming movement and not from an inaccurate computation of the target localization. The present study indicates that mIPS plays a causal role in implementing specifically the direction vector of visually guided movements toward objects situated in the contralateral hemifield.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The contribution of the posterior parietal cortex (PPC) to visually guided movements has been originally inferred from observations made in patients suffering from optic ataxia. Subsequent electrophysiological studies in monkeys and functional imaging data in humans have corroborated the key role played by the PPC in sensorimotor transformations underlying goal-directed movements, although the exact contribution of this structure remains debated. Here, we used transcranial magnetic stimulation (TMS) to interfere transiently with the function of the left or right medial part of the intraparietal sulcus (mIPS) in healthy volunteers performing visually guided movements with the right hand. We found that a "virtual lesion" of either mIPS increased the scattering in initial movement direction (DIR), leading to longer trajectory and prolonged movement time, but only when TMS was delivered 100-160 ms before movement onset and for movements directed toward contralateral targets. Control experiments showed that deficits in DIR consequent to mIPS virtual lesions resulted from an inappropriate implementation of the motor command underlying the forthcoming movement and not from an inaccurate computation of the target localization. The present study indicates that mIPS plays a causal role in implementing specifically the direction vector of visually guided movements toward objects situated in the contralateral hemifield. |
Ido Davidesco; Michal Harel; Michal Ramot; Uri Kramer; Svetlana Kipervasser; Fani Andelman; Miri Y Neufeld; Gadi Goelman; Itzhak Fried; Rafael Malach Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy Journal Article Journal of Neuroscience, 33 (3), pp. 1228–1240, 2013. @article{Davidesco2013, title = {Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy}, author = {Ido Davidesco and Michal Harel and Michal Ramot and Uri Kramer and Svetlana Kipervasser and Fani Andelman and Miri Y Neufeld and Gadi Goelman and Itzhak Fried and Rafael Malach}, doi = {10.1523/JNEUROSCI.3181-12.2013}, year = {2013}, date = {2013-01-01}, journal = {Journal of Neuroscience}, volume = {33}, number = {3}, pages = {1228--1240}, abstract = {One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation ofboth spatial and object-based attention. They were presented with composite stimuli, consisting ofa small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one ofthe objects. We found a consistent increase in broadband high-frequency (30–90Hz) power, but not in visual evoked potentials, associated with spatial attention starting withV1/V2 and continuing throughout the visual hierarchy. The magnitude ofthe attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.}, keywords = {}, pubstate = {published}, tppubtype = {article} } One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation ofboth spatial and object-based attention. They were presented with composite stimuli, consisting ofa small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one ofthe objects. 
We found a consistent increase in broadband high-frequency (30–90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties. |
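Broadband high-frequency power of the kind analyzed here is often summarized as the squared Hilbert envelope of the band-passed signal expressed relative to a pre-stimulus baseline. The sketch below assumes a 30-90 Hz band, a fourth-order Butterworth filter and an arbitrary baseline window; it is a generic illustration, not the authors' pipeline.

# Sketch: trial-averaged broadband high-frequency (30-90 Hz) power as percent
# change from a pre-stimulus baseline, per time sample of a single electrode.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def broadband_power(trials, fs, band=(30.0, 90.0), baseline=(-0.2, 0.0), t_onset=0.5):
    """trials: (n_trials, n_samples); t_onset: stimulus onset in s from trial start."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    envelope_sq = np.abs(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1)) ** 2
    power = envelope_sq.mean(axis=0)                        # average over trials
    i0 = int((t_onset + baseline[0]) * fs)
    i1 = int((t_onset + baseline[1]) * fs)
    base = power[i0:i1].mean()
    return 100.0 * (power - base) / base                    # percent signal change

Attentional modulation in this framework would then be the difference in such a power measure between attend-toward and attend-away conditions at the same electrode.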
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P Liversedge A co-registration investigation of inter-word spacing and parafoveal preview: Eye movements and fixation-related potentials Journal Article PLoS ONE, 14 (12), pp. e0225819, 2019. @article{Degno2019, title = {A co-registration investigation of inter-word spacing and parafoveal preview: Eye movements and fixation-related potentials}, author = {Federica Degno and Otto Loberg and Chuanli Zang and Manman Zhang and Nick Donnelly and Simon P Liversedge}, doi = {10.1371/journal.pone.0225819}, year = {2019}, date = {2019-01-01}, journal = {PLoS ONE}, volume = {14}, number = {12}, pages = {e0225819}, abstract = {Participants' eye movements (EMs) and EEG signal were simultaneously recorded to examine foveal and parafoveal processing during sentence reading. All the words in the sentence were manipulated for inter-word spacing (intact spaces vs. spaces replaced by a random letter) and parafoveal preview (identical preview vs. random letter string preview). We observed disruption for unspaced text and invalid preview conditions in both EMs and fixation- related potentials (FRPs). Unspaced and invalid preview conditions received longer reading times than spaced and valid preview conditions. In addition, the FRP data showed that unspaced previews disrupted reading in earlier time windows of analysis, compared to string preview conditions. Moreover, the effect of parafoveal preview was greater for spaced relative to unspaced conditions, in both EMs and FRPs. These findings replicate well-established preview effects, provide novel insight into the neural correlates of reading with and without inter-word spacing and suggest that spatial selection precedes lexical processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Participants' eye movements (EMs) and EEG signal were simultaneously recorded to examine foveal and parafoveal processing during sentence reading. All the words in the sentence were manipulated for inter-word spacing (intact spaces vs. spaces replaced by a random letter) and parafoveal preview (identical preview vs. random letter string preview). We observed disruption for unspaced text and invalid preview conditions in both EMs and fixation- related potentials (FRPs). Unspaced and invalid preview conditions received longer reading times than spaced and valid preview conditions. In addition, the FRP data showed that unspaced previews disrupted reading in earlier time windows of analysis, compared to string preview conditions. Moreover, the effect of parafoveal preview was greater for spaced relative to unspaced conditions, in both EMs and FRPs. These findings replicate well-established preview effects, provide novel insight into the neural correlates of reading with and without inter-word spacing and suggest that spatial selection precedes lexical processing. |
Federica Degno; Otto Loberg; Chuanli Zang; Manman Zhang; Nick Donnelly; Simon P Liversedge Parafoveal previews and lexical frequency in natural reading: Evidence from eye movements and fixation-related potentials. Journal Article Journal of Experimental Psychology: General, 148 (3), pp. 453–474, 2019. @article{Degno2019a, title = {Parafoveal previews and lexical frequency in natural reading: Evidence from eye movements and fixation-related potentials.}, author = {Federica Degno and Otto Loberg and Chuanli Zang and Manman Zhang and Nick Donnelly and Simon P Liversedge}, doi = {10.1037/xge0000494}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: General}, volume = {148}, number = {3}, pages = {453--474}, abstract = {Participants' eye movements and electroencephalogram (EEG) signal were recorded as they read sentences displayed according to the gaze-contingent boundary paradigm. Two target words in each sentence were manipulated for lexical frequency (high vs. low frequency) and parafoveal preview of each target word (identical vs. string of random letters vs. string of Xs). Eye movement data revealed visual parafoveal-on-foveal (PoF) effects, as well as foveal visual and orthographic preview effects and word frequency effects. Fixation-related potentials (FRPs) showed visual and orthographic PoF effects as well as foveal visual and orthographic preview effects. Our results replicated the early preview positivity effect (Dimigen, Kliegl, & Sommer, 2012) in the X-string preview condition, and revealed different neural correlates associated with a preview comprised of a string of random letters relative to a string of Xs. The former effects seem likely to reflect difficulty associated with the integration of parafoveal and foveal information, as well as feature overlap, while the latter reflect inhibition, and potentially disruption, to processing underlying reading. Interestingly, and consistent with Kretzschmar, Schlesewsky, and Staub (2015), no frequency effect was reflected in the FRP measures. The findings provide insight into the neural correlates of parafoveal processing and written word recognition in reading and demonstrate the value of utilizing ecologically valid paradigms to study well established phenomena that occur as text is read naturally.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Participants' eye movements and electroencephalogram (EEG) signal were recorded as they read sentences displayed according to the gaze-contingent boundary paradigm. Two target words in each sentence were manipulated for lexical frequency (high vs. low frequency) and parafoveal preview of each target word (identical vs. string of random letters vs. string of Xs). Eye movement data revealed visual parafoveal-on-foveal (PoF) effects, as well as foveal visual and orthographic preview effects and word frequency effects. Fixation-related potentials (FRPs) showed visual and orthographic PoF effects as well as foveal visual and orthographic preview effects. Our results replicated the early preview positivity effect (Dimigen, Kliegl, & Sommer, 2012) in the X-string preview condition, and revealed different neural correlates associated with a preview comprised of a string of random letters relative to a string of Xs. 
The former effects seem likely to reflect difficulty associated with the integration of parafoveal and foveal information, as well as feature overlap, while the latter reflect inhibition of, and potentially disruption to, the processing underlying reading. Interestingly, and consistent with Kretzschmar, Schlesewsky, and Staub (2015), no frequency effect was reflected in the FRP measures. The findings provide insight into the neural correlates of parafoveal processing and written word recognition in reading and demonstrate the value of utilizing ecologically valid paradigms to study well-established phenomena that occur as text is read naturally. |
Peter De Lissa; Roberto Caldara; Victoria Nicholls; Sebastien Miellet In pursuit of visual attention: SSVEP frequency-tagging moving targets Journal Article PLoS ONE, 15 (8), pp. 1–15, 2020. @article{DeLissa2020, title = {In pursuit of visual attention: SSVEP frequency-tagging moving targets}, author = {Peter {De Lissa} and Roberto Caldara and Victoria Nicholls and Sebastien Miellet}, doi = {10.1371/journal.pone.0236967}, year = {2020}, date = {2020-01-01}, journal = {PLoS ONE}, volume = {15}, number = {8}, pages = {1--15}, abstract = {Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes. |
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco Differential contributions to the interception of occluded ballistic trajectories by the temporoparietal junction, area hMT/V5+, and the intraparietal cortex Journal Article Journal of Neurophysiology, 118 (3), pp. 1809–1823, 2017. @article{DelleMonache2017, title = {Differential contributions to the interception of occluded ballistic trajectories by the temporoparietal junction, area hMT/V5+, and the intraparietal cortex}, author = {Sergio {Delle Monache} and Francesco Lacquaniti and Gianfranco Bosco}, doi = {10.1152/jn.00068.2017}, year = {2017}, date = {2017-01-01}, journal = {Journal of Neurophysiology}, volume = {118}, number = {3}, pages = {1809--1823}, abstract = {The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS differentially affected interceptive timing. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses, being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying that this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. 
In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS differentially affected interceptive timing. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses, being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying that this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. |
Gerard Derosiere; Pierre-Alexandre Klein; Sylvie Nozaradan; Alexandre Zénon; André Mouraux; Julie Duque Visuomotor correlates of conflict expectation in the context of motor decisions Journal Article Journal of Neuroscience, 38 (44), pp. 9486–9504, 2018. @article{Derosiere2018, title = {Visuomotor correlates of conflict expectation in the context of motor decisions}, author = {Gerard Derosiere and Pierre-Alexandre Klein and Sylvie Nozaradan and Alexandre Zénon and André Mouraux and Julie Duque}, doi = {10.1523/jneurosci.0623-18.2018}, year = {2018}, date = {2018-01-01}, journal = {Journal of Neuroscience}, volume = {38}, number = {44}, pages = {9486--9504}, abstract = {Many behaviors require choosing between conflicting options competing against each other in visuomotor areas. Such choices can benefit from top-down control processes engaging frontal areas in advance of conflict when it is anticipated. Yet, very little is known about how this proactive control system shapes the visuomotor competition. Here, we used electroencephalography in human subjects (male and female) to identify the visual and motor correlates of conflict expectation in a version of the Eriksen Flanker task that required left or right responses according to the direction of a central target arrow surrounded by congruent or incongruent (conflicting) flankers. Visual conflict was either highly expected (it occurred in 80% of trials; mostly incongruent blocks) or very unlikely (20% of trials; mostly congruent blocks). We evaluated selective attention in the visual cortex by recording target- and flanker-related steady-state visual-evoked potentials (SSVEPs) and probed action selection by measuring response-locked potentials (RLPs) in the motor cortex. Conflict expectation enhanced accuracy in incongruent trials, but this improvement occurred at the cost of speed in congruent trials. Intriguingly, this behavioral adjustment occurred while visuomotor activity was less finely tuned: target-related SSVEPs were smaller while flanker-related SSVEPs were higher in mostly incongruent blocks than in mostly congruent blocks, and incongruent trials were associated with larger RLPs in the ipsilateral (nonselected) motor cortex. Hence, our data suggest that conflict expectation recruits control processes that augment the tolerance for inappropriate visuomotor activations (rather than processes that downregulate their amplitude), allowing for overflow activity to occur without having it turn into the selection of an incorrect response.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many behaviors require choosing between conflicting options competing against each other in visuomotor areas. Such choices can benefit from top-down control processes engaging frontal areas in advance of conflict when it is anticipated. Yet, very little is known about how this proactive control system shapes the visuomotor competition. Here, we used electroencephalography in human subjects (male and female) to identify the visual and motor correlates of conflict expectation in a version of the Eriksen Flanker task that required left or right responses according to the direction of a central target arrow surrounded by congruent or incongruent (conflicting) flankers. Visual conflict was either highly expected (it occurred in 80% of trials; mostly incongruent blocks) or very unlikely (20% of trials; mostly congruent blocks). 
We evaluated selective attention in the visual cortex by recording target- and flanker-related steady-state visual-evoked potentials (SSVEPs) and probed action selection by measuring response-locked potentials (RLPs) in the motor cortex. Conflict expectation enhanced accuracy in incongruent trials, but this improvement occurred at the cost of speed in congruent trials. Intriguingly, this behavioral adjustment occurred while visuomotor activity was less finely tuned: target-related SSVEPs were smaller while flanker-related SSVEPs were higher in mostly incongruent blocks than in mostly congruent blocks, and incongruent trials were associated with larger RLPs in the ipsilateral (nonselected) motor cortex. Hence, our data suggest that conflict expectation recruits control processes that augment the tolerance for inappropriate visuomotor activations (rather than processes that downregulate their amplitude), allowing for overflow activity to occur without having it turn into the selection of an incorrect response. |
Andrea Desantis; Adrien Chan-Hon-Tong; Thérèse Collins; Hinze Hogendoorn; Patrick Cavanagh Decoding the temporal dynamics of covert spatial attention using multivariate EEG analysis: Contributions of raw amplitude and alpha power Journal Article Frontiers in Human Neuroscience, 14 , pp. 1–14, 2020. @article{Desantis2020, title = {Decoding the temporal dynamics of covert spatial attention using multivariate EEG analysis: Contributions of raw amplitude and alpha power}, author = {Andrea Desantis and Adrien Chan-Hon-Tong and Thér{è}se Collins and Hinze Hogendoorn and Patrick Cavanagh}, doi = {10.3389/fnhum.2020.570419}, year = {2020}, date = {2020-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {14}, pages = {1--14}, abstract = {Attention can be oriented in space covertly without the need of eye movements. We used multivariate pattern classification analyses (MVPA) to investigate whether the time course of the deployment of covert spatial attention leading up to the observer's perceptual decision can be decoded from both EEG alpha power and raw activity traces. Decoding attention from these signals can help determine whether raw EEG signals and alpha power reflect the same or distinct features of attentional selection. Using a classical cueing task, we showed that the orientation of covert spatial attention can be decoded by both signals. However, raw activity and alpha power may reflect different features of spatial attention, with alpha power more associated with the orientation of covert attention in space and raw activity with the influence of attention on perceptual processes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Attention can be oriented in space covertly without the need of eye movements. We used multivariate pattern classification analyses (MVPA) to investigate whether the time course of the deployment of covert spatial attention leading up to the observer's perceptual decision can be decoded from both EEG alpha power and raw activity traces. Decoding attention from these signals can help determine whether raw EEG signals and alpha power reflect the same or distinct features of attentional selection. Using a classical cueing task, we showed that the orientation of covert spatial attention can be decoded by both signals. However, raw activity and alpha power may reflect different features of spatial attention, with alpha power more associated with the orientation of covert attention in space and raw activity with the influence of attention on perceptual processes. |
Nuno Alexandre De Sá Teixeira; Gianfranco Bosco; Sergio Delle Monache; Francesco Lacquaniti The role of cortical areas hMT/V5+ and TPJ on the magnitude of representational momentum and representational gravity: A transcranial magnetic stimulation study Journal Article Experimental Brain Research, 237 (12), pp. 3375–3390, 2019. @article{DeSaTeixeira2019, title = {The role of cortical areas hMT/V5+ and TPJ on the magnitude of representational momentum and representational gravity: A transcranial magnetic stimulation study}, author = {Nuno Alexandre {De Sá Teixeira} and Gianfranco Bosco and Sergio {Delle Monache} and Francesco Lacquaniti}, doi = {10.1007/s00221-019-05683-z}, year = {2019}, date = {2019-01-01}, journal = {Experimental Brain Research}, volume = {237}, number = {12}, pages = {3375--3390}, publisher = {Springer Berlin Heidelberg}, abstract = {The perceived vanishing location of a moving target is systematically displaced forward, in the direction of motion (representational momentum), and downward, in the direction of gravity (representational gravity). Despite a wealth of research on the factors that modulate these phenomena, little is known regarding their neurophysiological substrates. The present experiment aims to explore the role played by cortical areas hMT/V5+, linked to the processing of visual motion, and TPJ, thought to support the functioning of an internal model of gravity, in modulating both effects. Participants were required to perform a standard spatial localization task while the activity of the right hMT/V5+ or TPJ sites was selectively disrupted with an offline continuous theta-burst stimulation (cTBS) protocol, interspersed with control blocks with no stimulation. Eye movements were recorded during all spatial localizations. Results revealed an increase in representational gravity contingent on the disruption of the activity of hMT/V5+ and, conversely, some evidence suggested a larger representational momentum when TPJ was stimulated. Furthermore, stimulation of hMT/V5+ led to a decreased ocular overshoot and to a time-dependent downward drift of gaze location. These outcomes suggest that a reciprocal balance between perceived kinematics and anticipated dynamics might modulate these spatial localization responses, compatible with a push–pull mechanism.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The perceived vanishing location of a moving target is systematically displaced forward, in the direction of motion (representational momentum), and downward, in the direction of gravity (representational gravity). Despite a wealth of research on the factors that modulate these phenomena, little is known regarding their neurophysiological substrates. The present experiment aims to explore the role played by cortical areas hMT/V5+, linked to the processing of visual motion, and TPJ, thought to support the functioning of an internal model of gravity, in modulating both effects. Participants were required to perform a standard spatial localization task while the activity of the right hMT/V5+ or TPJ sites was selectively disrupted with an offline continuous theta-burst stimulation (cTBS) protocol, interspersed with control blocks with no stimulation. Eye movements were recorded during all spatial localizations. Results revealed an increase in representational gravity contingent on the disruption of the activity of hMT/V5+ and, conversely, some evidence suggested a larger representational momentum when TPJ was stimulated. Furthermore, stimulation of hMT/V5+ led to a decreased ocular overshoot and to a time-dependent downward drift of gaze location. 
These outcomes suggest that a reciprocal balance between perceived kinematics and anticipated dynamics might modulate these spatial localization responses, compatible with a push–pull mechanism. |
Joost C Dessing; Michael Vesia; Douglas J Crawford The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study Journal Article Frontiers in Behavioral Neuroscience, 7 , pp. 1–13, 2013. @article{Dessing2013, title = {The role of areas MT+/V5 and SPOC in spatial and temporal control of manual interception: An rTMS study}, author = {Joost C Dessing and Michael Vesia and Douglas J Crawford}, doi = {10.3389/fnbeh.2013.00015}, year = {2013}, date = {2013-01-01}, journal = {Frontiers in Behavioral Neuroscience}, volume = {7}, pages = {1--13}, abstract = {Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Manual interception, such as catching or hitting an approaching ball, requires the hand to contact a moving object at the right location and at the right time. Many studies have examined the neural mechanisms underlying the spatial aspects of goal-directed reaching, but the neural basis of the spatial and temporal aspects of manual interception are largely unknown. Here, we used repetitive transcranial magnetic stimulation (rTMS) to investigate the role of the human middle temporal visual motion area (MT+/V5) and superior parieto-occipital cortex (SPOC) in the spatial and temporal control of manual interception. Participants were required to reach-to-intercept a downward moving visual target that followed an unpredictably curved trajectory, presented on a screen in the vertical plane. We found that rTMS to MT+/V5 influenced interceptive timing and positioning, whereas rTMS to SPOC only tended to increase the spatial variance in reach end points for selected target trajectories. These findings are consistent with theories arguing that distinct neural mechanisms contribute to spatial, temporal, and spatiotemporal control of manual interception. |
Christ Devia; Rocio Mayol-Troncoso; Javiera Parrini; Gricel Orellana; Aida Ruiz; Pedro E Maldonado; Jose Ignacio Egaña EEG classification during scene free-viewing for schizophrenia detection Journal Article IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27 (6), pp. 1193–1199, 2019. @article{Devia2019, title = {EEG classification during scene free-viewing for schizophrenia detection}, author = {Christ Devia and Rocio Mayol-Troncoso and Javiera Parrini and Gricel Orellana and Aida Ruiz and Pedro E Maldonado and Jose Ignacio Ega{ñ}a}, doi = {10.1109/TNSRE.2019.2913799}, year = {2019}, date = {2019-01-01}, journal = {IEEE Transactions on Neural Systems and Rehabilitation Engineering}, volume = {27}, number = {6}, pages = {1193--1199}, abstract = {Currently, the diagnosis of schizophrenia is made solely based on interviews and behavioral observations by a trained psychiatrist. Technologies such as electroencephalography (EEG) are used for differential diagnosis and not to support the psychiatrist's positive diagnosis. Here, we show the potential of EEG recordings as biomarkers of the schizophrenia syndrome. We recorded EEG while schizophrenia patients freely viewed natural scenes, and we analyzed the average EEG activity locked to the image onset. We found significant differences between patients and healthy controls in occipital areas approximately 500 ms after image onset. These differences were used to train a classifier to discriminate the schizophrenia patients from the controls. The best classifier had 81% sensitivity for the detection of patients and specificity of 59% for the detection of controls, with an overall accuracy of 71%. These results indicate that EEG signals from a free-viewing paradigm discriminate patients from healthy controls and have the potential to become a tool for the psychiatrist to support the positive diagnosis of schizophrenia.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Currently, the diagnosis of schizophrenia is made solely based on interviews and behavioral observations by a trained psychiatrist. Technologies such as electroencephalography (EEG) are used for differential diagnosis and not to support the psychiatrist's positive diagnosis. Here, we show the potential of EEG recordings as biomarkers of the schizophrenia syndrome. We recorded EEG while schizophrenia patients freely viewed natural scenes, and we analyzed the average EEG activity locked to the image onset. We found significant differences between patients and healthy controls in occipital areas approximately 500 ms after image onset. These differences were used to train a classifier to discriminate the schizophrenia patients from the controls. The best classifier had 81% sensitivity for the detection of patients and specificity of 59% for the detection of controls, with an overall accuracy of 71%. These results indicate that EEG signals from a free-viewing paradigm discriminate patients from healthy controls and have the potential to become a tool for the psychiatrist to support the positive diagnosis of schizophrenia. |
Hélène Devillez; Nathalie Guyader; Anne Guérin-Dugué An eye fixation-related potentials analysis of the P300 potential for fixations onto a target object when exploring natural scenes Journal Article Journal of Vision, 15 (13), pp. 1–31, 2015. @article{Devillez2015, title = {An eye fixation-related potentials analysis of the P300 potential for fixations onto a target object when exploring natural scenes}, author = {Hél{è}ne Devillez and Nathalie Guyader and Anne Guérin-Dugué}, doi = {10.1167/15.13.20}, year = {2015}, date = {2015-01-01}, journal = {Journal of Vision}, volume = {15}, number = {13}, pages = {1--31}, abstract = {The P300 event-related potential has been extensively studied in electroencephalography with classical paradigms that force observers to not move their eyes. This potential is classically used to infer whether a target or a task-relevant stimulus was presented. Few studies have examined this potential through more ecological paradigms where observers were able to move their eyes. In this study, we used an ecological paradigm and an adapted methodology to examine the P300 potential in a visual search task that involves eye movements to actively explore natural scenes and during which eye movements and electroencephalographic activity were coregistered. Averaging the electroencephalography signal time-locked to fixation onsets, a P300 potential was observed for fixations onto the target object but not for other fixations recorded for the same visual search or for fixations recorded during the free viewing without any task. Our approach consists of using control experimental conditions with similar eye movements to ensure that the P300 potential was attributable to the fact that the observer gazed at the target rather than to other factors such as eye movement pattern (the size of the previous saccade) or the "overlap issue" between the potentials elicited by two successive fixations. We also proposed to model the time overlap issue of the potentials elicited by consecutive fixations with various durations. Our results show that the P300 potential can be studied in ecological situations without any constraint on the type of visual exploration, with some precautions in the interpretation of results due to the overlap issue.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The P300 event-related potential has been extensively studied in electroencephalography with classical paradigms that force observers to not move their eyes. This potential is classically used to infer whether a target or a task-relevant stimulus was presented. Few studies have examined this potential through more ecological paradigms where observers were able to move their eyes. In this study, we used an ecological paradigm and an adapted methodology to examine the P300 potential in a visual search task that involves eye movements to actively explore natural scenes and during which eye movements and electroencephalographic activity were coregistered. Averaging the electroencephalography signal time-locked to fixation onsets, a P300 potential was observed for fixations onto the target object but not for other fixations recorded for the same visual search or for fixations recorded during the free viewing without any task. 
Our approach consists of using control experimental conditions with similar eye movements to ensure that the P300 potential was attributable to the fact that the observer gazed at the target rather than to other factors such as eye movement pattern (the size of the previous saccade) or the "overlap issue" between the potentials elicited by two successive fixations. We also proposed to model the time overlap issue of the potentials elicited by consecutive fixations with various durations. Our results show that the P300 potential can be studied in ecological situations without any constraint on the type of visual exploration, with some precautions in the interpretation of results due to the overlap issue. |
Joao C Dias; Paul Sajda; J P Dmochowski; Lucas C Parra EEG precursors of detected and missed targets during free-viewing search Journal Article Journal of Vision, 13 (13), pp. 1–19, 2013. @article{Dias2013, title = {EEG precursors of detected and missed targets during free-viewing search}, author = {Joao C Dias and Paul Sajda and J P Dmochowski and Lucas C Parra}, doi = {10.1167/13.13.13}, year = {2013}, date = {2013-01-01}, journal = {Journal of Vision}, volume = {13}, number = {13}, pages = {1--19}, abstract = {When scanning a scene, the target of our search may be in plain sight and yet remain unperceived. Conversely, at other times the target may be perceived in the periphery prior to fixation. There is ample behavioral and neurophysiological evidence to suggest that in some constrained visual-search tasks, targets are detected prior to fixational eye movements. However, limited human data are available during unconstrained search to determine the time course of detection, the brain areas involved, and the neural correlates of failures to detect a foveated target. Here, we recorded and analyzed electroencephalographic (EEG) activity during free-viewing visual search, varying the task difficulty to compare neural signatures for detected and unreported ("missed") targets. When carefully controlled to remove eye-movement-related potentials, saccade-locked EEG shows that: (a) "Easy" targets may be detected as early as 150 ms prior to foveation, as indicated by a premotor potential associated with a button response; (b) object-discriminating occipital activity emerges during the saccade to target; and (c) success and failures to detect a target are accompanied by a modulation in alpha-band power over fronto-central areas as well as altered saccade dynamics. Taken together, these data suggest that target detection during free viewing can begin prior to and continue during a saccade, with failure or success in reporting a target possibly resulting from inhibition or activation of fronto-central processing areas associated with saccade control.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When scanning a scene, the target of our search may be in plain sight and yet remain unperceived. Conversely, at other times the target may be perceived in the periphery prior to fixation. There is ample behavioral and neurophysiological evidence to suggest that in some constrained visual-search tasks, targets are detected prior to fixational eye movements. However, limited human data are available during unconstrained search to determine the time course of detection, the brain areas involved, and the neural correlates of failures to detect a foveated target. Here, we recorded and analyzed electroencephalographic (EEG) activity during free-viewing visual search, varying the task difficulty to compare neural signatures for detected and unreported ("missed") targets. When carefully controlled to remove eye-movement-related potentials, saccade-locked EEG shows that: (a) "Easy" targets may be detected as early as 150 ms prior to foveation, as indicated by a premotor potential associated with a button response; (b) object-discriminating occipital activity emerges during the saccade to target; and (c) success and failures to detect a target are accompanied by a modulation in alpha-band power over fronto-central areas as well as altered saccade dynamics. 
Taken together, these data suggest that target detection during free viewing can begin prior to and continue during a saccade, with failure or success in reporting a target possibly resulting from inhibition or activation of fronto-central processing areas associated with saccade control. |
Elisa C Dias; Abraham C Van Voorhis; Filipe Braga; Julianne Todd; Javier Lopez-Calderon; Antigona Martinez; Daniel C Javitt Impaired fixation-related theta modulation predicts reduced visual span and guided search deficits in schizophrenia Journal Article Cerebral Cortex, 30 (5), pp. 2823–2833, 2020. @article{Dias2020, title = {Impaired fixation-related theta modulation predicts reduced visual span and guided search deficits in schizophrenia}, author = {Elisa C Dias and Abraham C {Van Voorhis} and Filipe Braga and Julianne Todd and Javier Lopez-Calderon and Antigona Martinez and Daniel C Javitt}, doi = {10.1093/cercor/bhz277}, year = {2020}, date = {2020-01-01}, journal = {Cerebral Cortex}, volume = {30}, number = {5}, pages = {2823--2833}, abstract = {During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation and Guided Visual Search task performance in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed for fixation-related theta phase-reset across conditions, which fully predicted the reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation and Guided Visual Search task performance in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. 
Furthermore, significant deficits were observed for fixation-related theta phase-reset across conditions, which fully predicted the reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia. |
Adele Diederich; Annette Schomburg; Marieke K Van Vugt Fronto-central theta oscillations are related to oscillations in saccadic response times (SRT): An EEG and behavioral data analysis Journal Article PLoS ONE, 9 (11), pp. e112974, 2014. @article{Diederich2014, title = {Fronto-central theta oscillations are related to oscillations in saccadic response times (SRT): An EEG and behavioral data analysis}, author = {Adele Diederich and Annette Schomburg and Marieke K {Van Vugt}}, doi = {10.1371/journal.pone.0112974}, year = {2014}, date = {2014-01-01}, journal = {PLoS ONE}, volume = {9}, number = {11}, pages = {e112974}, abstract = {The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase value becomes highly constant across trials (Schroeder et al., 2008). From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al. 2008). We follow up on a study in which evidence of phase reset was found using a purely behavioral paradigm by also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) across a range from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to perform a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct peak spectral components within a frequency range of 6 to 11 Hz with a mode of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset in 7-Hz brain oscillations in a widespread set of channels. Moreover, there was a significant difference in the average phase at which the visual target stimulus appeared between slow and fast SRT trials. This effect was evident in three different analyses, and occurred primarily in frontal and central electrodes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase value becomes highly constant across trials (Schroeder et al., 2008). From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al. 2008). We follow up on a study in which evidence of phase reset was found using a purely behavioral paradigm by also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) across a range from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to perform a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct peak spectral components within a frequency range of 6 to 11 Hz with a mode of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset in 7-Hz brain oscillations in a widespread set of channels. Moreover, there was a significant difference in the average phase at which the visual target stimulus appeared between slow and fast SRT trials. 
This effect was evident in three different analyses, and occurred primarily in frontal and central electrodes. |
Rosanne M van Diepen; Lee M Miller; Ali Mazaheri; Joy J Geng The role of alpha activity in spatial and feature-based attention. Journal Article eNeuro, 3 (5), pp. 1–11, 2016. @article{Diepen2016, title = {The role of alpha activity in spatial and feature-based attention.}, author = {Rosanne M van Diepen and Lee M Miller and Ali Mazaheri and Joy J Geng}, doi = {10.1523/ENEURO.0204-16.2016}, year = {2016}, date = {2016-01-01}, journal = {eNeuro}, volume = {3}, number = {5}, pages = {1--11}, abstract = {Modulations in alpha oscillations (~10 Hz) are typically studied in the context of anticipating upcoming stimuli. Alpha power decreases in sensory regions processing upcoming targets compared to regions processing distracting input, thereby likely facilitating processing of relevant information while suppressing irrelevant input. In this electroencephalography study using healthy human volunteers, we examined whether modulations in alpha power also occur after the onset of a bilaterally presented target and distractor. Spatial attention was manipulated through spatial cues and feature-based attention through adjusting the color-similarity of distractors to the target. Consistent with previous studies, we found that informative spatial cues induced a relative decrease of pretarget alpha power at occipital electrodes contralateral to the expected target location. Interestingly, this pattern reemerged relatively late (300–750 ms) after stimulus onset, suggesting that lateralized alpha reflects not only preparatory attention, but also ongoing attentive stimulus processing. Uninformative cues (i.e., conveying no information about the spatial location of the target) resulted in an interaction between spatial attention and feature-based attention in post-target alpha lateralization. When the target was paired with a low-similarity distractor, post-target alpha was lateralized (500–900 ms). Crucially, the lateralization was absent when target selection was ambiguous because the distractor was highly similar to the target. Instead, during this condition, midfrontal theta was increased, indicative of reactive conflict resolution. Behaviorally, the degree of alpha lateralization was negatively correlated with the reaction time distraction cost induced by target–distractor similarity. These results suggest a pivotal role for poststimulus alpha lateralization in protecting sensory processing of target information.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Modulations in alpha oscillations (~10 Hz) are typically studied in the context of anticipating upcoming stimuli. Alpha power decreases in sensory regions processing upcoming targets compared to regions processing distracting input, thereby likely facilitating processing of relevant information while suppressing irrelevant input. In this electroencephalography study using healthy human volunteers, we examined whether modulations in alpha power also occur after the onset of a bilaterally presented target and distractor. Spatial attention was manipulated through spatial cues and feature-based attention through adjusting the color-similarity of distractors to the target. Consistent with previous studies, we found that informative spatial cues induced a relative decrease of pretarget alpha power at occipital electrodes contralateral to the expected target location. 
Interestingly, this pattern reemerged relatively late (300–750 ms) after stimulus onset, suggesting that lateralized alpha reflects not only preparatory attention, but also ongoing attentive stimulus processing. Uninformative cues (i.e., conveying no information about the spatial location of the target) resulted in an interaction between spatial attention and feature-based attention in post-target alpha lateralization. When the target was paired with a low-similarity distractor, post-target alpha was lateralized (500–900 ms). Crucially, the lateralization was absent when target selection was ambiguous because the distractor was highly similar to the target. Instead, during this condition, midfrontal theta was increased, indicative of reactive conflict resolution. Behaviorally, the degree of alpha lateralization was negatively correlated with the reaction time distraction cost induced by target–distractor similarity. These results suggest a pivotal role for poststimulus alpha lateralization in protecting sensory processing of target information. |
Nadine Dijkstra; Luca Ambrogioni; Diego Vidaurre; Marcel van Gerven Neural dynamics of perceptual inference and its reversal during imagery Journal Article eLife, 9 , pp. 1–19, 2020. @article{Dijkstra2020, title = {Neural dynamics of perceptual inference and its reversal during imagery}, author = {Nadine Dijkstra and Luca Ambrogioni and Diego Vidaurre and Marcel van Gerven}, doi = {10.7554/eLife.53588}, year = {2020}, date = {2020-01-01}, journal = {eLife}, volume = {9}, pages = {1--19}, abstract = {After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.}, keywords = {}, pubstate = {published}, tppubtype = {article} } After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery. |
Troy Dildine; Elizabeth Necka; Lauren Yvette Atlas Confidence in subjective pain is predicted by reaction time during decision making Journal Article Scientific Reports, 10 , pp. 1–14, 2020. @article{Dildine2020, title = {Confidence in subjective pain is predicted by reaction time during decision making}, author = {Troy Dildine and Elizabeth Necka and Lauren Yvette Atlas}, doi = {10.31234/osf.io/7cnha}, year = {2020}, date = {2020-01-01}, journal = {Scientific Reports}, volume = {10}, pages = {1--14}, publisher = {Nature Publishing Group UK}, abstract = {Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals' association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals' association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain. |
Mithun Diwakar; Deborah L Harrington; Jun Maruta; Jamshid Ghajar; Fady El-Gabalawy; Laura Muzzatti; Maurizio Corbetta; Ming-Xiong X Huang; Roland R Lee Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury Journal Article NeuroImage: Clinical, 8 , pp. 210–223, 2015. @article{Diwakar2015, title = {Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury}, author = {Mithun Diwakar and Deborah L Harrington and Jun Maruta and Jamshid Ghajar and Fady El-Gabalawy and Laura Muzzatti and Maurizio Corbetta and Ming-Xiong X Huang and Roland R Lee}, doi = {10.1016/j.nicl.2015.04.011}, year = {2015}, date = {2015-01-01}, journal = {NeuroImage: Clinical}, volume = {8}, pages = {210--223}, publisher = {Elsevier B.V.}, abstract = {A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontaloral areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. 
MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontaloral areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries. |
Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance Journal Article NeuroImage, 226 , pp. 1–19, 2021. @article{DomicSiede2021, title = {Theta activity from frontopolar cortex, mid-cingulate cortex and anterior cingulate cortex shows different roles in cognitive planning performance}, author = {Marcos Domic-Siede and Martín Irani and Joaquín Valdés and Marcela Perrone-Bertolotti and Tomás Ossandón}, doi = {10.1016/j.neuroimage.2020.117557}, year = {2021}, date = {2021-01-01}, journal = {NeuroImage}, volume = {226}, pages = {1--19}, publisher = {Elsevier Inc.}, abstract = {Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FM$\theta$) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) was correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) was correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation.
In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans. |
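As a rough illustration of what a sustained frontal-midline theta increase means operationally, the sketch below estimates theta-band (roughly 4–8 Hz) power in successive windows of a single midfrontal channel. It is a simplified, assumed pipeline (placeholder data, an assumed 500 Hz sampling rate, a nominal Fz channel, Welch spectra), not the authors' preprocessing or source analysis.

# Illustrative only: placeholder data, not the study's preprocessing or source analysis.
import numpy as np
from scipy.signal import welch

fs = 500.0                                  # assumed EEG sampling rate (Hz)
fz = np.random.randn(int(60 * fs))          # placeholder: 60 s of a midfrontal channel (e.g., Fz)

win = int(5 * fs)                           # 5 s windows across the planning period
theta_power = []
for start in range(0, fz.size - win + 1, win):
    f, pxx = welch(fz[start:start + win], fs=fs, nperseg=int(2 * fs))
    band = (f >= 4) & (f <= 8)              # theta band
    theta_power.append(pxx[band].mean())

print(np.round(theta_power, 3))             # a sustained increase would show as rising values over windows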
Peter H Donaldson; Caroline T Gurvich; Joanne Fielding; Peter G Enticott Exploring associations between gaze patterns and putative human mirror neuron system activity Journal Article Frontiers in Human Neuroscience, 9 , pp. 1–10, 2015. @article{Donaldson2015, title = {Exploring associations between gaze patterns and putative human mirror neuron system activity}, author = {Peter H Donaldson and Caroline T Gurvich and Joanne Fielding and Peter G Enticott}, doi = {10.3389/fnhum.2015.00523}, year = {2015}, date = {2015-01-01}, journal = {Frontiers in Human Neuroscience}, volume = {9}, pages = {1--10}, abstract = {The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18–40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18–40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern. |
Joram van Driel; Eduard Ort; Johannes J Fahrenfort; Christian N L Olivers Beta and theta oscillations differentially support free versus forced control over multiple-target search Journal Article Journal of Neuroscience, 39 (9), pp. 1733–1743, 2019. @article{Driel2019, title = {Beta and theta oscillations differentially support free versus forced control over multiple-target search}, author = {Joram van Driel and Eduard Ort and Johannes J Fahrenfort and Christian N L Olivers}, doi = {10.1523/JNEUROSCI.2547-18.2018}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {9}, pages = {1733--1743}, abstract = {Many important situations require human observers to simultaneously search for more than one object. Despite a long history of research into visual search, the behavioral and neural mechanisms associated with multiple-target search are poorly understood. Here we test the novel theory that the efficiency of looking for multiple targets critically depends on the mode of cognitive control the environment affords to the observer. We used an innovative combination of electroencephalogram (EEG) and eye tracking while participants searched for two targets, within two different contexts: either both targets were present in the search display and observers were free to prioritize either one of them, thus enabling proactive control over selection; or only one of the two targets would be present in each search display, which requires reactive control to reconfigure selection when the wrong target has been prioritized. During proactive control, both univariate and multivariate signals of beta-band (15–35 Hz) power suppression before display onset predicted switches between target selections. This signal originated over midfrontal and sensorimotor regions and has previously been associated with endogenous state changes. In contrast, imposed target selections requiring reactive control elicited prefrontal power enhancements in the delta/theta band (2– 8 Hz), but only after display onset. This signal predicted individual differences in associated oculomotor switch costs, reflecting reactive reconfiguration of target selection. The results provide compelling evidence that multiple target representations are differentially prioritized during visual search, and for the first time reveal distinct neural mechanisms underlying proactive and reactive control over multiple-target search.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many important situations require human observers to simultaneously search for more than one object. Despite a long history of research into visual search, the behavioral and neural mechanisms associated with multiple-target search are poorly understood. Here we test the novel theory that the efficiency of looking for multiple targets critically depends on the mode of cognitive control the environment affords to the observer. We used an innovative combination of electroencephalogram (EEG) and eye tracking while participants searched for two targets, within two different contexts: either both targets were present in the search display and observers were free to prioritize either one of them, thus enabling proactive control over selection; or only one of the two targets would be present in each search display, which requires reactive control to reconfigure selection when the wrong target has been prioritized. 
During proactive control, both univariate and multivariate signals of beta-band (15–35 Hz) power suppression before display onset predicted switches between target selections. This signal originated over midfrontal and sensorimotor regions and has previously been associated with endogenous state changes. In contrast, imposed target selections requiring reactive control elicited prefrontal power enhancements in the delta/theta band (2–8 Hz), but only after display onset. This signal predicted individual differences in associated oculomotor switch costs, reflecting reactive reconfiguration of target selection. The results provide compelling evidence that multiple target representations are differentially prioritized during visual search, and for the first time reveal distinct neural mechanisms underlying proactive and reactive control over multiple-target search. |
Linda Drijvers; Mircea van der Plas; Asli Özyürek; Ole Jensen Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise Journal Article NeuroImage, 194 , pp. 55–67, 2019. @article{Drijvers2019a, title = {Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise}, author = {Linda Drijvers and Mircea van der Plas and Asli Özyürek and Ole Jensen}, doi = {10.1016/j.neuroimage.2019.03.032}, year = {2019}, date = {2019-01-01}, journal = {NeuroImage}, volume = {194}, pages = {55--67}, abstract = {Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. 
Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech. |
Linda Drijvers; Ole Jensen; Eelke Spaak Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Journal Article Human Brain Mapping, pp. 1–15, 2020. @article{Drijvers2020, title = {Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information}, author = {Linda Drijvers and Ole Jensen and Eelke Spaak}, doi = {10.1002/hbm.25282}, year = {2020}, date = {2020-01-01}, journal = {Human Brain Mapping}, pages = {1--15}, abstract = {During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration.
The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context. |
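The logic behind the intermodulation peak can be illustrated with a small simulation: if two frequency-tagged drives combine purely additively, spectral power appears only at the tagged frequencies, whereas a multiplicative (nonlinear) interaction also produces power at the difference frequency, here 68 − 61 = 7 Hz. The sketch below is illustrative only; the sampling rate, duration, noise level, and interaction strength are assumptions, and it is not the study's MEG analysis.

# Minimal simulation of an intermodulation peak under additive vs. multiplicative mixing.
import numpy as np
from scipy.signal import welch

fs = 1000.0                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of simulated signal
f_aud, f_vis = 61.0, 68.0        # auditory and visual tagging frequencies (Hz)

aud = np.sin(2 * np.pi * f_aud * t)
vis = np.sin(2 * np.pi * f_vis * t)
noise = 0.5 * np.random.randn(t.size)

linear = aud + vis + noise                        # purely additive mixing
nonlinear = aud + vis + 0.5 * aud * vis + noise   # multiplicative interaction term (assumed strength)

for name, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    f, pxx = welch(sig, fs=fs, nperseg=4096)
    # power in the bin nearest the intermodulation frequency f_vis - f_aud = 7 Hz
    p_im = pxx[np.argmin(np.abs(f - (f_vis - f_aud)))]
    print(f"{name}: power near 7 Hz = {p_im:.4f}")

Running this, the 7 Hz bin shows clearly elevated power only in the nonlinear case, which mirrors why the intermodulation peak is treated as a signature of nonlinear audiovisual integration.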
Stefan Dürschmid; Andre Maric; Marcel S Kehl; Robert T Knight; Hermann Hinrichs; Hans-Jochen Heinze Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices Journal Article Journal of Neuroscience, 2020. @article{Duerschmid2020, title = {Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices}, author = {Stefan Dürschmid and Andre Maric and Marcel S Kehl and Robert T Knight and Hermann Hinrichs and Hans-Jochen Heinze}, doi = {10.1523/jneurosci.1196-20.2020}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neuroscience}, abstract = {Impulsive decisions arise from preferring smaller but sooner rewards compared to larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition participants had to decide for themselves but pretended to decide for their best friend in a second prosocial condition, which required perspective taking. Hence, conditions varied in the value for themselves versus that pretending to choose for another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information and differences in eye tracking between conditions predicted differences in discounting. High frequency activity (HFA: 175-250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high value decisions for oneself but not the friend. Collectively the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Impulsive decisions arise from preferring smaller but sooner rewards over larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency-specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition, participants decided for themselves; in a second, prosocial condition, which required perspective taking, they pretended to decide for their best friend. Hence, conditions varied in the value of choices made for oneself versus choices made while pretending to decide for another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information, and differences in eye tracking between conditions predicted differences in discounting. High-frequency activity (HFA: 175-250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high-value decisions for oneself but not the friend.
Collectively the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration. |
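For readers unfamiliar with temporal discounting, a standard way to formalize the trade-off between smaller-sooner and larger-later rewards is a hyperbolic subjective-value function, V = A / (1 + kD), where A is the reward amount, D the delay, and k an individual discount rate (larger k means steeper discounting and more impulsive choices). The abstract does not state which model the authors fit, so the sketch below is a generic illustration with made-up amounts, delays, and k values.

# Illustrative only: the paper does not report this model or these numbers.
# V = A / (1 + k * D) is a common hyperbolic discounting function in the
# intertemporal-choice literature, used here purely to show the idea.

def subjective_value(amount: float, delay_days: float, k: float) -> float:
    """Hyperbolically discounted value of `amount` received after `delay_days`."""
    return amount / (1.0 + k * delay_days)

smaller_sooner = (20.0, 0.0)    # hypothetical: 20 units now
larger_later = (50.0, 30.0)     # hypothetical: 50 units in 30 days

for k in (0.01, 0.1):           # low vs. high discount rate (higher = more impulsive)
    v_ss = subjective_value(*smaller_sooner, k)
    v_ll = subjective_value(*larger_later, k)
    choice = "larger-later" if v_ll > v_ss else "smaller-sooner (impulsive)"
    print(f"k={k}: V_ss={v_ss:.1f}, V_ll={v_ll:.1f} -> choose {choice}")

With k = 0.01 the delayed 50 units retain a value of about 38.5 and beat the immediate 20; with k = 0.1 the delayed option drops to 12.5 and the smaller-sooner reward wins, which is the pattern described above as impulsive choice.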