EyeLink Usability / Applications Publications
All EyeLink usability and applications research publications up until 2023 (with some earlier than 2024) are listed below. You can search the publications using keywords such as Driving, Sport, Workload, etc. You can also search by individual author name. If we have missed any EyeLink usability or applications articles, please email us!
2021
Hame Park; Christoph Kayser The neurophysiological basis of the trial-wise and cumulative ventriloquism aftereffects Journal Article In: Journal of Neuroscience, vol. 41, no. 5, pp. 1068–1079, 2021. Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes.
Mohsen Parto Dezfouli; Saeideh Davoudi; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson Prefrontal lesions disrupt oscillatory signatures of spatiotemporal integration in working memory Journal Article In: Cortex, vol. 138, pp. 113–126, 2021. How does the human brain integrate spatial and temporal information into unified mnemonic representations? Building on classic theories of feature binding, we first define the oscillatory signatures of integrating 'where' and 'when' information in working memory (WM) and then investigate the role of prefrontal cortex (PFC) in spatiotemporal integration. Fourteen individuals with lateral PFC damage and 20 healthy controls completed a visuospatial WM task while electroencephalography (EEG) was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. We defined EEG signatures of spatiotemporal integration by comparing the maintenance of two possible where-when configurations: the first shape presented on top and the reverse. Frontal delta-theta (δθ; 2–7 Hz) activity, frontal-posterior δθ functional connectivity, lateral posterior event-related potentials, and mesial posterior alpha phase-to-gamma amplitude coupling dissociated the two configurations in controls. WM performance and frontal and mesial posterior signatures of spatiotemporal integration were diminished in PFC lesion patients, whereas lateral posterior signatures were intact. These findings reveal both PFC-dependent and PFC-independent substrates of spatiotemporal integration and link optimal performance to PFC.
Jairo Perez-Osorio; Abdulaziz Abubshait; Agnieszka Wykowska In: Journal of Cognitive Neuroscience, vol. 34, no. 1, pp. 108–126, 2021. Understanding others' nonverbal behavior is essential for social interaction, as it allows, among others, to infer mental states. While gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive conflict markers during collaborative settings. Here, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head-cues would induce more errors (Study 1), would be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitude in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvature in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 event-related potential (ERP) of the EEG signals as well as higher Event-Related Spectral Perturbation (ERSP) amplitudes (Study 3) for incongruent trials compared to congruent trials. Our findings reveal that behavioral, ocular and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
Thomas Pfeffer; Adrian Ponce-Alvarez; Konstantinos Tsetsos; Thomas Meindertsma; Christoffer Julius Gahnström; Ruud Lucas Brink; Guido Nolte; Andreas Karl Engel; Gustavo Deco; Tobias Hinrich Donner Circuit mechanisms for the chemical modulation of cortex-wide network interactions and behavioral variability Journal Article In: Science Advances, vol. 7, no. 29, pp. eabf5620, 2021. Influential theories postulate distinct roles of catecholamines and acetylcholine in cognition and behavior. However, previous physiological work reported similar effects of these neuromodulators on the response properties (specifically, the gain) of individual cortical neurons. Here, we show a double dissociation between the effects of catecholamines and acetylcholine at the level of large-scale interactions between cortical areas in humans. A pharmacological boost of catecholamine levels increased cortex-wide interactions during a visual task, but not rest. An acetylcholine boost decreased interactions during rest, but not task. Cortical circuit modeling explained this dissociation by differential changes in two circuit properties: The local excitation-inhibition balance (more strongly increased by catecholamines) and intracortical transmission (more strongly reduced by acetylcholine). The inferred catecholaminergic mechanism also predicted noisier decision-making, which we confirmed for both perceptual and value-based choice behavior. Our work highlights specific circuit mechanisms for shaping cortical network interactions and behavioral variability by key neuromodulatory systems.
Ella Podvalny; Leana E. King; Biyu J. He Spectral signature and behavioral consequence of spontaneous shifts of pupil-linked arousal in human Journal Article In: eLife, vol. 10, pp. e68265, 2021. Arousal levels perpetually rise and fall spontaneously. How markers of arousal—pupil size and frequency content of brain activity—relate to each other and influence behavior in humans is poorly understood. We simultaneously monitored magnetoencephalography and pupil in healthy volunteers at rest and during a visual perceptual decision-making task. Spontaneously varying pupil size correlates with power of brain activity in most frequency bands across large-scale resting-state cortical networks. Pupil size recorded at prestimulus baseline correlates with subsequent shifts in detection bias (c) and sensitivity (d'). When dissociated from pupil-linked state, prestimulus spectral power of resting state networks still predicts perceptual behavior. Fast spontaneous pupil constriction and dilation correlate with large-scale brain activity as well but not perceptual behavior. Our results illuminate the relation between central and peripheral arousal markers and their respective roles in human perceptual decision-making.
Hamed Rahimi-Nasrabadi; Jianzhong Jin; Reece Mazade; Carmen Pons; Sohrab Najafian; Jose-Manuel Alonso Image luminance changes contrast sensitivity in visual cortex Journal Article In: Cell Reports, vol. 34, no. 5, pp. 1–21, 2021. Accurate measures of contrast sensitivity are important for evaluating visual disease progression and for navigation safety. Previous measures suggested that cortical contrast sensitivity was constant across widely different luminance ranges experienced indoors and outdoors. Against this notion, here, we show that luminance range changes contrast sensitivity in both cat and human cortex, and the changes are different for dark and light stimuli. As luminance range increases, contrast sensitivity increases more within cortical pathways signaling lights than those signaling darks. Conversely, when the luminance range is constant, light-dark differences in contrast sensitivity remain relatively constant even if background luminance changes. We show that a Naka-Rushton function modified to include luminance range and light-dark polarity accurately replicates both the statistics of light-dark features in natural scenes and the cortical responses to multiple combinations of contrast and luminance. We conclude that differences in light-dark contrast increase with luminance range and are largest in bright environments.
Isabelle A. Rosenthal; Shridhar R. Singh; Katherine L. Hermann; Dimitrios Pantazis; Bevil R. Conway Color space geometry uncovered with magnetoencephalography Journal Article In: Current Biology, vol. 31, no. 3, pp. 515–526, 2021. The geometry that describes the relationship among colors, and the neural mechanisms that support color vision, are unsettled. Here, we use multivariate analyses of measurements of brain activity obtained with magnetoencephalography to reverse-engineer a geometry of the neural representation of color space. The analyses depend upon determining similarity relationships among the spatial patterns of neural responses to different colors and assessing how these relationships change in time. We evaluate the approach by relating the results to universal patterns in color naming. Two prominent patterns of color naming could be accounted for by the decoding results: the greater precision in naming warm colors compared to cool colors evident by an interaction of hue and lightness, and the preeminence among colors of reddish hues. Additional experiments showed that classifiers trained on responses to color words could decode color from data obtained using colored stimuli, but only at relatively long delays after stimulus onset. These results provide evidence that perceptual representations can give rise to semantic representations, but not the reverse. Taken together, the results uncover a dynamic geometry that provides neural correlates for color appearance and generates new hypotheses about the structure of color space.
Anna M. Monk; Daniel N. Barry; Vladimir Litvak; Gareth R. Barnes; Eleanor A. Maguire Watching movies unfold, a frame-by-frame analysis of the associated neural dynamics Journal Article In: eNeuro, vol. 8, no. 4, pp. 1–12, 2021. Our lives unfold as sequences of events. We experience these events as seamless, although they are composed of individual images captured in between the interruptions imposed by eye blinks and saccades. Events typically involve visual imagery from the real world (scenes), and the hippocampus is frequently engaged in this context. It is unclear, however, whether the hippocampus would be similarly responsive to unfolding events that involve abstract imagery. Addressing this issue could provide insights into the nature of its contribution to event processing, with relevance for theories of hippocampal function. Consequently, during magnetoencephalography (MEG), we had female and male humans watch highly matched unfolding movie events composed of either scene image frames that reflected the real world, or frames depicting abstract patterns. We examined the evoked neuronal responses to each image frame along the time course of the movie events. Only one difference between the two conditions was evident, and that was during the viewing of the first image frame of events, detectable across frontotemporal sensors. Further probing of this difference using source reconstruction revealed greater engagement of a set of brain regions across parietal, frontal, premotor, and cerebellar cortices, with the largest change in broadband (1–30 Hz) power in the hippocampus during scene-based movie events. Hippocampal engagement during the first image frame of scene-based events could reflect its role in registering a recognizable context perhaps based on templates or schemas. The hippocampus, therefore, may help to set the scene for events very early on.
Anna M. Monk; Marshall A. Dalton; Gareth R. Barnes; Eleanor A. Maguire The role of hippocampal-ventromedial prefrontal cortex neural dynamics in building mental representations Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 1, pp. 89–103, 2021. The hippocampus and ventromedial prefrontal cortex (vmPFC) play key roles in numerous cognitive domains including mind-wandering, episodic memory and imagining the future. Perspectives differ on precisely how they support these diverse functions, but there is general agreement that it involves constructing representations comprised of numerous elements. Visual scenes have been deployed extensively in cognitive neuroscience because they are paradigmatic multi-element stimuli. However, it remains unclear whether scenes, rather than other types of multi-feature stimuli, preferentially engage hippocampus and vmPFC. Here we leveraged the high temporal resolution of magnetoencephalography to test participants as they gradually built scene imagery from three successive auditorily-presented object descriptions and an imagined 3D space. This was contrasted with constructing mental images of non-scene arrays that were composed of three objects and an imagined 2D space. The scene and array stimuli were, therefore, highly matched, and this paradigm permitted a closer examination of step-by-step mental construction than has been undertaken previously. We observed modulation of theta power in our two regions of interest (anterior hippocampus during the initial stage, and vmPFC during the first two stages) of scene relative to array construction. Moreover, the scene-specific anterior hippocampal activity during the first construction stage was driven by the vmPFC, with mutual entrainment between the two brain regions thereafter. These findings suggest that hippocampal and vmPFC neural activity is especially tuned to scene representations during the earliest stage of their formation, with implications for theories of how these brain areas enable cognitive functions such as episodic memory.
Peter R. Murphy; Niklas Wilming; Diana C. Hernandez-Bocanegra; Genis Prat-Ortega; Tobias H. Donner Adaptive circuit dynamics across human cortex during evidence accumulation in changing environments Journal Article In: Nature Neuroscience, vol. 24, no. 7, pp. 987–997, 2021. Many decisions under uncertainty entail the temporal accumulation of evidence that informs about the state of the environment. When environments are subject to hidden changes in their state, maximizing accuracy and reward requires non-linear accumulation of evidence. How this adaptive, non-linear computation is realized in the brain is unknown. We analyzed human behavior and cortical population activity (measured with magnetoencephalography) recorded during visual evidence accumulation in a changing environment. Behavior and decision-related activity in cortical regions involved in action planning exhibited hallmarks of adaptive evidence accumulation, which could also be implemented by a recurrent cortical microcircuit. Decision dynamics in action-encoding parietal and frontal regions were mirrored in a frequency-specific modulation of the state of the visual cortex that depended on pupil-linked arousal and the expected probability of change. These findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related feedback to the sensory cortex.
Gaëlle Nicolas; Eric Castet; Adrien Rabier; Emmanuelle Kristensen; Michel Dojat; Anne Guérin-Dugué Neural correlates of intra-saccadic motion perception Journal Article In: Journal of Vision, vol. 21, no. 11, pp. 1–24, 2021. Retinal motion of the visual scene is not consciously perceived during ocular saccades in normal everyday conditions. It has been suggested that extra-retinal signals actively suppress intra-saccadic motion perception to preserve stable perception of the visual world. However, using stimuli optimized to preferentially activate the M-pathway, Castet and Masson (2000) demonstrated that motion can be perceived during a saccade. Based on this psychophysical paradigm, we used electroencephalography and eye-tracking recordings to investigate the neural correlates related to the conscious perception of intra-saccadic motion. We demonstrated the effective involvement during saccades of the cortical areas V1-V2 and MT-V5, which convey motion information along the M-pathway. We also showed that individual motion perception was related to retinal temporal frequency.
J. A. Nij Bijvank; E. M. M. Strijbis; I. M. Nauta; S. D. Kulik; L. J. Balk; C. J. Stam; A. Hillebrand; J. J. G. Geurts; B. M. J. Uitdehaag; L. J. Rijn; A. Petzold; M. M. Schoonheim Impaired saccadic eye movements in multiple sclerosis are related to altered functional connectivity of the oculomotor brain network Journal Article In: NeuroImage: Clinical, vol. 32, pp. 102848, 2021. Background: Impaired eye movements in multiple sclerosis (MS) are common and could represent a non-invasive and accurate measure of (dys)functioning of interconnected areas within the complex brain network. The aim of this study was to test whether altered saccadic eye movements are related to changes in functional connectivity (FC) in patients with MS. Methods: Cross-sectional eye movement (pro-saccades and anti-saccades) and magnetoencephalography (MEG) data from the Amsterdam MS cohort were included from 176 MS patients and 33 healthy controls. FC was calculated between all regions of the Brainnetome atlas in six conventional frequency bands. Cognitive function and disability were evaluated by previously validated measures. The relationships between saccadic parameters and both FC and clinical scores in MS patients were analysed using multivariate linear regression models. Results: In MS, pro- and anti-saccades were abnormal compared to healthy controls. A relationship of saccadic eye movements was found with FC of the oculomotor network, which was stronger for regional than global FC. In general, abnormal eye movements were related to higher delta and theta FC but lower beta FC. Strongest associations were found for pro-saccadic latency and FC of the precuneus (beta band β = -0.23
Hamideh Norouzi; Niloofar Tavakoli; Mohammad Reza Daliri In: International Journal of Psychophysiology, vol. 166, pp. 61–70, 2021. Working memory (WM) can be considered as a limited-capacity system which is capable of saving information temporarily with the aim of processing. The aim of the present study was to establish whether eccentricity representation in WM could be decoded from electroencephalography (EEG) alpha-band oscillation in parietal cortex during delay-period while performing a memory-guided saccade (MGS) task. In this regard, we recorded EEG and eye-tracking signals of 17 healthy volunteers in a variant version of the MGS task. We designed the modified version of the MGS task for the first time to investigate the effect of locating stimuli in two different positions, in a near (6°) eccentricity and far (12°) eccentricity, on saccade error as a behavioral parameter. Another goal of the study was to discern whether or not varying the stimuli loci can alter behavioral and electroencephalographic data while performing the variant version of the MGS task. Our findings demonstrate that saccade error for the near fixation condition is significantly smaller than the far from fixation condition. We observed an increase in alpha power in parietal lobe in near vs far conditions. In addition, the results indicate that the increase in alpha (8–12 Hz) power from fixation to memory was negatively correlated with saccade error. The novel approach of using simultaneous EEG/eye-tracking recording in the modified MGS task provided both behavioral and electroencephalographic analyses for oscillatory activity during this new version of the MGS task.
John Orczyk; Charles E. Schroeder; Ilana Y. Abeles; Manuel Gomez-Ramirez; Pamela D. Butler; Yoshinao Kajikawa Comparison of scalp ERP to faces in macaques and humans Journal Article In: Frontiers in Systems Neuroscience, vol. 15, pp. 667611, 2021. Face recognition is an essential activity of social living, common to many primate species. Underlying processes in the brain have been investigated using various techniques and compared between species. Functional imaging studies have shown face-selective cortical regions and their degree of correspondence across species. However, the temporal dynamics of face processing, particularly processing speed, are likely different between them. Across sensory modalities, activation of primary sensory cortices in macaque monkeys occurs at about 3/5 the latency of corresponding activation in humans, though this human-simian difference may diminish or disappear in higher cortical regions. We recorded scalp event-related potentials (ERPs) to presentation of faces in macaques and estimated the peak latency of ERP components. Comparisons of latencies between macaques (112 ms) and humans (192 ms) suggested that the 3:5 ratio could be preserved in higher cognitive regions of face processing between those species.
Anastasia O. Ovchinnikova; Anatoly N. Vasilyev; Ivan P. Zubarev; Bogdan L. Kozyrskiy; Sergei L. Shishkin MEG-based detection of voluntary eye fixations used to control a computer Journal Article In: Frontiers in Neuroscience, vol. 15, pp. 619591, 2021. Gaze-based input is an efficient way of hand-free human-computer interaction. However, it suffers from the inability of gaze-based interfaces to discriminate voluntary and spontaneous gaze behaviors, which are overtly similar. Here, we demonstrate that voluntary eye fixations can be discriminated from spontaneous ones using short segments of magnetoencephalography (MEG) data measured immediately after the fixation onset. Recently proposed convolutional neural networks (CNNs), linear finite impulse response filters CNN (LF-CNN) and vector autoregressive CNN (VAR-CNN), were applied for binary classification of the MEG signals related to spontaneous and voluntary eye fixations collected in healthy participants (n = 25) who performed a game-like task by fixating on targets voluntarily for 500 ms or longer. Voluntary fixations were identified as those followed by a fixation in a special confirmatory area. Spontaneous vs. voluntary fixation-related single-trial 700 ms MEG segments were non-randomly classified in the majority of participants, with the group average cross-validated ROC AUC of 0.66 ± 0.07 for LF-CNN and 0.67 ± 0.07 for VAR-CNN (M ± SD). When the time interval, from which the MEG data were taken, was extended beyond the onset of the visual feedback, the group average classification performance increased up to 0.91. Analysis of spatial patterns contributing to classification did not reveal signs of significant eye movement impact on the classification results. We conclude that the classification of MEG signals has a certain potential to support gaze-based interfaces by avoiding false responses to spontaneous eye fixations on a single-trial basis. Current results for intention detection prior to gaze-based interface's feedback, however, are not sufficient for online single-trial eye fixation classification using MEG data alone, and further work is needed to find out if it could be used in practical applications.
Fosca Al Roumi; Sébastien Marti; Liping Wang; Marie Amalric; Stanislas Dehaene Mental compression of spatial sequences in human working memory using numerical and geometrical primitives Journal Article In: Neuron, vol. 109, no. 16, pp. 2627–2639, 2021. How does the human brain store sequences of spatial locations? We propose that each sequence is internally compressed using an abstract, language-like code that captures its numerical and geometrical regularities. We exposed participants to spatial sequences of fixed length but variable regularity while their brain activity was recorded using magneto-encephalography. Using multivariate decoders, each successive location could be decoded from brain signals, and upcoming locations were anticipated prior to their actual onset. Crucially, sequences with lower complexity, defined as the minimal description length provided by the formal language, led to lower error rates and to increased anticipations. Furthermore, neural codes specific to the numerical and geometrical primitives of the postulated language could be detected, both in isolation and within the sequences. These results suggest that the human brain detects sequence regularities at multiple nested levels and uses them to compress long sequences in working memory.
Thomas Andrillon; Angus Burns; Teigane Mackay; Jennifer Windt; Naotsugu Tsuchiya Predicting lapses of attention with sleep-like slow waves Journal Article In: Nature Communications, vol. 12, pp. 3657, 2021. Attentional lapses occur commonly and are associated with mind wandering, where focus is turned to thoughts unrelated to ongoing tasks and environmental demands, or mind blanking, where the stream of consciousness itself comes to a halt. To understand the neural mechanisms underlying attentional lapses, we studied the behaviour, subjective experience and neural activity of healthy participants performing a task. Random interruptions prompted participants to indicate their mental states as task-focused, mind-wandering or mind-blanking. Using high-density electroencephalography, we report here that spatially and temporally localized slow waves, a pattern of neural activity characteristic of the transition toward sleep, accompany behavioural markers of lapses and precede reports of mind wandering and mind blanking. The location of slow waves could distinguish between sluggish and impulsive behaviours, and between mind wandering and mind blanking. Our results suggest attentional lapses share a common physiological origin: the emergence of local sleep-like activity within the awake brain.
M. Antúnez; S. Mancini; J. A. Hernández-Cabrera; L. J. Hoversten; H. A. Barber; M. Carreiras Cross-linguistic semantic preview benefit in Basque-Spanish bilingual readers: Evidence from fixation-related potentials Journal Article In: Brain and Language, vol. 214, pp. 104905, 2021. During reading, we can process and integrate information from words located in the parafoveal region. However, whether we extract and process the meaning of parafoveal words is still under debate. Here, we obtained fixation-related potentials in a Basque-Spanish bilingual sample during a Spanish reading task. By using the boundary paradigm, we presented different parafoveal previews that could be either Basque non-cognate translations or unrelated Basque words. We show, for the first time, cross-linguistic semantic preview benefit effects in alphabetic languages, providing novel evidence of modulations in the N400 component. Our findings suggest that the meaning of parafoveal words is processed and integrated during reading and that such meaning is activated and shared across languages in bilingual readers.
Damiano Azzalini; Anne Buot; Stefano Palminteri; Catherine Tallon-Baudry Responses to heartbeats in ventromedial prefrontal cortex contribute to subjective preference-based decisions Journal Article In: Journal of Neuroscience, vol. 41, no. 23, pp. 5102–5114, 2021. Forrest Gump or The Matrix? Preference-based decisions are subjective and entail self-reflection. However, these self-related features are unaccounted for by known neural mechanisms of valuation and choice. Self-related processes have been linked to a basic interoceptive biological mechanism, the neural monitoring of heartbeats, in particular in ventromedial prefrontal cortex (vmPFC), a region also involved in value encoding. We thus hypothesized a functional coupling between the neural monitoring of heartbeats and the precision of value encoding in vmPFC. Human participants of both sexes were presented with pairs of movie titles. They indicated either which movie they preferred or performed a control objective visual discrimination that did not require self-reflection. Using magnetoencephalography, we measured heartbeat-evoked responses (HERs) before option presentation and confirmed that HERs in vmPFC were larger when preparing for the subjective, self-related task. We retrieved the expected cortical value network during choice with time-resolved statistical modeling. Crucially, we show that larger HERs before option presentation are followed by stronger value encoding during choice in vmPFC. This effect is independent of overall vmPFC baseline activity. The neural interaction between HERs and value encoding predicted preference-based choice consistency over time, accounting for both interindividual differences and trial-to-trial fluctuations within individuals. Neither cardiac activity nor arousal fluctuations could account for any of the effects. HERs did not interact with the encoding of perceptual evidence in the discrimination task. Our results show that the self-reflection underlying preference-based decisions involves HERs, and that HER integration to subjective value encoding in vmPFC contributes to preference stability.
Shlomit Beker; John J. Foxe; Sophie Molholm Oscillatory entrainment mechanisms and anticipatory predictive processes in children with autism spectrum disorder Journal Article In: Journal of Neurophysiology, vol. 126, no. 5, pp. 1783–1798, 2021. Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to nonpredictable events. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with autism spectrum disorder (ASD) may have deficits in generating predictions, and as such, a candidate neural mechanism may be failure to adequately entrain neural activity to repetitive environmental patterns, to facilitate temporal predictions. We tested this hypothesis by interrogating temporal predictions and rhythmic entrainment using behavioral and electrophysiological approaches. We recorded high-density electroencephalography in children with ASD and typically developing (TD) age- and IQ-matched controls, while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues or was not preceded by any cue. Both ASD and control groups presented comparable behavioral facilitation in response to the Cue versus No-Cue condition, challenging the hypothesis that children with ASD have deficits in generating temporal predictions. Analyses of the electrophysiological data, in contrast, revealed significantly reduced neural entrainment to the visual cues and altered anticipatory processes in the ASD group. This was the case despite intact stimulus-evoked visual responses. These results support intact behavioral temporal prediction in response to a cue in ASD, in the face of altered neural entrainment and anticipatory processes.
Chama Belkhiria; Vsevolod Peysakhovich EOG metrics for cognitive workload detection Journal Article In: Procedia Computer Science, vol. 192, pp. 1875–1884, 2021. @article{Belkhiria2021, Increasing workload is a central notion in human factors research, as it can decrease performance and lead to accidents. Thus, it is crucial to understand the impact of different internal operator factors, including eye movements, memory, and audio-visual integration. Here, we explored the relationship between cognitive workload (low vs. high) and eye movements (saccades, fixations, and smooth pursuit). Task difficulty was manipulated through auditory noise, arithmetical counting, and working memory load. We estimated cognitive workload using EOG and EEG-based mental state monitoring. One novelty consists in recording the EOG around the ears (alternative EOG) as well as around the eyes (conventional EOG). The number of blinks and the amplitude of saccades increased with task difficulty (p ≤ 0.05). We found significant correlations between EOG and EEG (theta/alpha ratio) and between the conventional and alternative EOG signals. The increase in cognitive load may disturb the coding and maintenance of related visual information. Alternative EOG metrics could be a valuable tool for detecting workload. |
Anne Buot; Damiano Azzalini; Maximilien Chaumon; Catherine Tallon-Baudry Does stroke volume influence heartbeat evoked responses? Journal Article In: Biological Psychology, vol. 165, pp. 108165, 2021. @article{Buot2021, We know surprisingly little about how heartbeat-evoked responses (HERs) vary with cardiac parameters. Here, we measured both stroke volume, or volume of blood ejected at each heartbeat, with impedance cardiography, and HER amplitude with magneto-encephalography, in 21 male and female participants at rest with eyes open. We observed that HER co-fluctuates with stroke volume on a beat-to-beat basis, but only when no correction for cardiac artifact was performed. This highlights the importance of an ICA correction tailored to the cardiac artifact. We also observed that easy-to-measure cardiac parameters (interbeat intervals, ECG amplitude) are sensitive to stroke volume fluctuations and can be used as proxies when stroke volume measurements are not available. Finally, interindividual differences in stroke volume were reflected in MEG data, but whether this effect is locked to heartbeats is unclear. Altogether, our results question assumptions on the link between stroke volume and HERs. |
Nicole Hakim; Edward Awh; Edward K. Vogel; Monica D. Rosenberg Inter-electrode correlations measured with EEG predict individual differences in cognitive ability Journal Article In: Current Biology, vol. 31, no. 22, pp. 4998–5008, 2021. @article{Hakim2021, Human brains share a broadly similar functional organization with consequential individual variation. This duality in brain function has primarily been observed when using techniques that consider the spatial organization of the brain, such as MRI. Here, we ask whether these common and unique signals of cognition are also present in temporally sensitive but spatially insensitive neural signals. To address this question, we compiled electroencephalogram (EEG) data from individuals of both sexes while they performed multiple working memory tasks at two different data-collection sites (n = 171 and 165). Results revealed that trial-averaged EEG activity exhibited inter-electrode correlations that were stable within individuals and unique across individuals. Furthermore, models based on these inter-electrode correlations generalized across datasets to predict participants' working memory capacity and general fluid intelligence. Thus, inter-electrode correlation patterns measured with EEG provide a signature of working memory and fluid intelligence in humans and a new framework for characterizing individual differences in cognitive abilities. |
Nicole Hakim; Tobias Feldmann-Wüstefeld; Edward Awh; Edward K. Vogel Controlling the flow of distracting information in working memory Journal Article In: Cerebral Cortex, vol. 31, no. 7, pp. 3323–3337, 2021. @article{Hakim2021a, Visual working memory (WM) must maintain relevant information, despite the constant influx of both relevant and irrelevant information. Attentional control mechanisms help determine which of this new information gets access to our capacity-limited WM system. Previous work has treated attentional control as a monolithic process - either distractors capture attention or they are suppressed. Here, we provide evidence that attentional capture may instead be broken down into at least two distinct subcomponent processes: (1) spatial capture, which refers to when spatial attention shifts towards the location of irrelevant stimuli, and (2) item-based capture, which refers to when item-based WM representations of irrelevant stimuli are formed. To dissociate these two subcomponent processes of attentional capture, we utilized a series of electroencephalography components that track WM maintenance (contralateral delay activity), suppression (distractor positivity), item individuation (N2pc), and spatial attention (lateralized alpha power). We show that new, relevant information (i.e., a task-relevant distractor) triggers both spatial and item-based capture. Irrelevant distractors, however, only trigger spatial capture, from which ongoing WM representations can recover more easily. This fractionation of attentional capture into distinct subcomponent processes provides a refined framework for understanding how distracting stimuli affect attention and WM. |
Xin He; Weilin Liu; Nan Qin; Lili Lyu; Xue Dong; Min Bao Performance-dependent reward hurts performance: The non-monotonic attentional load modulation on task-irrelevant distractor processing Journal Article In: Psychophysiology, vol. 58, no. 12, pp. e13920, 2021. @article{He2021b, Selective attention is essential when we face sensory inputs with distractions. In the past decades, Lavie's load theory of selective attention delineates a complete picture of distractor suppression under different attentional control load. The present study was originally designed to explore how reward modulates the load effect of attentional selection. Unexpectedly, it revealed new findings under extended attentional load that was not involved in previous work. Participants were asked to complete a rewarded attentive visual tracking task while presented with irrelevant auditory oddball stimuli, with their behavioral performance, event-related potentials and pupillary responses recorded. We found that although the behavioral performance and pupil sizes varied unidirectionally with the attentional load, the processing of distractors as reflected by the mismatch negativity (MMN) increased first and then decreased. In contrast to the prediction of Lavie's theory that attentional control fails to effectively suppress distractor processing under high attentional control load, our finding suggests that extremely high attentional control load may instead require suppression of distractor processing at a stage as early as possible. Besides, P3a, a positive-polarity response sometimes following the MMN, was not affected by the attentional load, but both N1 (a negative-polarity component peaking ~100 ms from sound onset) and P3a were weakened at higher reward, indicating that reward leads to attenuated early processing of distractor and thus suppresses the attentional orienting towards distractors. 
These findings altogether complement Lavie's load theory of selective attention, presenting a more complex picture of how attentional load and reward affect selective attention. |
Peter J. Hills; Martin R. Vasilev; Panarai Ford; Lucy Snell; Emma Whitworth; Tessa Parsons; Rebecca Morisson; Abigail Silveira; Bernhard Angele Sensory gating is related to positive and disorganised schizotypy in contrast to smooth pursuit eye movements and latent inhibition Journal Article In: Neuropsychologia, vol. 161, pp. 107989, 2021. @article{Hills2021a, Since the characteristics and symptoms of both schizophrenia and schizotypy are manifested heterogeneously, it is possible that different endophenotypes and neurophysiological measures (sensory gating and smooth pursuit eye movement errors) represent different clusters of symptoms. Participants (N = 205) underwent a standard conditioned-pairing paradigm to establish their sensory gating ratio, a smooth-pursuit eye-movement task, a latent inhibition task, and completed the Schizotypal Personality Questionnaire. A Multidimensional Scaling analysis revealed that sensory gating was related to positive and disorganised dimensions of schizotypy. Latent inhibition and prepulse inhibition were not related to any dimension of schizotypy. Smooth pursuit eye movement error was unrelated to sensory gating and latent inhibition, but was related to negative dimensions of schizotypy. Our findings suggest that the symptom clusters associated with two main endophenotypes are largely independent. To fully understand symptomology and outcomes of schizotypal traits, the different subtypes of schizotypy (and potentially, schizophrenia) ought to be considered separately rather than together. |
Christoph Huber-Huber; Julia Steininger; Markus Grüner; Ulrich Ansorge Psychophysical dual-task setups do not measure pre-saccadic attention but saccade-related strengthening of sensory representations Journal Article In: Psychophysiology, vol. 58, no. 5, pp. e13787, 2021. @article{HuberHuber2021a, Visual attention and saccadic eye movements are linked in a tight, yet flexible fashion. In humans, this link is typically studied with dual-task setups. Participants are instructed to execute a saccade to some target location, while a discrimination target is flashed on a screen before the saccade can be made. Participants are also instructed to report a specific feature of this discrimination target at the end of the trial. Discrimination performance is usually better if the discrimination target occurred at the same location as the saccade target compared to when it occurred at a different location, which is explained by the mandatory shift of attention to the saccade target location before saccade onset. This pre-saccadic shift of attention presumably enhances the perception of the discrimination target if it occurred at the same, but not if it occurred at a different location. It is, however, known that a dual-task setup can alter the primary process under investigation. Here, we directly compared pre-saccadic attention in single-task versus dual-task setups using concurrent electroencephalography (EEG) and eye-tracking. Our results corroborate the idea of a pre-saccadic shift of attention. They, however, question whether this shift leads to the same-position discrimination advantage. The relation of saccade and discrimination target position affected the EEG signal only after saccade onset. Our results, thus, favor an alternative explanation based on the role of saccades for the consolidation of sensory and short-term memory. We conclude that studies with dual-task setups arrived at a valid conclusion despite not measuring exactly what they intended to measure. |
Anna Hudson; Amie J. Durston; Sarah D. McCrackin; Roxane J. Itier In: Brain Topography, vol. 34, no. 6, pp. 813–833, 2021. @article{Hudson2021, Facial expression processing is a critical component of social cognition yet, whether it is influenced by task demands at the neural level remains controversial. Past ERP studies have found mixed results with classic statistical analyses, known to increase both Type I and Type II errors, which Mass Univariate statistics (MUS) control better. However, MUS open-access toolboxes can use different fundamental statistics, which may lead to inconsistent results. Here, we compared the output of two MUS toolboxes, LIMO and FMUT, on the same data recorded during the processing of angry and happy facial expressions investigated under three tasks in a within-subjects design. Both toolboxes revealed main effects of emotion during the N170 timing and main effects of task during later time points typically associated with the LPP component. Neither toolbox yielded an interaction between the two factors at the group level, nor at the individual level in LIMO, confirming that the neural processing of these two face expressions is largely independent from task demands. Behavioural data revealed main effects of task on reaction time and accuracy, but no influence of expression or an interaction between the two. Expression processing and task demands are discussed in the context of the consistencies and discrepancies between the two toolboxes and existing literature. |
Silvia L. Isabella; J. Allan Cheyne; Douglas Cheyne Inhibitory control in the absence of awareness: Interactions between frontal and motor cortex oscillations mediate implicitly learned responses Journal Article In: Frontiers in Human Neuroscience, vol. 15, pp. 786035, 2021. @article{Isabella2021, Cognitive control of action is associated with conscious effort and is hypothesised to be reflected by increased frontal theta activity. However, the functional role of these increases in theta power, and how they contribute to cognitive control remains unknown. We conducted an MEG study to test the hypothesis that frontal theta oscillations interact with sensorimotor signals in order to produce controlled behaviour, and that the strength of these interactions will vary with the amount of control required. We measured neuromagnetic activity in 16 healthy adults performing a response inhibition (Go/Switch) task, known from previous work to modulate cognitive control requirements using hidden patterns of Go and Switch cues. Learning was confirmed by reduced reaction times (RT) to patterned compared to random Switch cues. Concurrent measures of pupil diameter revealed changes in subjective cognitive effort with stimulus probability, even in the absence of measurable behavioural differences, revealing instances of covert variations in cognitive effort. Significant theta oscillations were found in five frontal brain regions, with theta power in the right middle frontal and right premotor cortices parametrically increasing with cognitive effort. Similar increases in oscillatory power were also observed in motor cortical gamma, suggesting an interaction. Right middle frontal and right precentral theta activity predicted changes in pupil diameter across all experimental conditions, demonstrating a close relationship between frontal theta increases and cognitive control. 
Although no theta-gamma cross-frequency coupling was found, we observed long-range theta phase coherence among the five significant sources (bilateral middle frontal, right inferior frontal, and bilateral premotor areas), thus providing a mechanism for the relay of cognitive control between frontal and motor areas via theta signalling. Furthermore, this provides the first evidence for the sensitivity of frontal theta oscillations to implicit motor learning and its effects on cognitive load. More generally, these results present a possible mechanism by which this frontal theta network coordinates response preparation, inhibition, and execution. |
Efthymia C. Kapnoula; Bob McMurray In: Brain and Language, vol. 223, pp. 105031, 2021. @article{Kapnoula2021, Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing. |
Hamid Karimi-Rouzbahani; Alexandra Woolgar; Anina N. Rich Neural signatures of vigilance decrements predict behavioural errors before they occur Journal Article In: eLife, vol. 10, pp. e60563, 2021. @article{KarimiRouzbahani2021, There are many monitoring environments, such as railway control, in which lapses of attention can have tragic consequences. Problematically, sustained monitoring for rare targets is difficult, with more misses and longer reaction times over time. What changes in the brain underpin these ‘vigilance decrements'? We designed a multiple-object monitoring (MOM) paradigm to examine how the neural representation of information varied with target frequency and time performing the task. Behavioural performance decreased over time for the rare target (monitoring) condition, but not for a frequent target (active) condition. This was mirrored in neural decoding using magnetoencephalography: coding of critical information declined more in the monitoring than in the active condition over the course of the experiment. We developed new analyses that can predict behavioural errors from the neural data more than a second before they occurred. This facilitates pre-empting behavioural errors due to lapses in attention and provides new insight into the neural correlates of vigilance decrements. |
Julian Q. Kosciessa; Ulman Lindenberger; Douglas D. Garrett Thalamocortical excitability modulation guides human perception under uncertainty Journal Article In: Nature Communications, vol. 12, pp. 2430, 2021. @article{Kosciessa2021, Knowledge about the relevance of environmental features can guide stimulus processing. However, it remains unclear how processing is adjusted when feature relevance is uncertain. We hypothesized that (a) heightened uncertainty would shift cortical networks from a rhythmic, selective processing-oriented state toward an asynchronous (“excited”) state that boosts sensitivity to all stimulus features, and that (b) the thalamus provides a subcortical nexus for such uncertainty-related shifts. Here, we had young adults attend to varying numbers of task-relevant features during EEG and fMRI acquisition to test these hypotheses. Behavioral modeling and electrophysiological signatures revealed that greater uncertainty lowered the rate of evidence accumulation for individual stimulus features, shifted the cortex from a rhythmic to an asynchronous/excited regime, and heightened neuromodulatory arousal. Crucially, this unified constellation of within-person effects was dominantly reflected in the uncertainty-driven upregulation of thalamic activity. We argue that neuromodulatory processes involving the thalamus play a central role in how the brain modulates neural excitability in the face of momentary uncertainty. |
James E. Kragel; Stephan Schuele; Stephen VanHaerents; Joshua M. Rosenow; Joel L. Voss Rapid coordination of effective learning by the human hippocampus Journal Article In: Science Advances, vol. 7, no. 25, pp. eabf7144, 2021. @article{Kragel2021, Although the human hippocampus is necessary for long-term memory, controversial findings suggest that it may also support short-term memory in the service of guiding effective behaviors during learning. We tested the counterintuitive theory that the hippocampus contributes to long-term memory through remarkably short-term processing, as reflected in eye movements during scene encoding. While viewing scenes for the first time, short-term retrieval operative within the episode over only hundreds of milliseconds was indicated by a specific eye-movement pattern, which was effective in that it enhanced spatiotemporal memory formation. This viewing pattern was predicted by hippocampal theta oscillations recorded from depth electrodes and by shifts toward top-down influence of hippocampal theta on activity within visual perception and attention networks. The hippocampus thus supports short-term memory processing that coordinates behavior in the service of effective spatiotemporal learning. |
Wouter Kruijne; Christian N. L. Olivers; Hedderik Rijn Neural repetition suppression modulates time perception: Evidence from electrophysiology and pupillometry Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1230–1252, 2021. @article{Kruijne2021, Human time perception is malleable and subject to many biases. For example, it has repeatedly been shown that stimuli that are physically intense or that are unexpected seem to last longer. Two competing hypotheses have been proposed to account for such biases: One states that these temporal illusions are the result of increased levels of arousal that speeds up neural clock dynamics, whereas the alternative “magnitude coding” account states that the magnitude of sensory responses causally modulates perceived durations. Common experimental paradigms used to study temporal biases cannot dissociate between these accounts, as arousal and sensory magnitude covary and modulate each other. Here, we present two temporal discrimination experiments where two flashing stimuli demarcated the start and end of a to-be-timed interval. These stimuli could be either in the same or a different location, which led to different sensory responses because of neural repetition suppression. Crucially, changes and repetitions were fully predictable, which allowed us to explore effects of sensory response magnitude without changes in arousal or surprise. Intervals with changing markers were perceived as lasting longer than those with repeating markers. We measured EEG (Experiment 1) and pupil size (Experiment 2) and found that temporal perception was related to changes in ERPs (P2) and pupil constriction, both of which have been related to responses in the sensory cortex. Conversely, correlates of surprise and arousal (P3 amplitude and pupil dilation) were unaffected by stimulus repetitions and changes. 
These results demonstrate, for the first time, that sensory magnitude affects time perception even under constant levels of arousal. |
Louisa Kulke; Lena Brümmer; Arezoo Pooresmaeili; Annekathrin Schacht Overt and covert attention shifts to emotional faces: Combining EEG, eye tracking, and a go/no-go paradigm Journal Article In: Psychophysiology, vol. 58, no. 8, pp. e13838, 2021. @article{Kulke2021, In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains, however, unclear how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with measurements of ERPs to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (go conditions) or withheld (no-go conditions). Happy and angry faces led to larger EPN amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERPs (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating an enhanced neural processing of faces if eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the go and no-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content. |
Christoforos Christoforou; Argyro Fella; Paavo H. T. Leppänen; George K. Georgiou; Timothy C. Papadopoulos Fixation-related potentials in naming speed: A combined EEG and eye-tracking study on children with dyslexia Journal Article In: Clinical Neurophysiology, vol. 132, no. 11, pp. 2798–2807, 2021. @article{Christoforou2021, Objective: We combined electroencephalography (EEG) and eye-tracking recordings to examine the underlying factors elicited during the serial Rapid-Automatized Naming (RAN) task that may differentiate between children with dyslexia (DYS) and chronological age controls (CAC). Methods: Thirty children with DYS and 30 CAC (Mage = 9.79 years; age range 7.6 through 12.1 years) performed a set of serial RAN tasks. We extracted fixation-related potentials (FRPs) under phonologically similar (rime-confound) or visually similar (resembling lowercase letters) and dissimilar (non-confounding and discrete uppercase letters, respectively) control tasks. Results: Results revealed significant differences in FRP amplitudes between DYS and CAC groups under the phonologically similar and phonologically non-confounding conditions. No differences were observed in the case of the visual conditions. Moreover, regression analysis showed that the average amplitude of the extracted components significantly predicted RAN performance. Conclusion: FRPs capture neural components during the serial RAN task informative of differences between DYS and CAC and establish a relationship between neurocognitive processes during serial RAN and dyslexia. Significance: We suggest our approach as a methodological model for the concurrent analysis of neurophysiological and eye-gaze data to decipher the role of RAN in reading. |
Edan Daniel; Ilan Dinstein Individual magnitudes of neural variability quenching are associated with motion perception abilities Journal Article In: Journal of Neurophysiology, vol. 125, no. 4, pp. 1111–1120, 2021. @article{Daniel2021, Remarkable trial-by-trial variability is apparent in cortical responses to repeating stimulus presentations. This neural variability across trials is relatively high before stimulus presentation and then reduced (i.e., quenched) ~0.2 s after stimulus presentation. Individual subjects exhibit different magnitudes of variability quenching, and previous work from our lab has revealed that individuals with larger variability quenching exhibit lower (i.e., better) perceptual thresholds in a contrast discrimination task. Here, we examined whether similar findings were also apparent in a motion detection task, which is processed by distinct neural populations in the visual system. We recorded EEG data from 35 adult subjects as they detected the direction of coherent motion in random dot kinematograms. The results demonstrated that individual magnitudes of variability quenching were significantly correlated with coherent motion thresholds, particularly when presenting stimuli with low dot densities, where coherent motion was more difficult to detect. These findings provide consistent support for the hypothesis that larger magnitudes of neural variability quenching are associated with better perceptual abilities in multiple visual domain tasks. NEW & NOTEWORTHY The current study demonstrates that better visual perception abilities in a motion discrimination task are associated with larger quenching of neural variability. In line with previous studies and signal detection theory principles, these findings support the hypothesis that cortical sensory neurons increase reproducibility to enhance detection and discrimination of sensory stimuli. |
Jonathan Daume; Peng Wang; Alexander Maye; Dan Zhang; Andreas K. Engel Non-rhythmic temporal prediction involves phase resets of low-frequency delta oscillations Journal Article In: NeuroImage, vol. 224, pp. 117376, 2021. @article{Daume2021, The phase of neural oscillatory signals aligns to the predicted onset of upcoming stimulation. Whether such phase alignments represent phase resets of underlying neural oscillations or just rhythmically evoked activity, and whether they can be observed in a rhythm-free visual context, however, remains unclear. Here, we recorded the magnetoencephalogram while participants were engaged in a temporal prediction task, judging the visual or tactile reappearance of a uniformly moving stimulus. The prediction conditions were contrasted with a control condition to dissociate phase adjustments of neural oscillations from stimulus-driven activity. We observed stronger delta band inter-trial phase consistency (ITPC) in a network of sensory, parietal and frontal brain areas, but no power increase reflecting stimulus-driven or prediction-related evoked activity. Delta ITPC further correlated with prediction performance in the cerebellum and visual cortex. Our results provide evidence that phase alignments of low-frequency neural oscillations underlie temporal predictions in a non-rhythmic visual and crossmodal context. |
Saeideh Davoudi; Mohsen Parto Dezfouli; Robert T. Knight; Mohammad Reza Daliri; Elizabeth L. Johnson Prefrontal lesions disrupt posterior alpha–gamma coordination of visual working memory representations Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 9, pp. 1798–1810, 2021. @article{Davoudi2021, How does the human brain prioritize different visual representations in working memory (WM)? Here, we define the oscillatory mechanisms supporting selection of “where” and “when” features from visual WM storage and investigate the role of pFC in feature selection. Fourteen individuals with lateral pFC damage and 20 healthy controls performed a visuospatial WM task while EEG was recorded. On each trial, two shapes were presented sequentially in a top/bottom spatial orientation. A retro-cue presented mid-delay prompted which of the two shapes had been in either the top/bottom spatial position or the first/second temporal position. We found that cross-frequency coupling between parieto-occipital alpha (α; 8–12 Hz) oscillations and topographically distributed gamma (γ; 30–50 Hz) activity tracked selection of the distinct cued feature in controls. This signature of feature selection was disrupted in patients with pFC lesions, despite intact α–γ coupling independent of feature selection. These findings reveal a pFC-dependent parieto-occipital α–γ mechanism for the rapid selection of visual WM representations. |
Jan Willem De Gee; Camile M. C. Correa; Matthew Weaver; Tobias H. Donner; Simon Van Gaal Pupil dilation and the slow wave ERP reflect surprise about choice outcome resulting from intrinsic variability in decision confidence Journal Article In: Cerebral Cortex, vol. 31, no. 7, pp. 3565–3578, 2021. @article{DeGee2021, Central to human and animal cognition is the ability to learn from feedback in order to optimize future rewards. Such a learning signal might be encoded and broadcasted by the brain's arousal systems, including the noradrenergic locus coeruleus. Pupil responses and the positive slow wave component of event-related potentials reflect rapid changes in the arousal level of the brain. Here, we ask whether and how these variables may reflect surprise: the mismatch between one's expectation about being correct and the outcome of a decision, when expectations fluctuate due to internal factors (e.g., engagement). We show that during an elementary decision task in the face of uncertainty both physiological markers of phasic arousal reflect surprise. We further show that pupil responses and the slow wave event-related potential are unrelated to each other and that prediction error computations depend on feedback awareness. These results further advance our understanding of the role of central arousal systems in decision-making under uncertainty. |
Megan T. Debettencourt; Stephanie D. Williams; Edward K. Vogel; Edward Awh Sustained attention and spatial attention distinctly influence long-term memory encoding Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 10, pp. 2132–2148, 2021. @article{Debettencourt2021, Our attention is critically important for what we remember. Prior measures of the relationship between attention and memory, however, have largely treated “attention” as a monolith. Here, across three experiments, we provide evidence for two dissociable aspects of attention that influence encoding into long-term memory. Using spatial cues together with a sensitive continuous report procedure, we find that long-term memory response error is affected by both trial-by-trial fluctuations of sustained attention and prioritization via covert spatial attention. Furthermore, using multivariate analyses of EEG, we track both sustained attention and spatial attention before stimulus onset. Intriguingly, even during moments of low sustained attention, there is no decline in the representation of the spatially attended location, showing that these two aspects of attention have robust but independent effects on long-term memory encoding. Finally, sustained and spatial attention predicted distinct variance in long-term memory performance across individuals. That is, the relationship between attention and long-term memory suggests a composite model, wherein distinct attentional subcomponents influence encoding into long-term memory. These results point toward a taxonomy of the distinct attentional processes that constrain our memories. |
Federica Degno; Otto Loberg; Simon P. Liversedge Co-registration of eye movements and fixation-related potentials in natural reading: Practical issues of experimental design and data analysis Journal Article In: Collabra: Psychology, vol. 7, no. 1, pp. 1–28, 2021. @article{Degno2021, A growing number of studies are using co-registration of eye movement (EM) and fixation-related potential (FRP) measures to investigate reading. However, the number of co-registration experiments remains small when compared to the number of studies in the literature conducted with EMs and event-related potentials (ERPs) alone. One reason for this is the complexity of the experimental design and data analyses. The present paper is designed to support researchers who might have expertise in conducting reading experiments with EM or ERP techniques and are wishing to take their first steps towards co-registration research. The objective of this paper is threefold. First, to provide an overview of the issues that such researchers would face. Second, to provide a critical overview of the methodological approaches available to date to deal with these issues. Third, to offer an example pipeline and a full set of scripts for data preprocessing that may be adopted and adapted for one's own needs. The data preprocessing steps are based on EM data parsing via Data Viewer (SR Research), and the provided scripts are written in Matlab and R. Ultimately, with this paper we hope to encourage other researchers to run co-registration experiments to study reading and human cognition more generally. |
Gisella K. Diaz; Edward K. Vogel; Edward Awh Perceptual grouping reveals distinct roles for sustained slow wave activity and alpha oscillations in working memory Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1354–1364, 2021. @article{Diaz2021, Multiple neural signals have been found to track the number of items stored in working memory (WM). These signals include oscillatory activity in the alpha band and slow-wave components in human EEG, both of which vary with storage loads and predict individual differences in WM capacity. However, recent evidence suggests that these two signals play distinct roles in spatial attention and item-based storage in WM. Here, we examine the hypothesis that sustained negative voltage deflections over parieto-occipital electrodes reflect the number of individuated items in WM, whereas oscillatory activity in the alpha frequency band (8–12 Hz) within the same electrodes tracks the attended positions in the visual display. We measured EEG activity while participants stored the orientation of visual elements that were either grouped by collinearity or not. This grouping manipulation altered the number of individuated items perceived while holding constant the number of locations occupied by visual stimuli. The negative slow wave tracked the number of items stored and was reduced in amplitude in the grouped condition. By contrast, oscillatory activity in the alpha frequency band tracked the number of positions occupied by the memoranda and was unaffected by perceptual grouping. Perceptual grouping, then, reduced the number of individuated representations stored in WM as reflected by the negative slow wave, whereas the location of each element was actively maintained as indicated by alpha power. These findings contribute to the emerging idea that distinct classes of EEG signals work in concert to successfully maintain online representations in WM. |
Marcos Domic-Siede; Martín Irani; Joaquín Valdés; Marcela Perrone-Bertolotti; Tomás Ossandón In: NeuroImage, vol. 226, pp. 117557, 2021. @article{DomicSiede2021, Cognitive planning, the ability to develop a sequenced plan to achieve a goal, plays a crucial role in human goal-directed behavior. However, the specific role of frontal structures in planning is unclear. We used a novel and ecological task, that allowed us to separate the planning period from the execution period. The spatio-temporal dynamics of EEG recordings showed that planning induced a progressive and sustained increase of frontal-midline theta activity (FMθ) over time. Source analyses indicated that this activity was generated within the prefrontal cortex. Theta activity from the right mid-Cingulate Cortex (MCC) and the left Anterior Cingulate Cortex (ACC) were correlated with an increase in the time needed for elaborating plans. On the other hand, left Frontopolar cortex (FP) theta activity exhibited a negative correlation with the time required for executing a plan. Since reaction times of planning execution correlated with correct responses, left FP theta activity might be associated with efficiency and accuracy in making a plan. Associations between theta activity from the right MCC and the left ACC with reaction times of the planning period may reflect high cognitive demand of the task, due to the engagement of attentional control and conflict monitoring implementation. In turn, the specific association between left FP theta activity and planning performance may reflect the participation of this brain region in successfully self-generated plans. |
Linda Drijvers; Ole Jensen; Eelke Spaak Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Journal Article In: Human Brain Mapping, vol. 42, no. 4, pp. 1138–1152, 2021. @article{Drijvers2021, During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context. |
Stefan Dürschmid; Andre Maric; Marcel S. Kehl; Robert T. Knight; Hermann Hinrichs; Hans-Jochen Heinze Fronto-temporal regulation of subjective value to suppress impulsivity in intertemporal choices Journal Article In: Journal of Neuroscience, vol. 41, pp. 1727–1737, 2021. @article{Duerschmid2021, Impulsive decisions arise from preferring smaller but sooner rewards compared to larger but later rewards. How neural activity and attention to choice alternatives contribute to reward decisions during temporal discounting is not clear. Here we probed (i) attention to and (ii) neural representation of delay and reward information in humans (both sexes) engaged in choices. We studied behavioral and frequency specific dynamics supporting impulsive decisions on a fine-grained temporal scale using eye tracking and magnetoencephalographic (MEG) recordings. In one condition participants had to decide for themselves but pretended to decide for their best friend in a second prosocial condition, which required perspective taking. Hence, conditions varied in value between choosing for oneself and pretending to choose for another person. Stronger impulsivity was reliably found across three independent groups for prosocial decisions. Eye tracking revealed a systematic shift of attention from the delay to the reward information, and differences in eye tracking between conditions predicted differences in discounting. High frequency activity (HFA: 175-250 Hz) distributed over right fronto-temporal sensors correlated with delay and reward information in consecutive temporal intervals for high value decisions for oneself but not the friend. Collectively the results imply that the HFA recorded over fronto-temporal MEG sensors plays a critical role in choice option integration. |
Amie J. Durston; Roxane J. Itier The early processing of fearful and happy facial expressions is independent of task demands – Support from mass univariate analyses Journal Article In: Brain Research, vol. 1765, pp. 147505, 2021. @article{Durston2021, Most ERP studies on facial expressions of emotion have yielded inconsistent results regarding the time course of emotion effects and their possible modulation by task demands. Most studies have used classical statistical methods with a high likelihood of type I and type II errors, a likelihood that can be reduced with Mass Univariate statistics. FMUT and LIMO are currently the only two available toolboxes for Mass Univariate analysis of ERP data and use different fundamental statistics. Yet, no direct comparison of their output has been performed on the same dataset. Given the current push to transition to robust statistics to increase results replicability, here we compared the output of these toolboxes on data previously analyzed using classic approaches (Itier & Neath-Tavares, 2017). The early (0–352 ms) processing of fearful, happy, and neutral faces was investigated under three tasks in a within-subject design that also controlled gaze fixation location. Both toolboxes revealed main effects of emotion and task but neither yielded an interaction between the two, confirming the early processing of fear and happy expressions is largely independent of task demands. Both toolboxes found virtually no difference between neutral and happy expressions, while fearful (compared to neutral and happy) expressions modulated the N170 and EPN but elicited maximum effects after the N170 peak, around 190 ms. Similarities and differences in the spatial and temporal extent of these effects are discussed in comparison to the published classical analysis and the rest of the ERP literature. |
Tobias Feldmann-Wüstefeld Neural measures of working memory in a bilateral change detection task Journal Article In: Psychophysiology, vol. 58, no. 1, pp. e13683, 2021. @article{FeldmannWuestefeld2021, The change detection task is a widely used paradigm to examine visual working memory processes. Participants memorize a set of items and then try to detect changes in the set after a retention period. The negative slow wave (NSW) and contralateral delay activity (CDA) are event-related potentials in the EEG signal that are commonly used in change detection tasks to track working memory load, as both increase with the number of items maintained in working memory (set size). While the CDA was argued to more purely reflect the memory-specific neural activity than the NSW, it also requires a lateralized design and attention shifts prior to memoranda onset, imposing more restrictions on the task than the NSW. The present study proposes a novel change detection task in which both CDA and NSW can be measured at the same time. Memory items were presented bilaterally, but their distribution in the left and right hemifield varied, inducing a target imbalance or “net load.” NSW increased with set size, whereas CDA increased with net load. In addition, a multivariate linear classifier was able to decode the set size and net load from the EEG signal. CDA, NSW, and decoding accuracy predicted an individual's working memory capacity. In line with the notion of a bilateral advantage in working memory, accuracy and CDA data suggest that participants tended to encode items relatively balanced. In sum, this novel change detection task offers a basis to make use of converging neural measures of working memory in a comprehensive paradigm. |
Tobias Feldmann-Wüstefeld; Marina Weinberger; Edward Awh Spatially guided distractor suppression during visual search Journal Article In: Journal of Neuroscience, vol. 41, no. 14, pp. 3180–3191, 2021. @article{FeldmannWuestefeld2021a, Past work has demonstrated that active suppression of salient distractors is a critical part of visual selection. Evidence for goal-driven suppression includes below-baseline visual encoding at the position of salient distractors (Gaspelin and Luck, 2018) and neural signals such as the distractor positivity (Pd) that track how many distractors are presented in a given hemifield (Feldmann-Wüstefeld and Vogel, 2019). One basic question regarding distractor suppression is whether it is inherently spatial or nonspatial in character. Indeed, past work has shown that distractors evoke both spatial (Theeuwes, 1992) and nonspatial forms of interference (Folk and Remington, 1998), motivating a direct examination of whether space is integral to goal-driven distractor suppression. Here, we use behavioral and EEG data from adult humans (male and female) to provide clear evidence for a spatial gradient of suppression surrounding salient singleton distractors. Replicating past work, both reaction time and neural indices of target selection improved monotonically as the distance between target and distractor increased. Importantly, these target selection effects were paralleled by a monotonic decline in the amplitude of the Pd, an electrophysiological index of distractor suppression. Moreover, multivariate analyses revealed spatially selective activity in the alpha band that tracked the position of the target and, critically, revealed suppressed activity at spatial channels centered on distractor positions. Thus, goal-driven selection of relevant over irrelevant information benefits from a spatial gradient of suppression surrounding salient distractors. |
Joshua J. Foster; William Thyer; Janna W. Wennberg; Edward Awh Covert attention increases the gain of stimulus-evoked population codes Journal Article In: Journal of Neuroscience, vol. 41, no. 8, pp. 1802–1815, 2021. @article{Foster2021, Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the EEG in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ∼100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with stimulus contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing. |
Wendel M. Friedl; Andreas Keil Aversive conditioning of spatial position sharpens neural population-level tuning in visual cortex and selectively alters alpha-band activity Journal Article In: Journal of Neuroscience, vol. 41, no. 26, pp. 5723–5733, 2021. @article{Friedl2021, Processing capabilities for many low-level visual features are experientially malleable, aiding sighted organisms in adapting to dynamic environments. Explicit instructions to attend a specific visual field location influence retinotopic visuocortical activity, amplifying responses to stimuli appearing at cued spatial positions. It remains undetermined both how such prioritization affects surrounding nonprioritized locations, and if a given retinotopic spatial position can attain enhanced cortical representation through experience rather than instruction. The current report examined visuocortical response changes as human observers (N = 51, 19 male) learned, through differential classical conditioning, to associate specific screen locations with aversive outcomes. Using dense-array EEG and pupillometry, we tested the preregistered hypotheses of either sharpening or generalization around an aversively associated location following a single conditioning session. Competing hypotheses tested whether mean response changes would take the form of a Gaussian (generalization) or difference-of-Gaussian (sharpening) distribution over spatial positions, peaking at the viewing location paired with a noxious noise. Occipital 15 Hz steady-state visual evoked potential responses were selectively heightened when viewing aversively paired locations and displayed a nonlinear, difference-of-Gaussian profile across neighboring locations, consistent with suppressive surround modulation of nonprioritized positions. 
Measures of alpha-band (8-12 Hz) activity were differentially altered in anterior versus posterior locations, while pupil diameter exhibited selectively heightened responses to noise-paired locations but did not evince differences across the nonpaired locations. These results indicate that visuocortical spatial representations are sharpened in response to location-specific aversive conditioning, while top-down influences indexed by alpha-power reduction exhibit posterior generalization and anterior sharpening. |
R. Frömer; H. Lin; C. K. Dean Wolf; M. Inzlicht; A. Shenhav Expectations of reward and efficacy guide cognitive control allocation Journal Article In: Nature Communications, vol. 12, pp. 1030, 2021. @article{Froemer2021, The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one's control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory's key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest. |
Jordan Garrett; Tom Bullock; Barry Giesbrecht Tracking the contents of spatial working memory during an acute bout of aerobic exercise Journal Article In: Journal of Cognitive Neuroscience, vol. 33, no. 7, pp. 1271–1286, 2021. @article{Garrett2021, Recent studies have reported enhanced visual responses during acute bouts of physical exercise, suggesting that sensory systems may become more sensitive during active exploration of the environment. This raises the possibility that exercise may also modulate brain activity associated with other cognitive functions, like visual working memory, that rely on patterns of activity that persist beyond the initial sensory evoked response. Here, we investigated whether the neural coding of an object location held in memory is modulated by an acute bout of aerobic exercise. Participants performed a spatial change detection task while seated on a stationary bike at rest and during low-intensity cycling (∼50 watts/50 RPM). Brain activity was measured with EEG. An inverted encoding modeling technique was employed to estimate location-selective channel response functions from topographical patterns of alpha-band (8–12 Hz) activity. There was strong evidence of robust spatially selective responses during stimulus presentation and retention periods both at rest and during exercise. During retention, the spatial selectivity of these responses decreased in the exercise condition relative to rest. A temporal generalization analysis indicated that models trained on one time period could be used to reconstruct the remembered locations at other time periods; however, generalization was degraded during exercise. Together, these results demonstrate that it is possible to reconstruct the contents of working memory at rest and during exercise, but that exercise can result in degraded responses, which contrasts with the enhancements observed in early sensory processing. |
2020 |
Simon Majed Ceh; Sonja Annerer-Walcher; Christof Körner; Christian Rominger; Silvia Erika Kober; Andreas Fink; Mathias Benedek Neurophysiological indicators of internal attention: An electroencephalography–eye-tracking coregistration study Journal Article In: Brain and Behavior, vol. 10, no. 10, pp. 1–14, 2020. @article{Ceh2020, Introduction: Many goal-directed and spontaneous everyday activities (e.g., planning, mind wandering) rely on an internal focus of attention. Internally directed cognition (IDC) was shown to differ from externally directed cognition in a range of neurophysiological indicators such as electroencephalogram (EEG) alpha activity and eye behavior. Methods: In this EEG–eye-tracking coregistration study, we investigated effects of attention direction on EEG alpha activity and various relevant eye parameters. We used an established paradigm to manipulate internal attention demands in the visual domain within tasks by means of conditional stimulus masking. Results: Consistent with previous research, IDC involved relatively higher EEG alpha activity (lower alpha desynchronization) at posterior cortical sites. Moreover, IDC was characterized by greater pupil diameter (PD), fewer microsaccades, fixations, and saccades. These findings show that internal versus external cognition is associated with robust differences in several indicators at the neural and perceptual level. In a second line of analysis, we explored the intrinsic temporal covariation between EEG alpha activity and eye parameters during rest. This analysis revealed a positive correlation of EEG alpha power with PD especially in bilateral parieto-occipital regions. Conclusion: Together, these findings suggest that EEG alpha activity and PD represent time-sensitive indicators of internal attention demands, which may be involved in a neurophysiological gating mechanism serving to shield internal cognition from irrelevant sensory information. |
Peter De Lissa; Roberto Caldara; Victoria Nicholls; Sebastien Miellet In pursuit of visual attention: SSVEP frequency-tagging moving targets Journal Article In: PLoS ONE, vol. 15, no. 8, pp. e0236967, 2020. @article{DeLissa2020, Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes. |
Andrea Desantis; Adrien Chan-Hon-Tong; Thérèse Collins; Hinze Hogendoorn; Patrick Cavanagh Decoding the temporal dynamics of covert spatial attention using multivariate EEG analysis: Contributions of raw amplitude and alpha power Journal Article In: Frontiers in Human Neuroscience, vol. 14, pp. 570419, 2020. @article{Desantis2020, Attention can be oriented in space covertly without the need of eye movements. We used multivariate pattern classification analyses (MVPA) to investigate whether the time course of the deployment of covert spatial attention leading up to the observer's perceptual decision can be decoded from both EEG alpha power and raw activity traces. Decoding attention from these signals can help determine whether raw EEG signals and alpha power reflect the same or distinct features of attentional selection. Using a classical cueing task, we showed that the orientation of covert spatial attention can be decoded by both signals. However, raw activity and alpha power may reflect different features of spatial attention, with alpha power more associated with the orientation of covert attention in space and raw activity with the influence of attention on perceptual processes. |
Elisa C. Dias; Abraham C. Van Voorhis; Filipe Braga; Julianne Todd; Javier Lopez-Calderon; Antigona Martinez; Daniel C. Javitt Impaired fixation-related theta modulation predicts reduced visual span and guided search deficits in schizophrenia Journal Article In: Cerebral Cortex, vol. 30, no. 5, pp. 2823–2833, 2020. @article{Dias2020, During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation during a Guided Visual Search task in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25-HC/25-SCZ) evaluated integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed in fixation-related theta phase-reset across conditions, which fully predicted reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia. |
Nadine Dijkstra; Luca Ambrogioni; Diego Vidaurre; Marcel Van Gerven Neural dynamics of perceptual inference and its reversal during imagery Journal Article In: eLife, vol. 9, pp. 1–19, 2020. @article{Dijkstra2020, After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery. |
Troy Dildine; Elizabeth Necka; Lauren Yvette Atlas Confidence in subjective pain is predicted by reaction time during decision making Journal Article In: Scientific Reports, vol. 10, pp. 21373, 2020. @article{Dildine2020, Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals' association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain. |
Ciara Egan; Filipe Cristino; Joshua S. Payne; Guillaume Thierry; Manon W. Jones How alliteration enhances conceptual–attentional interactions in reading Journal Article In: Cortex, vol. 124, pp. 111–118, 2020. @article{Egan2020, In linguistics, the relationship between phonological word form and meaning is mostly considered arbitrary. Why, then, do literary authors traditionally craft sound relationships between words? We set out to characterise how dynamic interactions between word form and meaning may account for this literary practice. Here, we show that alliteration influences both meaning integration and attentional engagement during reading. We presented participants with adjective-noun phrases, having manipulated semantic relatedness (congruent, incongruent) and form repetition (alliterating, non-alliterating) orthogonally, as in “dazzling-diamond”; “sparkling-diamond”; “dangerous-diamond”; and “creepy-diamond”. Using simultaneous recording of event-related brain potentials and pupil dilation (PD), we establish that, whilst semantic incongruency increased N400 amplitude as expected, it reduced PD, an index of attentional engagement. Second, alliteration affected semantic evaluation of word pairs, since it reduced N400 amplitude even in the case of unrelated items (e.g., “dangerous-diamond”). Third, alliteration specifically boosted attentional engagement for related words (e.g., “dazzling-diamond”), as shown by a sustained negative correlation between N400 amplitudes and PD change after the window of lexical integration. Thus, alliteration strategically arouses attention during reading and when comprehension is challenged, phonological information helps readers link concepts beyond the level of literal semantics. Overall, our findings provide a tentative mechanism for the empowering effect of sound repetition in literary constructs. |
Thomas Geyer; Franziska Günther; Hermann J. Müller; Jim Kacian; Heinrich René Liesefeld; Stella Pierides Reading English-language haiku: An eye-movement study of the 'cut effect' Journal Article In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–29, 2020. @article{Geyer2020, The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the 'cut', a more or less sharp semantic-conceptual break, in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) 'phrase' image and a one-line 'fragment' image, in order to determine how they process the conceptual gap between these images when constructing the poem's meaning, as reflected in their patterns of reading eye movements. In addition to replicating the basic 'cut effect', i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of 'uncut', one-image haiku. For 'cut' vs. 'uncut' haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context-action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. 
We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems. |
Christophe C. Le Dantec; Aaron R. Seitz Dissociating electrophysiological correlates of contextual and perceptual learning in a visual search task Journal Article In: Journal of Vision, vol. 20, no. 6, pp. 1–15, 2020. @article{LeDantec2020, Perceptual learning and contextual learning are two types of implicit visual learning that can co-occur in the same tasks. For example, to find an animal in the woods, you need to know where to look in the environment (contextual learning) and you must be able to discriminate its features (perceptual learning). However, contextual and perceptual learning are typically studied using distinct experimental paradigms, and little is known regarding their comparative neural mechanisms. In this study, we investigated contextual and perceptual learning in 12 healthy adult humans as they performed the same visual search task, and we examined psychophysical and electrophysiological (event-related potentials) measures of learning. Participants were trained to look for a visual stimulus, a small line with a specific orientation, presented among distractors. We found better performance for the trained target orientation as compared to an untrained control orientation, reflecting specificity of perceptual learning for the orientation of trained elements. This orientation specificity effect was associated with changes in the C1 component. We also found better performance for repeated spatial configurations as compared to novel ones, reflecting contextual learning. This context-specific effect was associated with the N2pc component. Taken together, these results suggest that contextual and perceptual learning are distinct visual learning phenomena that have different behavioral and electrophysiological characteristics. |
Alfred Lim; Steve M. J. Janssen; Jason Satel Exploring the temporal dynamics of inhibition of return using steady-state visual evoked potentials Journal Article In: Cognitive, Affective and Behavioral Neuroscience, pp. 1349–1364, 2020. @article{Lim2020, Inhibition of return is characterized by delayed responses to previously attended locations when the interval between stimuli is long enough. The present study employed steady-state visual evoked potentials (SSVEPs) as a measure of attentional modulation to explore the nature and time course of input- and output-based inhibitory cueing mechanisms that each slow response times at previously stimulated locations under different experimental conditions. The neural effects of behavioral inhibition were examined by comparing post-cue SSVEPs between cued and uncued locations measured across two tasks that differed only in the response modality (saccadic or manual response to targets). Grand averages of SSVEP amplitudes for each condition showed a reduction in amplitude at cued locations in the window of 100-500 ms post-cue, revealing an early, short-term decrease in the responses of neurons that can be attributed to sensory adaptation, regardless of response modality. Because primary visual cortex has been found to be one of the major sources of SSVEP signals, the results suggest that the SSVEP modulations observed were caused by input-based inhibition that occurred in V1, or visual areas earlier than V1, as a consequence of reduced visual input activity at previously cued locations. No SSVEP modulations were observed in either response condition late in the cue-target interval, suggesting that neither late input- nor output-based IOR modulates SSVEPs. These findings provide further electrophysiological support for the theory of multiple mechanisms contributing to behavioral cueing effects. |
Jakub Limanowski; Vladimir Litvak; Karl Friston Cortical beta oscillations reflect the contextual gating of visual action feedback Journal Article In: NeuroImage, vol. 222, pp. 117267, 2020. @article{Limanowski2020, In sensorimotor integration, the brain needs to decide how its predictions should accommodate novel evidence by ‘gating' sensory data depending on the current context. Here, we examined the oscillatory correlates of this process by recording magnetoencephalography (MEG) data during a new task requiring action under intersensory conflict. We used virtual reality to decouple visual (virtual) and proprioceptive (real) hand postures during a task in which the phase of grasping movements tracked a target (in either modality). Thus, we rendered visual information either task-relevant or a (to-be-ignored) distractor. Under visuo-proprioceptive incongruence, occipital beta power decreased (relative to congruence) when vision was task-relevant but increased when it had to be ignored. Dynamic causal modeling (DCM) revealed that this interaction was best explained by diametrical, task-dependent changes in visual gain. These results suggest a crucial role for beta oscillations in the contextual gating (i.e., gain or precision control) of visual vs proprioceptive action feedback, depending on current behavioral demands. |
Kevin P. Madore; Anna M. Khazenzon; Cameron W. Backes; Jiefeng Jiang; Melina R. Uncapher; Anthony M. Norcia; Anthony D. Wagner Memory failure predicted by attention lapsing and media multitasking Journal Article In: Nature, vol. 587, no. 7832, pp. 87–91, 2020. @article{Madore2020, With the explosion of digital media and technologies, scholars, educators and the public have become increasingly vocal about the role that an ‘attention economy' has in our lives. The rise of the current digital culture coincides with longstanding scientific questions about why humans sometimes remember and sometimes forget, and why some individuals remember better than others. Here we examine whether spontaneous attention lapses (in the moment, across individuals and as a function of everyday media multitasking) negatively correlate with remembering. Electroencephalography and pupillometry measures of attention were recorded as eighty young adults (mean age, 21.7 years) performed a goal-directed episodic encoding and retrieval task. Trait-level sustained attention was further quantified using task-based and questionnaire measures. Using trial-to-trial retrieval data, we show that tonic lapses in attention in the moment before remembering, assayed by posterior alpha power and pupil diameter, were correlated with reductions in neural signals of goal coding and memory, along with behavioural forgetting. Independent measures of trait-level attention lapsing mediated the relationship between neural assays of lapsing and memory performance, and between media multitasking and memory. Attention lapses partially account for why we remember or forget in the moment, and why some individuals remember better than others. Heavier media multitasking is associated with a propensity to have attention lapses and forget. |
Alie G. Male; Robert P. O'Shea; Erich Schröger; Dagmar Müller; Urte Roeber; Andreas Widmann In: Psychophysiology, vol. 57, no. 6, pp. e13576, 2020. @article{Male2020, Research shows that the visual system monitors the environment for changes. For example, a left-tilted bar, a deviant, that appears after several presentations of a right-tilted bar, standards, elicits a classic visual mismatch negativity (vMMN): greater negativity for deviants than standards in event-related potentials (ERPs) between 100 and 300 ms after onset of the deviant. The classic vMMN is contributed to by adaptation; it can be distinguished from the genuine vMMN that, through use of control conditions, compares standards and deviants that are equally adapted and physically identical. To determine whether the vMMN follows similar principles to the auditory mismatch negativity (MMN), in two experiments we searched for a genuine vMMN from simple, physiologically plausible stimuli that change in fundamental dimensions: orientation, contrast, phase, and spatial frequency. We carefully controlled for attention and eye movements. We found no evidence for the genuine vMMN, despite adequate statistical power. We conclude that either the genuine vMMN is a rather unstable phenomenon that depends on still-to-be-identified experimental parameters, or it is confined to visual stimuli for which monitoring across time is more natural than monitoring over space, such as for high-level features. We also observed an early deviant-related positivity that we propose might reflect earlier predictive processing. |
Radha Nila Meghanathan; Cees van Leeuwen; Marcello Giannini; Andrey R. Nikolaev Neural correlates of task-related refixation behavior Journal Article In: Vision Research, vol. 175, pp. 90–101, 2020. @article{Meghanathan2020, Eye movement research has shown that attention shifts from the currently fixated location to the next before a saccade is executed. We investigated whether the cost of the attention shift depends on higher-order processing at the time of fixation, in particular on visual working memory load differences between fixations and refixations on task-relevant items. The attention shift is reflected in EEG activity in the saccade-related potential (SRP). In a free viewing task involving visual search and memorization of multiple targets amongst distractors, we compared the SRP in first fixations versus refixations on targets and distractors. The task-relevance of targets implies that more information will be loaded in memory (e.g. both identity and location) than for distractors (e.g. location only). First fixations will involve greater memory load than refixations, since first fixations involve loading of new items, while refixations involve rehearsal of previously visited items. The SRP in the interval preceding the saccade away from a target or distractor revealed that saccade preparation is affected by task-relevance and refixation behavior. For task-relevant items only, we found longer fixation duration and higher SRP amplitudes for first fixations than for refixations over the occipital region and the opposite effect over the frontal region. Our findings provide first neurophysiological evidence that working memory loading of task-relevant information at fixation affects saccade planning. |
Nick B. Pandža; Ian Phillips; Valerie P. Karuzis; Polly O'Rourke; Stefanie E. Kuchinsky Neurostimulation and pupillometry: New directions for learning and research in applied linguistics Journal Article In: Annual Review of Applied Linguistics, vol. 40, pp. 56–77, 2020. @article{Pandza2020, This paper begins by discussing new trends in the use of neurostimulation techniques in cognitive science and learning research, as well as the nascent research on their application in second language learning. To illustrate this, an experiment designed to investigate the impact of transcutaneous vagus nerve stimulation (tVNS), which is delivered via earbuds, on how learners process and learn Mandarin tones is reported. Pupillometry, which is an index of cognitive effort, is explained and illustrated as one way to assess the impact of tVNS. Participants in the study were native English speakers, naïve to tone languages, pseudorandomly assigned to active or control conditions, while balancing for nonlinguistic pitch ability and musical experience. Their performance after tVNS was assessed using a range of more traditional language outcome measures, including accuracy and reaction times from lexical recognition and recall tasks and was triangulated with pupillometry during word-learning to help understand the mechanism through which tVNS operates. Findings are discussed in light of the literatures on lexical tone learning, cognitive effort, and neurostimulation, including specific benefits for learners of tone languages. Recommendations are made for future work on the increasingly popular area of neurostimulation for the field of applied linguistics in the 40th anniversary issue of ARAL. |
Christian Pfeiffer; Nora Hollenstein; Ce Zhang; Nicolas Langer Neural dynamics of sentiment processing during naturalistic sentence reading Journal Article In: NeuroImage, vol. 218, pp. 116934, 2020. @article{Pfeiffer2020, When we read, our eyes move through the text in a series of fixations and high-velocity saccades to extract visual information. This process allows the brain to obtain meaning, e.g., about sentiment, or the emotional valence, expressed in the written text. How exactly the brain extracts the sentiment of single words during naturalistic reading is largely unknown. This is due to the challenges of naturalistic imaging, which has previously led researchers to employ highly controlled, timed word-by-word presentations of custom reading materials that lack ecological validity. Here, we aimed to assess the electrical neural correlates of word sentiment processing during naturalistic reading of English sentences. We used a publicly available dataset of simultaneous electroencephalography (EEG), eye-tracking recordings, and word-level semantic annotations from 7129 words in 400 sentences (Zurich Cognitive Language Processing Corpus; Hollenstein et al., 2018). We computed fixation-related potentials (FRPs), which are evoked electrical responses time-locked to the onset of fixations. A general linear mixed model analysis of FRPs cleaned from visual- and motor-evoked activity showed a topographical difference between the positive and negative sentiment condition in the 224–304 ms interval after fixation onset in left-central and right-posterior electrode clusters. An additional analysis that included word-, phrase-, and sentence-level sentiment predictors showed the same FRP differences for the word-level sentiment, but no additional FRP differences for phrase- and sentence-level sentiment. 
Furthermore, a decoding analysis that classified word sentiment (positive or negative) from sentiment-matched 40-trial average FRPs showed a 0.60 average accuracy (95% confidence interval: [0.58, 0.61]). Control analyses ruled out that these results were based on differences in eye movements or linguistic features other than word sentiment. Our results extend previous research by showing that the emotional valence of lexico-semantic stimuli evokes a fast electrical neural response upon word fixation during naturalistic reading. These results provide an important step toward identifying the neural processes of lexico-semantic processing in ecologically valid conditions and can serve to improve computer algorithms for natural language processing. |
Reuben Rideaux; Elizabeth Michael; Andrew E. Welchman Adaptation to binocular anticorrelation results in increased neural excitability Journal Article In: Journal of Cognitive Neuroscience, vol. 32, no. 1, pp. 100–110, 2020. @article{Rideaux2020, Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Many neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, many binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials (SSVEPs) in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting ‘what not' neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression resulting in increased neural excitability. |
Andre Roelke; Christian Vorstius; Ralph Radach; Markus J. Hofmann Fixation-related NIRS indexes retinotopic occipital processing of parafoveal preview during natural reading Journal Article In: NeuroImage, vol. 215, pp. 116823, 2020. @article{Roelke2020, While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movements. |
Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang An improved classification model for depression detection using EEG and eye tracking data Journal Article In: IEEE Transactions on Nanobioscience, vol. 19, no. 3, pp. 527–537, 2020. @article{Zhu2020a, At present, depression has become a major health burden worldwide. However, there are many problems with the diagnosis of depression, such as low patient cooperation, subjective bias and low accuracy. Therefore, a reliable and objective evaluation method is needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movements (EMs) data have been widely used for depression detection due to their advantages of easy recording and non-invasion. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic CBEM were discussed. In the proposed model, the EEG or EMs dataset was divided into subsets by the context of the experiments, and then a majority vote strategy was used to determine the subjects' labels. The method was validated on two datasets, one comprising free-viewing eye tracking and the other resting-state EEG, with 36 and 34 subjects, respectively. On these two datasets, CBEM achieves accuracies of 82.5% and 92.65%, respectively. The results show that CBEM outperforms traditional classification methods. Our findings provide an effective solution for improving the accuracy of depression identification, which in the future could be used for the auxiliary diagnosis of depression. |
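The ensemble step described in the abstract above, with per-context subsets combined by a majority vote over subset-level predictions, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name and the 0/1 label interface are assumptions.

```python
from collections import Counter

def cbem_label(subset_predictions):
    # Illustrative CBEM-style aggregation: each element of
    # `subset_predictions` is the 0/1 label predicted by a classifier
    # trained on one experiment-context subset of a subject's EEG or
    # eye-movement data. The subject's final label is the majority vote.
    counts = Counter(subset_predictions)
    return counts.most_common(1)[0][0]

# Hypothetical subject with three context subsets, two voting "depressed" (1):
print(cbem_label([1, 1, 0]))  # → 1
```

The vote simply picks the most frequent subset-level label; in the paper's dynamic variant the subset partitioning itself depends on the experimental content, but the aggregation principle is the same.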
Bin Zhao; Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Minbo Chen; Yingjian Fu; Longbiao Wang Brain network reconstruction of speech production based on electro-encephalography and eye movement Journal Article In: Acoustical Science and Technology, vol. 41, no. 1, pp. 349–350, 2020. @article{Zhao2020, To fully understand the brain mechanism associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics during the whole speech processing range [1]. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions, while lacking information of the time courses [2]. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) with high temporal resolution are inferior in source localization, and are also easily buried in electromagnetic artifacts from muscular actions in articulation, thus interfering with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavior data (eye movement and speech) were used for segmenting cognitive stages. EEG data went through independent component analyses (ICA), component clustering, and time-varying (adaptive) multi-variate autoregressive modeling [3] for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and literature review were followed to interpret the brain dynamic results for better understanding the speech functions. |
Hong Zeng; Junjie Shen; Wenming Zheng; Aiguo Song; Jia Liu Toward measuring target perception: First-order and second-order deep network pipeline for classification of fixation-related potentials Journal Article In: Journal of Healthcare Engineering, pp. 1–15, 2020. @article{Zeng2020, Top-down determined visual object perception refers to the ability of a person to identify a prespecified visual target. This paper studies the technical foundation for measuring the target-perceptual ability in a guided visual search task, using the EEG-based brain imaging technique. Specifically, it focuses on the feature representation learning problem for single-trial classification of fixation-related potentials (FRPs). The existing methods either capture only first-order statistics while ignoring second-order statistics in data, or directly extract second-order statistics with covariance matrices estimated with raw FRPs that suffer from low signal-to-noise ratio. In this paper, we propose a new representation learning pipeline involving a low-level convolution subnetwork followed by a high-level Riemannian manifold subnetwork, with a novel midlevel pooling layer bridging them. In this way, the discriminative power of the first-order features can be increased by the convolution subnetwork, while the second-order information in the convolutional features could further be deeply learned with the subsequent Riemannian subnetwork. In particular, the temporal ordering of FRPs is well preserved for the components in our pipeline, which is considered to be a valuable source of discriminant information. The experimental results show that the proposed approach leads to improved classification performance and robustness to lack of data over the state-of-the-art ones, thus making it appealing for practical applications in measuring the target-perceptual ability of cognitively impaired patients with the FRP technique. |
Artyom Zinchenko; Markus Conci; Thomas Töllner; Hermann J. Müller; Thomas Geyer Automatic guidance (and misguidance) of visuospatial attention by acquired scene memory: Evidence from an N1pc polarity reversal Journal Article In: Psychological Science, vol. 31, no. 12, pp. 1–13, 2020. @article{Zinchenko2020a, Visual search is facilitated when the target is repeatedly encountered at a fixed position within an invariant (vs. randomly variable) distractor layout—that is, when the layout is learned and guides attention to the target, a phenomenon known as contextual cuing. Subsequently changing the target location within a learned layout abolishes contextual cuing, which is difficult to relearn. Here, we used lateralized event-related electroencephalogram (EEG) potentials to explore memory-based attentional guidance (N = 16). The results revealed reliable contextual cuing during initial learning and an associated EEG-amplitude increase for repeated layouts in attention-related components, starting with an early posterior negativity (N1pc, 80–180 ms). When the target was relocated to the opposite hemifield following learning, contextual cuing was effectively abolished, and the N1pc was reversed in polarity (indicative of persistent misguidance of attention to the original target location). Thus, once learned, repeated layouts trigger attentional-priority signals from memory that proactively interfere with contextual relearning after target relocation. |
Niklas Wilming; Peter R. Murphy; Florent Meyniel; Tobias H. Donner Large-scale dynamics of perceptual decision information across human cortex Journal Article In: Nature Communications, vol. 11, pp. 5109, 2020. @article{Wilming2020, Perceptual decisions entail the accumulation of sensory evidence for a particular choice towards an action plan. An influential framework holds that sensory cortical areas encode the instantaneous sensory evidence and downstream, action-related regions accumulate this evidence. The large-scale distribution of this computation across the cerebral cortex has remained largely elusive. Here, we develop a regionally-specific magnetoencephalography decoding approach to exhaustively map the dynamics of stimulus- and choice-specific signals across the human cortical surface during a visual decision. Comparison with the evidence accumulation dynamics inferred from behavior disentangles stimulus-dependent and endogenous components of choice-predictive activity across the visual cortical hierarchy. We find such an endogenous component in early visual cortex (including V1), which is expressed in a low (<20 Hz) frequency band and tracks, with delay, the build-up of choice-predictive activity in (pre-) motor regions. Our results are consistent with choice- and frequency-specific cortical feedback signaling during decision formation. |
Tommy J. Wilson; John J. Foxe Cross-frequency coupling of alpha oscillatory power to the entrainment rhythm of a spatially attended input stream Journal Article In: Cognitive Neuroscience, vol. 11, no. 1-2, pp. 71–91, 2020. @article{Wilson2020, Neural entrainment and alpha oscillatory power (8–14 Hz) are mechanisms of selective attention. The extent to which these two mechanisms interact, especially in the context of visuospatial attention, is unclear. Here, we show that spatial attention to a delta-frequency, rhythmic visual stimulus in one hemifield results in phase-amplitude coupling between the delta-phase of an entrained frontal source and alpha power generated by ipsilateral visuocortical regions. The driving of ipsilateral alpha power by frontal delta also correlates with task performance. Our analyses suggest that neural entrainment may serve a previously underappreciated role in coordinating macroscale brain networks and that inhibition of processing by alpha power can be coupled to an attended temporal structure. Finally, we note that the observed coupling bolsters one dominant hypothesis of modern cognitive neuroscience, that macroscale brain networks and distributed neural computation are coordinated by oscillatory synchrony and cross-frequency interactions. |
Lisa Wirz; Lars Schwabe Prioritized attentional processing: Acute stress, memory and stimulus emotionality facilitate attentional disengagement Journal Article In: Neuropsychologia, vol. 138, pp. 107334, 2020. @article{Wirz2020, Rapid attentional orienting toward relevant stimuli and efficient disengagement from irrelevant stimuli are critical for survival. Here, we examined the roles of memory processes, emotional arousal and acute stress in attentional disengagement. To this end, 64 healthy participants encoded negative and neutral facial expressions and, after being exposed to a stress or control manipulation, performed an attention task in which they had to disengage from these previously encoded as well as novel face stimuli. During the attention task, electroencephalography (EEG) and pupillometry data were recorded. Our results showed overall faster reaction times after acute stress and when participants had to disengage from emotionally negative or old facial expressions. Further, pupil dilations were larger in response to neutral faces. During disengagement, our EEG data revealed a reduced N2pc amplitude when participants disengaged from neutral compared to negative facial expressions when these were not presented before, as well as earlier onset latencies for the N400f (for disengagement from negative and old faces), the N2pc, and the LPP (for disengagement from negative faces). In addition, early visual processing of negative faces, as reflected in the P1 amplitude, was enhanced specifically in stressed participants. Our findings indicate that attentional disengagement is improved for negative and familiar stimuli and that stress facilitates not only attentional disengagement but also emotional processing in general. Together, these processes may represent important mechanisms enabling efficient performance and rapid threat detection. |
G. Elliott Wimmer; Yunzhe Liu; Neža Vehar; Timothy E. J. Behrens; Raymond J. Dolan Episodic memory retrieval success is associated with rapid replay of episode content Journal Article In: Nature Neuroscience, vol. 23, no. 8, pp. 1025–1033, 2020. @article{Wimmer2020, Retrieval of everyday experiences is fundamental for informing our future decisions. The fine-grained neurophysiological mechanisms that support such memory retrieval are largely unknown. We studied participants who first experienced, without repetition, unique multicomponent 40–80-s episodes. One day later, they engaged in cued retrieval of these episodes while undergoing magnetoencephalography. By decoding individual episode elements, we found that trial-by-trial successful retrieval was supported by the sequential replay of episode elements, with a temporal compression factor of >60. The direction of replay supporting retrieval, either backward or forward, depended on whether the task goal was to retrieve elements of an episode that followed or preceded, respectively, a retrieval cue. This sequential replay was weaker in very-high-performing participants, in whom instead we found evidence for simultaneous clustered reactivation. Our results demonstrate that memory-mediated decisions are supported by a rapid replay mechanism that can flexibly shift in direction in response to task goals. |
Steven W. Savage; Douglas D. Potter; Benjamin W. Tatler The effects of cognitive distraction on behavioural, oculomotor and electrophysiological metrics during a driving hazard perception task Journal Article In: Accident Analysis and Prevention, vol. 138, pp. 1–11, 2020. @article{Savage2020, Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye fixation-related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance. |
Christoph Schneider; Michael Pereira; Luca Tonin; José del R. Millán Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task Journal Article In: Brain Topography, vol. 33, no. 1, pp. 48–59, 2020. @article{Schneider2020, Visual attention can be spatially oriented, even in the absence of saccadic eye-movements, to facilitate the processing of incoming visual information. One behavioral proxy for this so-called covert visuospatial attention (CVSA) is the validity effect (VE): the reduction in reaction time (RT) to visual stimuli at attended locations and the increase in RT to stimuli at unattended locations. At the electrophysiological level, one correlate of CVSA is the lateralization in the occipital α-band oscillations, resulting from α-power increases ipsilateral and decreases contralateral to the attended hemifield. While this α-band lateralization has been studied extensively using electroencephalography (EEG) or magnetoencephalography (MEG), little is known about whether it can be trained to improve CVSA behaviorally. In this cross-over sham-controlled study we used continuous real-time feedback of the occipital α-lateralization to modulate behavioral and electrophysiological markers of covert attention. Fourteen subjects performed a cued CVSA task, involving fast responses to covertly attended stimuli. During real-time feedback runs, trials extended in time if subjects reached states of high α-lateralization. Crucially, the ongoing α-lateralization was fed back to the subject by changing the color of the attended stimulus. We hypothesized that this ability to self-monitor lapses in CVSA, and thus to refocus attention accordingly, would lead to improved CVSA performance during subsequent testing. We probed the effect of the intervention by evaluating the pre-post changes in the VE and the α-lateralization.
Behaviorally, results showed a significant interaction between feedback (experimental–sham) and time (pre-post) for the validity effect, with an increase in performance only for the experimental condition. We did not find corresponding pre-post changes in the α-lateralization. Our findings suggest that EEG-based real-time feedback is a promising tool to enhance the level of covert visuospatial attention, especially with respect to behavioral changes. This opens up the exploration of applications of the proposed training method for the cognitive rehabilitation of attentional disorders. |
Eelke Spaak; Floris P. de Lange Hippocampal and prefrontal theta-band mechanisms underpin implicit spatial context learning Journal Article In: Journal of Neuroscience, vol. 40, no. 1, pp. 191–202, 2020. @article{Spaak2020, Humans can rapidly and seemingly implicitly learn to predict typical locations of relevant items when those items are encountered in familiar spatial contexts. Two important questions remain, however, concerning this type of learning: (1) which neural structures and mechanisms are involved in acquiring and exploiting such contextual knowledge? (2) Is this type of learning truly implicit and unconscious? We now answer both these questions after closely examining behavior and recording neural activity using MEG while observers (male and female) were acquiring and exploiting statistical regularities. Computational modeling of behavioral data suggested that, after repeated exposures to a spatial context, participants' behavior was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that hippocampus and prefrontal cortex (PFC) were involved in the task and furthermore revealed a striking dissociation: only the initial learning phase was associated with hippocampal theta band activity, while the subsequent exploitation phase showed a shift in theta band activity to the PFC. Intriguingly, the behavioral benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. Together, these findings demonstrate that (1a) hippocampus and PFC play complementary roles in the implicit, unconscious learning and exploitation of spatial statistical regularities; (1b) these mechanisms are implemented in the theta frequency band; and (2) contextual knowledge can indeed be acquired unconsciously, and awareness of such knowledge can even interfere with the exploitation thereof. |
Davide Tabarelli; Christian Keitel; Joachim Gross; Daniel Baldauf Spatial attention enhances cortical tracking of quasi-rhythmic visual stimuli Journal Article In: NeuroImage, vol. 208, pp. 116444, 2020. @article{Tabarelli2020, Successfully interpreting and navigating our natural visual environment requires us to track its dynamics constantly. Additionally, we focus our attention on behaviorally relevant stimuli to enhance their neural processing. Little is known, however, about how sustained attention affects the ongoing tracking of stimuli with rich natural temporal dynamics. Here, we used MRI-informed source reconstructions of magnetoencephalography (MEG) data to map to what extent various cortical areas track concurrent continuous quasi-rhythmic visual stimulation. Further, we tested how top-down visuo-spatial attention influences this tracking process. Our bilaterally presented quasi-rhythmic stimuli covered a dynamic range of 4–20 Hz, subdivided into three distinct bands. As an experimental control, we also included strictly rhythmic stimulation (10 vs 12 Hz). Using a spectral measure of brain-stimulus coupling, we were able to track the neural processing of left vs. right stimuli independently, even while fluctuating within the same frequency range. The fidelity of neural tracking depended on the stimulation frequencies, decreasing for higher frequency bands. Both attended and non-attended stimuli were tracked beyond early visual cortices, in ventral and dorsal streams depending on the stimulus frequency. In general, tracking improved with the deployment of visuo-spatial attention to the stimulus location. Our results provide new insights into how human visual cortices process concurrent dynamic stimuli and provide a potential mechanism – namely increasing the temporal precision of tracking – for boosting the neural representation of attended input. |
L. Tankelevitch; E. Spaak; M. F. S. Rushworth; M. G. Stokes Previously reward-associated stimuli capture spatial attention in the absence of changes in the corresponding sensory representations as measured with MEG Journal Article In: Journal of Neuroscience, vol. 40, no. 26, pp. 5033–5050, 2020. @article{Tankelevitch2020, Studies of selective attention typically consider the role of task goals or physical salience, but recent work has shown that attention can also be captured by previously reward-associated stimuli, even when these are no longer relevant (i.e., value-driven attentional capture; VDAC). We used magnetoencephalography (MEG) to investigate how previously reward-associated stimuli are processed, the time-course of reward history effects, and how this relates to the behavioural effects of VDAC. Male and female human participants first completed a reward learning task to establish stimulus-reward associations. Next, we measured attentional capture in a separate task by presenting these stimuli in the absence of reward contingency, and probing their effects on the processing of separate target stimuli presented at different time lags. Using time-resolved multivariate pattern analysis, we found that learned value modulated the spatial selection of previously rewarded stimuli in occipital, inferior temporal, and parietal cortex from ~260 ms after stimulus onset. This value modulation was related to the strength of participants' behavioural VDAC effect and persisted into subsequent target processing. Furthermore, we found a spatially invariant value signal from ~340 ms. Importantly, learned value did not influence the neural discriminability of the previously rewarded stimuli in visual cortical areas. Our results suggest that VDAC is underpinned by learned value signals which modulate spatial selection throughout posterior visual and parietal cortex. We further suggest that VDAC can occur in the absence of changes in early visual cortical processing. Significance statement: Attention is our ability to focus on relevant information at the expense of irrelevant information.
It can be affected by previously learned but currently irrelevant stimulus-reward associations, a phenomenon termed “value-driven attentional capture” (VDAC). The neural mechanisms underlying VDAC remain unclear. It has been speculated that reward learning induces visual cortical plasticity which modulates early visual processing to capture attention. Although we find that learned value modulates spatial attention in sensory brain areas, an effect which correlates with VDAC, we find no relevant signatures of visual cortical plasticity. |
Quan Wan; Ying Cai; Jason Samaha; Bradley R. Postle Tracking stimulus representation across a 2-back visual working memory task Journal Article In: Royal Society Open Science, vol. 7, pp. 1–18, 2020. @article{Wan2020, How does the neural representation of visual working memory content vary with behavioural priority? To address this, we recorded electroencephalography (EEG) while subjects performed a continuous-performance 2-back working memory task with oriented-grating stimuli. We tracked the transition of the neural representation of an item (n) from its initial encoding, to the status of 'unprioritized memory item' (UMI), and back to 'prioritized memory item', with multivariate inverted encoding modelling. Results showed that the representational format was remapped from its initially encoded format into a distinctive 'opposite' representational format when it became a UMI and then mapped back into its initial format when subsequently prioritized in anticipation of its comparison with item n + 2. Thus, contrary to the default assumption that the activity representing an item in working memory might simply get weaker when it is deprioritized, it may be that a process of priority-based remapping helps to protect remembered information when it is not in the focus of attention. |
Yongchun Wang; Meilin Di; Jingjing Zhao; Saisai Hu; Zhao Yao; Yonghui Wang Attentional modulation of unconscious inhibitory visuomotor processes: An EEG study Journal Article In: Psychophysiology, vol. 57, no. 8, pp. e13561, 2020. @article{Wang2020k, The present study examined the role of attention in unconscious inhibitory visuomotor processes in three experiments that employed a mixed paradigm including a spatial cueing task and a masked prime task. Spatial attention to the prime was manipulated. Specifically, the valid-cue condition (in which the prime obtained more attentional resources) and invalid-cue condition (in which the prime obtained fewer attentional resources) were included. The behavioral results showed that the negative compatibility effect (a behavioral indicator of inhibitory visuomotor processing) in the valid-cue condition was larger than that in the invalid-cue condition. Most importantly, lateralized readiness potential results indicated that the prime-related activation was stronger in the valid-cue condition than in the invalid-cue condition and that the subsequent inhibition in the compatible trials was also stronger in the valid-cue condition than in the invalid-cue condition. In line with the proposed attentional modulation model, unconscious visuomotor inhibitory processing is modulated by attentional resources. |
Maximilian F. A. Hauser; Stefanie Heba; Tobias Schmidt-Wilcke; Martin Tegenthoff; Denise Manahan-Vaughan Cerebellar-hippocampal processing in passive perception of visuospatial change: An ego- and allocentric axis? Journal Article In: Human Brain Mapping, vol. 41, no. 5, pp. 1153–1166, 2020. @article{Hauser2020, In addition to its role in visuospatial navigation and the generation of spatial representations, in recent years, the hippocampus has been proposed to support perceptual processes. This is especially the case where high-resolution details, in the form of fine-grained relationships between features such as angles between components of a visual scene, are involved. An unresolved question is how, in the visual domain, perspective-changes are differentiated from allocentric changes to these perceived feature relationships, both of which may be argued to involve the hippocampus. We conducted functional magnetic resonance imaging of the brain response (corroborated through separate event-related potential source-localization) in a passive visuospatial oddball-paradigm to examine to what extent the hippocampus and other brain regions process changes in perspective, or configuration of abstract, three-dimensional structures. We observed activation of the left superior parietal cortex during perspective shifts, and right anterior hippocampus in configuration-changes. Strikingly, we also found the cerebellum to differentiate between the two, in a way that appeared tightly coupled to hippocampal processing. These results point toward a relationship between the cerebellum and the hippocampus that occurs during perception of changes in visuospatial information that has previously only been reported with regard to visuospatial navigation. |
Simone G. Heideman; Andrew J. Quinn; Mark W. Woolrich; Freek van Ede; Anna C. Nobre Dissecting beta-state changes during timed movement preparation in Parkinson's disease Journal Article In: Progress in Neurobiology, vol. 184, pp. 101731, 2020. @article{Heideman2020, An emerging perspective describes beta-band (15−28 Hz) activity as consisting of short-lived high-amplitude events that only appear sustained in conventional measures of trial-average power. This has important implications for characterising abnormalities observed in beta-band activity in disorders like Parkinson's disease. Measuring parameters associated with beta-event dynamics may yield more sensitive measures, provide more selective diagnostic neural markers, and provide greater mechanistic insight into the breakdown of brain dynamics in this disease. Here, we used magnetoencephalography in eighteen Parkinson's disease participants off dopaminergic medication and eighteen healthy control participants to investigate beta-event dynamics during timed movement preparation. We used the Hidden Markov Model to classify event dynamics in a data-driven manner and derived three parameters of beta events: (1) beta-state amplitude, (2) beta-state lifetime, and (3) beta-state interval time. Of these, changes in beta-state interval time explained the overall decreases in beta power during timed movement preparation and uniquely captured the impairment in such preparation in patients with Parkinson's disease. Thus, the increased granularity of the Hidden Markov Model analysis (compared with conventional analysis of power) provides increased sensitivity and suggests a possible reason for impairments of timed movement preparation in Parkinson's disease. |
James E. Hoffman; Minwoo Kim; Matt Taylor; Kelsey Holiday Emotional capture during emotion-induced blindness is not automatic Journal Article In: Cortex, vol. 122, pp. 140–158, 2020. @article{Hoffman2020, The present research used behavioral and event-related brain potentials (ERP) measures to determine whether emotional capture is automatic in the emotion-induced blindness (EIB) paradigm. The first experiment varied the priority of performing two concurrent tasks: identifying a negative or neutral picture appearing in a rapid serial visual presentation (RSVP) stream of pictures and multiple object tracking (MOT). Results showed that increased attention to the MOT task resulted in decreased accuracy for identifying both negative and neutral target pictures accompanied by decreases in the amplitude of the P3b component. In contrast, the early posterior negativity (EPN) component elicited by negative pictures was unaffected by variations in attention. Similarly, there was a decrement in MOT performance for dual-task versus single task conditions but no effect of picture type (negative vs neutral) on MOT accuracy which isn't consistent with automatic emotional capture of attention. However, the MOT task might simply be insensitive to brief interruptions of attention. The second experiment used a more sensitive reaction time (RT) measure to examine this possibility. Results showed that RT to discriminate a gap appearing in a tracked object was delayed by the simultaneous appearance of to-be-ignored distractor pictures even though MOT performance was once again unaffected by the distractor. Importantly, the RT delay was the same for both negative and neutral distractors suggesting that capture was driven by physical salience rather than emotional salience of the distractors. Despite this lack of emotional capture, the EPN component, which is thought to reflect emotional capture, was still present. 
We suggest that the EPN doesn't reflect capture but rather downstream effects of attention, including object recognition. These results show that capture by emotional pictures in EIB can be suppressed when attention is engaged in another difficult task. The results have important implications for understanding capture effects in EIB. |
Leyla Isik; Anna Mynick; Dimitrios Pantazis; Nancy Kanwisher The speed of human social interaction perception Journal Article In: NeuroImage, vol. 215, pp. 116844, 2020. @article{Isik2020, The ability to perceive others' social interactions, here defined as the directed contingent actions between two or more people, is a fundamental part of human experience that develops early in infancy and is shared with other primates. However, the neural computations underlying this ability remain largely unknown. Is social interaction recognition a rapid feedforward process or a slower post-perceptual inference? Here we used magnetoencephalography (MEG) decoding to address this question. Subjects in the MEG viewed snapshots of visually matched real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. The presence versus absence of a social interaction could be read out from subjects' MEG data spontaneously, even while subjects performed an orthogonal task. This readout generalized across different people and scenes, revealing abstract representations of social interactions in the human brain. These representations, however, did not come online until quite late, at 300 ms after image onset, well after feedforward visual processes. In a second experiment, we found that social interaction readout still occurred at this same late latency even when subjects performed an explicit task detecting social interactions. We further showed that MEG responses distinguished between different types of social interactions (mutual gaze vs joint attention) even later, around 500 ms after image onset. Taken together, these results suggest that the human brain spontaneously extracts information about others' social interactions, but does so slowly, likely relying on iterative top-down computations. |
Stephanie J. Kayser; Christoph Kayser Shared physiological correlates of multisensory and expectation-based facilitation Journal Article In: eNeuro, vol. 7, no. 2, pp. 1–13, 2020. @article{Kayser2020, Perceptual performance in a visual task can be enhanced by simultaneous multisensory information, but can also be enhanced by a symbolic or amodal cue inducing a specific expectation. That similar benefits can arise from multisensory information and within-modality expectation raises the question of whether the underlying neurophysiological processes are the same or distinct. We investigated this by comparing the influence of the following three types of auxiliary probabilistic cues on visual motion discrimination in humans: (1) acoustic motion, (2) a premotion visual symbolic cue, and (3) a postmotion symbolic cue. Using multivariate analysis of the EEG data, we show that both the multisensory and preceding visual symbolic cue enhance the encoding of visual motion direction as reflected by cerebral activity arising from occipital regions ~200–400 ms post-stimulus onset. This suggests a common or overlapping physiological correlate of cross-modal and intramodal auxiliary information, pointing to a neural mechanism susceptible to both multisensory and more abstract probabilistic cues. We also asked how prestimulus activity shapes the cue–stimulus combination and found a differential influence on the cross-modal and intramodal combination: while alpha power modulated the relative weight of visual motion and the acoustic cue, it did not modulate the behavioral influence of a visual symbolic cue, pointing to differences in how prestimulus activity shapes the combination of multisensory and abstract cues with task-relevant information. |
Florent Meyniel Brain dynamics for confidence-weighted learning Journal Article In: PLoS Computational Biology, vol. 16, no. 6, pp. e1007935, 2020. @article{Meyniel2020, Learning in a changing, uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use a confidence-weighting principle to regulate learning: For a given surprise, the update is smaller when the confidence about the prediction was higher. Prior behavioral evidence indicates that human learning adheres to this confidence-weighting principle. Here, we explored the human brain dynamics subtending the confidence-weighting of learning using magnetoencephalography (MEG). During our volatile probability learning task, subjects' confidence reports conformed with Bayesian inference. MEG revealed several stimulus-evoked brain responses whose amplitude reflected surprise, and some of them were further shaped by confidence: Surprise amplified the stimulus-evoked response whereas confidence dampened it. Confidence about predictions also modulated several aspects of the brain state: Pupil-linked arousal and beta-range (15-30 Hz) oscillations. The brain state in turn modulated specific stimulus-evoked surprise responses following the confidence-weighting principle. Our results thus indicate that there exist, in the human brain, signals reflecting surprise that are dampened by confidence in a way that is appropriate for learning according to Bayesian inference. They also suggest a mechanism for confidence-weighted learning: Confidence about predictions would modulate intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations. |
Jonathan Mirault; Jeremy Yeaton; Fanny Broqua; Stéphane Dufau; Phillip J. Holcomb; Jonathan Grainger Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study Journal Article In: Psychophysiology, vol. 57, no. 8, pp. e13553, 2020. @article{Mirault2020, When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that this should be the case. Here we focus on perhaps the simplest and the strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded electroencephalogram and eye-movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important is that we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1, while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel. |
Kieran S. Mohr; Niamh Carr; Rachel Georgel; Simon P. Kelly Modulation of the earliest component of the human VEP by spatial attention: An investigation of task demands Journal Article In: Cerebral Cortex Communications, pp. 1–22, 2020. @article{Mohr2020, Spatial attention modulations of initial afferent activity in area V1, indexed by the first component “C1” of the human visual evoked potential, are rarely found. It has thus been suggested that early modulation is induced only by special task conditions, but what these conditions are remains unknown. Recent failed replications—findings of no C1 modulation using a certain task that had previously produced robust modulations—present a strong basis for examining this question. We ran 3 experiments, the first to more exactly replicate the stimulus and behavioral conditions of the original task, and the second and third to manipulate 2 key factors that differed in the failed replication studies: the provision of informative performance feedback, and the degree to which the probed stimulus features matched those facilitating target perception. Although there was an overall significant C1 modulation of 11%, individually, only experiments 1 and 2 showed reliable effects, underlining that the modulations do occur but not consistently. Better feedback induced greater P1, but not C1, modulations. Target-probe feature matching had an inconsistent influence on modulation patterns, with behavioral performance differences and signal-overlap analyses suggesting interference from extrastriate modulations as a potential cause. |
Anna M. Monk; Gareth R. Barnes; Eleanor A. Maguire The effect of object type on building scene imagery — An MEG study Journal Article In: Frontiers in Human Neuroscience, vol. 14, pp. 592175, 2020. @article{Monk2020, Previous studies have reported that some objects evoke a sense of local three-dimensional space (space-defining; SD), while others do not (space-ambiguous; SA), despite being imagined or viewed in isolation devoid of a background context. Moreover, people show a strong preference for SD objects when given a choice of objects with which to mentally construct scene imagery. When deconstructing scenes, people retain significantly more SD objects than SA objects. It, therefore, seems that SD objects might enjoy a privileged role in scene construction. In the current study, we leveraged the high temporal resolution of magnetoencephalography (MEG) to compare the neural responses to SD and SA objects while they were being used to build imagined scene representations, as this has not been examined before using neuroimaging. On each trial, participants gradually built a scene image from three successive auditorily-presented object descriptions and an imagined 3D space. We then examined the neural dynamics associated with the points during scene construction when either SD or SA objects were being imagined. We found that SD objects elicited theta changes relative to SA objects in two brain regions, the right ventromedial prefrontal cortex (vmPFC) and the right superior temporal gyrus (STG). Furthermore, using dynamic causal modeling, we observed that the vmPFC drove STG activity. These findings may indicate that SD objects serve to activate schematic and conceptual knowledge in vmPFC and STG upon which scene representations are then built. |
Christina Mühlberger; Johannes Klackl; Sandra Sittenthaler; Eva Jonas The approach-motivational nature of reactance – Evidence from asymmetrical frontal cortical activation Journal Article In: Motivation Science, vol. 6, no. 3, pp. 203–220, 2020. @article{Muehlberger2020, Research has demonstrated that freedom restrictions evoke psychological reactance – a strong motivation to take action to regain the threatened freedom. We hypothesized that the underlying motivational state of reactance is approach-related. We used either a behavioral measure (line bisection task) or electroencephalography to assess relative left frontal brain activation, an indicator of approach motivation. We found increased approach motivation following imagined (Experiment 1), remembered (Experiment 2), and induced (Experiment 3) freedom threats. The results additionally revealed that only a self-experienced freedom threat, and not a vicarious freedom threat, resulted in approach motivation. Overall, the findings suggest that reactance is approach-motivational. |
Paul S. Muhle-Karbe; Nicholas E. Myers; Mark G. Stokes A hierarchy of functional states in working memory Journal Article In: Journal of Neuroscience, vol. 41, no. 20, pp. 4461–4475, 2020. @article{MuhleKarbe2020, Extensive research has examined how information is maintained in working memory (WM), but it remains unknown how WM is used to guide behaviour. We addressed this question using a combination of electroencephalography, pattern analyses, and cognitive modelling with a task that required maintenance of two WM items and flexible priority shifts between them. This enabled us to discern neural states coding for immediately and prospectively task-relevant items, and to examine how these states contribute to WM-based decisions. We identified two qualitatively different neural states: a functionally latent state encoded both items and was unrelated to performance on the current trial, but was predictive of performance accuracy over longer time scales. In contrast, a functionally active state encoded only the immediately task-relevant item and closely tracked the quality of evidence integration on the current trial. These results delineate a hierarchy of functional states whereby latent memories supporting general maintenance are transformed into active decision-circuits to guide WM-based behaviour. |
Taihei Ninomiya; Atsushi Noritake; Kenta Kobayashi; Masaki Isoda A causal role for frontal cortico-cortical coordination in social action monitoring Journal Article In: Nature Communications, vol. 11, pp. 5233, 2020. @article{Ninomiya2020, Decision-making via monitoring others' actions is a cornerstone of interpersonal exchanges. Although the ventral premotor cortex (PMv) and the medial prefrontal cortex (MPFC) are cortical nodes in social brain networks, the two areas are rarely concurrently active in neuroimaging, inviting the hypothesis that they are functionally independent. Here we show in macaques that the ability of the MPFC to monitor others' actions depends on input from the PMv. We found that delta-band coherence between the two areas emerged during action execution and action observation. Information flow especially in the delta band increased from the PMv to the MPFC as the biological nature of observed actions increased. Furthermore, selective blockade of the PMv-to-MPFC pathway using a double viral vector infection technique impaired the processing of observed, but not executed, actions. These findings demonstrate that coordinated activity in the PMv-to-MPFC pathway has a causal role in social action monitoring. |
José P. Ossandón; Peter König; Tobias Heed No evidence for a role of spatially modulated α-band activity in tactile remapping and short-latency, overt orienting behavior Journal Article In: Journal of Neuroscience, vol. 40, no. 47, pp. 9088–9102, 2020. @article{Ossandon2020, Oscillatory α-band activity is commonly associated with spatial attention and multisensory prioritization. It has also been suggested to reflect the automatic transformation of tactile stimuli from a skin-based, somatotopic reference frame into an external one. Previous research has not convincingly separated these two possible roles of α-band activity. Previous experimental paradigms have used artificially long delays between tactile stimuli and behavioral responses to aid relating oscillatory activity to these different events. However, this strategy potentially blurs the temporal relationship of α-band activity relative to behavioral indicators of tactile-spatial transformations. Here, we assessed α-band modulation with massive univariate deconvolution, an analysis approach that disentangles brain signals overlapping in time and space. Thirty-one male and female human participants performed a delay-free, visual search task in which saccade behavior was unrestricted. A tactile cue to uncrossed or crossed hands was either informative or uninformative about visual target location. α-Band suppression following tactile stimulation was lateralized relative to the stimulated hand over central-parietal electrodes but relative to its external location over parieto-occipital electrodes. α-Band suppression reflected external touch location only after informative cues, suggesting that posterior α-band lateralization does not index automatic tactile transformation. Moreover, α-band suppression occurred at the time of, or after, the production of the saccades guided by tactile stimulation. These findings challenge the idea that α-band activity is directly involved in tactile-spatial transformation and suggest instead that it reflects delayed, supramodal processes related to attentional reorienting. |
Kirsten C. S. Adam; Lillian Chang; Nicole Rangan; John T. Serences In: Journal of Cognitive Neuroscience, vol. 33, no. 4, pp. 695–724, 2020. @article{Adam2020a, Feature-based attention is the ability to selectively attend to a particular feature (e.g., attend to red but not green items while looking for the ketchup bottle in your refrigerator), and steady-state visually evoked potentials (SSVEPs) measured from the human EEG signal have been used to track the neural deployment of feature-based attention. Although many published studies suggest that we can use trial-by-trial cues to enhance relevant feature information (i.e., greater SSVEP response to the cued color), there is ongoing debate about whether participants may likewise use trial-by-trial cues to voluntarily ignore a particular feature. Here, we report the results of a preregistered study in which participants either were cued to attend or to ignore a color. Counter to prior work, we found no attention-related modulation of the SSVEP response in either cue condition. However, positive control analyses revealed that participants paid some degree of attention to the cued color (i.e., we observed a greater P300 component to targets in the attended vs. the unattended color). In light of these unexpected null results, we conducted a focused review of methodological considerations for studies of feature-based attention using SSVEPs. In the review, we quantify potentially important stimulus parameters that have been used in the past (e.g., stimulation frequency, trial counts) and we discuss the potential importance of these and other task factors (e.g., feature-based priming) for SSVEP studies. |