EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2023 |
Tilo Strobach; Jens Kürten; Lynn Huestegge Benefits of repeated alternations – Task-specific vs. task-general sequential adjustments of dual-task order control Journal Article In: Acta Psychologica, vol. 236, pp. 1–16, 2023. @article{Strobach2023, An important cognitive requirement in multitasking is the decision of how multiple tasks should be temporally scheduled (task order control). Specifically, task order switches (vs. repetitions) yield performance costs (i.e., task-order switch costs), suggesting that task order scheduling is a vital part of configuring a task set. Recently, it has been shown that this process takes specific task-related characteristics into account: task order switches were easier when switching to a preferred (vs. non-preferred) task order. Here, we ask whether another determinant of task order control, namely the phenomenon that a task order switch in a previous trial facilitates a task order switch in a current trial (i.e., a sequential modulation of the task order switch effect), also takes task-specific characteristics into account. Based on three experiments involving task order switches between a preferred (dominant oculomotor task prior to non-dominant manual/pedal task) and a non-preferred (vice versa) order, we replicated the finding that task order switching (in Trial N) is facilitated after a previous switch (vs. repetition in Trial N - 1) in task order. There was no substantial evidence for a significant difference when switching to the preferred vs. non-preferred order, either overall or in separate analyses of the dominant oculomotor task and the non-dominant manual task. This indicates different mechanisms underlying the control of immediate task order configuration (indexed by task order switch costs) and the sequential modulation of these costs based on the task order transition type in the previous trial. |
Moritz Stolte; Leon Kraus; Ulrich Ansorge Visual attentional guidance during smooth pursuit eye movements: Distractor interference is independent of distractor-target similarity Journal Article In: Psychophysiology, vol. 60, no. 12, pp. 1–11, 2023. @article{Stolte2023, In the current study, we used abrupt-onset distractors similar and dissimilar in luminance to the target of a smooth pursuit eye-movement to test if abrupt-onset distractors capture attention in a top-down or bottom-up fashion while the eyes track a moving object. Abrupt onset distractors were presented at different positions relative to the current position of a pursuit target during the closed-loop phase of smooth pursuit. Across experiments, we varied the duration of the distractors, their motion direction, and task-relevance. We found that abrupt-onset distractors decreased the gain of horizontally directed smooth-pursuit eye-movements. This effect, however, was independent of the similarity in luminance between distractor and target. In addition, distracting effects on horizontal gain were the same, regardless of the exact duration and position of the distractors, suggesting that capture was relatively unspecific and short-lived (Experiments 1 and 2). This was different with distractors moving in a vertical direction, perpendicular to the horizontally moving target. In line with past findings, these distractors caused suppression of vertical gain (Experiment 3). Finally, making distractors task-relevant by asking observers to report distractor positions increased the pursuit gain effect of the distractors. This effect was also independent of target-distractor similarity (Experiment 4). 
In conclusion, the results suggest that a strong location signal exerted by the pursuit targets led to very brief and largely location-unspecific interference through the abrupt onsets and that this interference was bottom-up, implying that the control of smooth pursuit was independent of other target features besides its motion signal. |
Gabriel M. Stine; Eric M. Trautmann; Danique Jeurissen; Michael N. Shadlen A neural mechanism for terminating decisions Journal Article In: Neuron, vol. 111, no. 16, pp. 2601–2613, 2023. @article{Stine2023, The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are established in association cortex, but the site and mechanism of termination are unknown. Here, we show that the superior colliculus (SC) plays a causal role in terminating decisions, and we provide evidence for a mechanism by which this occurs. We recorded simultaneously from neurons in the lateral intraparietal area (LIP) and SC while monkeys made perceptual decisions. Despite similar trial-averaged activity, we found distinct single-trial dynamics in the two areas: LIP displayed drift-diffusion dynamics and SC displayed bursting dynamics. We hypothesized that the bursts manifest a threshold mechanism applied to signals represented in LIP to terminate the decision. Consistent with this hypothesis, SC inactivation produced behavioral effects diagnostic of an impaired threshold sensor and prolonged the buildup of activity in LIP. The results reveal the transformation from deliberation to commitment. |
Pnina Stern; Tamar Kolodny; Shlomit Tsafrir; Galit Cohen; Lilach Shalev In: Journal of Attention Disorders, vol. 27, no. 7, pp. 757–776, 2023. @article{Stern2023, Objective: The present study evaluated the near (attention) and far (reading, ADHD symptoms, learning, and quality of life) transfer effects of a Computerized Progressive Attention Training (CPAT) versus Mindfulness Based Stress Reduction (MBSR) practice among adults with ADHD compared to a passive group. Method: Fifty-four adults participated in a non-fully randomized controlled trial. Participants in the intervention groups completed eight 2-hr weekly training sessions. Outcomes were assessed before, immediately after, and 4 months post-intervention, using objective tools: attention tests, eye-tracker, and subjective questionnaires. Results: Both interventions showed near-transfer to various attention functions. The CPAT produced far-transfer effects to reading, ADHD symptoms, and learning while the MBSR improved the self-perceived quality of life. At follow-up, all improvements except for ADHD symptoms were preserved in the CPAT group. The MBSR group showed mixed preservations. Conclusion: Both interventions have beneficial effects, however only the CPAT group exhibited improvements compared to the passive group. |
Maximilian Stefani; Marian Sauter Relative contributions of oculomotor capture and disengagement to distractor-related dwell times in visual search Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–10, 2023. @article{Stefani2023, In visual search, attention is reliably captured by salient distractors and must be actively disengaged from them to reach the target. In such attentional capture paradigms, dwell time is measured on distractors that appear in the periphery (e.g., on a random location on a circle). Distractor-related dwell time is typically thought to be largely due to stimulus-driven processes related to oculomotor capture dynamics. However, the extent to which oculomotor capture and oculomotor disengagement contribute to distractor dwell time has not been known because standard attentional capture paradigms cannot decouple these processes. In the present study, we used a novel paradigm combining classical attentional capture trials and delayed disengagement trials. We measured eye movements to dissociate the capture and disengagement mechanisms underlying distractor dwell time. We found that only two-thirds of distractor dwell time (~ 52 ms) can be explained by oculomotor capture, while one-third is explained by oculomotor disengagement (~ 18 ms), which has been neglected or underestimated in previous studies. Thus, oculomotor disengagement (goal-directed) processes play a more significant role in distractor dwell times than previously thought. |
Anna Cornelia Stausberg Who are you to judge? Investigating narcissism through pupil dilation at witness testimonials Journal Article In: Psychology, vol. 14, no. 02, pp. 144–157, 2023. @article{Stausberg2023, Various literature has explored narcissistic behaviour and its distinct emotional and aggressive nature. However, there is a significant gap in research exploring the physiological differences of individuals classified as having high narcissistic traits and tendencies. Pupillometry is often utilised to measure distinct differences in emotional arousal, allowing excellent insight into physiological responses to the stimuli presented. This exploratory study was set up to investigate pupillometry responses to auditory stimulation in narcissistic versus control participants. Findings were consistent with previous research; however, various limitations hindered significant findings. This pilot study is a guideline for future research as it is a first attempt at exploring a physiological relationship between emotionality and narcissism in the context of a criminal hearing. |
Priyanka Srivastava; Saskia Jaarsveld; Kishan Sangani Verbal-analytical rather than visuo-spatial Raven's puzzle solving favors Raven's-like puzzle generation Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–13, 2023. @article{Srivastava2023, Raven's advanced progressive matrices (APM) comprise two types of representational codes, namely visuo-spatial and verbal-analytical, that are used to solve APM puzzles. Studies using analytical, behavioral, and imaging methods have supported the multidimensional perspectives of APM puzzles. The visuo-spatial code is expected to recruit operations more responsive to visual perception tasks. In contrast, the verbal-analytical code is expected to use operations more responsive to logical reasoning tasks and may entail different cognitive strategies. Acknowledging the different representational codes used in APM puzzle-solving is critical for a better understanding of APM performance and its relationship with other tasks, especially creative reasoning. We used the eye-tracking method to investigate the role of two representational codes, visuo-spatial and verbal-analytical, in strategies involved in solving APM puzzles and in generating an APM-like puzzle by using a creative-reasoning task (CRT). Participants took longer to complete the verbal-analytical than the visuo-spatial puzzles. In addition, verbal-analytical puzzles showed higher progressive and regressive saccade counts than visuo-spatial puzzles, suggesting that more response-elimination than constructive-matching strategies were employed while solving verbal-analytical puzzles. We observed higher CRT scores when the CRT followed verbal-analytical (Mdn = 84) than visuo-spatial (Mdn = 73) APM puzzles, suggesting that puzzle-solving-specific strategies affect puzzle-creating task performance.
The advantage of verbal-analytical over visuo-spatial puzzle-solving has been discussed in light of shared cognitive processing between APM puzzle-solving and APM-like puzzle-creating task performance. |
Sybren Spit; Andreea Geambașu; Daan Renswoude; Elma Blom; Paula Fikkert; Sabine Hunnius; Caroline Junge; Josje Verhagen; Ingmar Visser; Frank Wijnen; Clara C. Levelt Robustness of the cognitive gains in 7-month-old bilingual infants: A close multi-center replication of Kovács and Mehler (2009) Journal Article In: Developmental Science, vol. 26, no. 6, pp. 1–16, 2023. @article{Spit2023, We present an exact replication of Experiment 2 from Kovács and Mehler's 2009 study, which showed that 7-month-old infants who are raised bilingually exhibit a cognitive advantage. In the experiment, a sound cue, following an AAB or ABB pattern, predicted the appearance of a visual stimulus on the screen. The stimulus appeared on one side of the screen for nine trials and then switched to the other side. In the original experiment, both mono- and bilingual infants anticipated where the visual stimulus would appear during pre-switch trials. However, during post-switch trials, only bilingual children anticipated that the stimulus would appear on the other side of the screen. The authors took this as evidence of a cognitive advantage. Using the exact same materials in combination with novel analysis techniques (Bayesian analyses, mixed effects modeling and cluster based permutation analyses), we assessed the robustness of these findings in four babylabs (N = 98). Our results did not replicate the original findings: although anticipatory looks increased slightly during post-switch trials for both groups, bilingual infants were not better switchers than monolingual infants. After the original experiment, we presented additional trials to examine whether infants associated sound patterns with cued locations, for which we did not find any evidence either. The results highlight the importance of multicenter replications and more fine-grained statistical analyses to better understand child development. 
Highlights: We carried out an exact replication across four baby labs of the high-impact study by Kovács and Mehler (2009). We did not replicate the findings of the original study, calling into question the robustness of the claim that bilingual infants have enhanced cognitive abilities. After the original experiment, we presented additional trials to examine whether infants correctly associated sound patterns with cued locations, for which we did not find any evidence. The use of novel analysis techniques (Bayesian analyses, mixed effects modeling and cluster based permutation analyses) allowed us to draw better-informed conclusions. |
John P. Spencer; Samuel H. Forbes; Sophie Naylor; Vinay P. Singh; Kiara Jackson; Sean Deoni; Madhuri Tiwari; Aarti Kumar Poor air quality is associated with impaired visual cognition in the first two years of life: A longitudinal investigation Journal Article In: eLife, vol. 12, pp. 1–19, 2023. @article{Spencer2023, Background: Poor air quality has been linked to cognitive deficits in children, but this relationship has not been examined in the first year of life when brain growth is at its peak. Methods: We measured in-home air quality focusing on particulate matter with diameter of <2.5 μm (PM2.5) and infants' cognition longitudinally in a sample of families from rural India. Results: Air quality was poorer in homes that used solid cooking materials. Infants from homes with poorer air quality showed lower visual working memory scores at 6 and 9 months of age and slower visual processing speed from 6 to 21 months when controlling for family socio-economic status. Conclusions: Thus, poor air quality is associated with impaired visual cognition in the first two years of life, consistent with animal studies of early brain development. We demonstrate for the first time an association between air quality and cognition in the first year of life using direct measures of in-home air quality and looking-based measures of cognition. Because indoor air quality was linked to cooking materials in the home, our findings suggest that efforts to reduce cooking emissions should be a key target for intervention. |
David Souto; Jennifer Sudkamp; Kyle Nacilla; Mateusz Bocian Tuning in to a hip-hop beat: Pursuit eye movements reveal processing of biological motion Journal Article In: Human Movement Science, vol. 91, pp. 1–12, 2023. @article{Souto2023, Smooth pursuit eye movements are mainly driven by motion signals to achieve their goal of reducing retinal motion blur. However, they can also show anticipation of predictable movement patterns. Oculomotor predictions may rely on an internal model of the target kinematics. Most investigations on the nature of those predictions have concentrated on simple stimuli, such as a decontextualized dot. However, biological motion is one of the most important visual stimuli in regulating human interaction and its perception involves integration of form and motion across time and space. Therefore, we asked whether there is a specific contribution of an internal model of biological motion in driving pursuit eye movements. Unlike previous contributions, we exploited the cyclical nature of walking to measure eye movement's ability to track the velocity oscillations of the hip of point-light walkers. We quantified the quality of tracking by cross-correlating pursuit and hip velocity oscillations. We found a robust correlation between signals, even along the horizontal dimension, where changes in velocity during the stepping cycle are very subtle. The inversion of the walker and the presentation of the hip-dot without context incurred the same additional phase lag along the horizontal dimension. These findings support the view that information beyond the hip-dot contributes to the prediction of hip kinematics that controls pursuit. We also found a smaller phase lag in inverted walkers for pursuit along the vertical dimension compared to upright walkers, indicating that inversion does not simply reduce prediction. 
We suggest that pursuit eye movements reflect the visual processing of biological motion and as such could provide an implicit measure of higher-level visual function. |
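The entry above quantifies tracking quality by cross-correlating pursuit-velocity and hip-velocity oscillations to recover a phase lag. A minimal sketch of that kind of analysis (illustrative only, not the authors' code; the 100 Hz sampling rate, 1 Hz gait frequency, and 50 ms lag are assumed values):

```python
import numpy as np

# Estimate the phase lag between a hip-velocity oscillation and a delayed
# pursuit-velocity signal via cross-correlation.
fs = 100.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 s of samples
true_lag_s = 0.05                # pursuit trails the hip by 50 ms (assumed)
hip = np.sin(2 * np.pi * 1.0 * t)                      # hip velocity oscillation
pursuit = np.sin(2 * np.pi * 1.0 * (t - true_lag_s))   # delayed copy

# Full cross-correlation of the two zero-mean signals; the lag at the peak
# is the estimated phase lag (positive = pursuit lags the hip).
xcorr = np.correlate(pursuit - pursuit.mean(), hip - hip.mean(), mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs             # lag axis in seconds
est_lag_s = lags[np.argmax(xcorr)]
print(f"estimated lag: {est_lag_s * 1000:.0f} ms")     # prints: estimated lag: 50 ms
```

In practice the signals would be measured eye and hip velocities rather than clean sinusoids, and the peak lag would be read off per condition (upright vs. inverted, context vs. hip-dot alone).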
Wenfang Song; Xinze Xie; Wenyue Huang; Qianqian Yu The design of automotive interior for Chinese young consumers based on Kansei engineering and eye-tracking technology Journal Article In: Applied Sciences, vol. 13, no. 19, pp. 1–20, 2023. @article{Song2023, A reasonable CMF (Color, Material and Finishing) design for automotive interiors can elicit positive psychophysical and affective responses from customers, providing an important guideline for automobile enterprises making differentiated products. However, current studies mainly focus on a single aspect of CMF design or a single style of automotive interior, and examine the design mainly through human visual perception. Systematic studies on the design and evaluation of automotive interior CMF are lacking, and more scientific evaluation of the design through human visual and tactile perception is required. Therefore, this study systematically designed automotive interior CMF based on Kansei engineering and eye-tracking technology. The study consists of five steps: (1) Product positioning: young Chinese consumers, new energy vehicles, and the bridge and seat were identified as the target users, the automotive model, and the key interior components, respectively. (2) Kansei physiological measurement: nine groups of Kansei words and thirty-three interior samples were selected, and the interior samples were rated on the Kansei words. (3) Kansei data analysis: three design types were determined, i.e., "hard and stately", "concise and technological" and "comfortable and safe". Meanwhile, the CMF design elements of the automotive interiors under the three styles were obtained through mathematical methods. (4) Design practice: four CMF samples under each design style (12 samples in total) were developed. (5) Kansei evaluation: the designs were evaluated using eye-tracking technology, and the sample that best satisfied users' Kansei requirements under each style was identified.
The proposed design process of automotive interior CMF may have great implications in the design of automotive interiors. |
Sangkyu Son; Joonsik Moon; Yee-Joon Kim; Min-Suk Kang; Joonyeol Lee Frontal-to-visual information flow explains predictive motion tracking Journal Article In: NeuroImage, vol. 269, pp. 1–11, 2023. @article{Son2023, Predictive tracking demonstrates our ability to maintain a line of vision on moving objects even when they temporarily disappear. Models of smooth pursuit eye movements posit that our brain achieves this ability by directly streamlining motor programming from continuously updated sensory motion information. To test this hypothesis, we obtained sensory motion representation from multivariate electroencephalogram activity while human participants covertly tracked a temporarily occluded moving stimulus with their eyes remaining stationary at the fixation point. The sensory motion representation of the occluded target evolves to its maximum strength at the expected timing of reappearance, suggesting a timely modulation of the internal model of the visual target. We further characterize the spatiotemporal dynamics of the task-relevant motion information by computing the phase gradients of slow oscillations. We discovered a predominant posterior-to-anterior phase gradient immediately after stimulus occlusion; however, at the expected timing of reappearance, the axis reverses the gradient, becoming anterior-to-posterior. The behavioral bias of smooth pursuit eye movements, which is a signature of the predictive process of the pursuit, was correlated with the posterior division of the gradient. These results suggest that the sensory motion area modulated by the prediction signal is involved in updating motor programming. |
Emma J. Solly; Meaghan Clough; Allison M. McKendrick; Owen B. White; Joanne Fielding Eye movement characteristics are not significantly influenced by psychiatric comorbidities in people with visual snow syndrome Journal Article In: Brain Research, vol. 1804, pp. 1–5, 2023. @article{Solly2023, Visual snow syndrome (VSS) is a neurological disorder primarily affecting the processing of visual information. Using ocular motor (OM) tasks, we previously demonstrated that participants with VSS exhibit altered saccade profiles consistent with visual attention impairments. We subsequently proposed that OM assessments may provide an objective measure of dysfunction in these individuals. However, VSS participants also frequently report significant psychiatric symptoms. Given that these symptoms have been shown previously to influence performance on OM tasks, the objective of this study was to investigate whether psychiatric symptoms (specifically: depression, anxiety, fatigue, sleep difficulties, and depersonalization) influence the OM metrics found to differ in VSS. Sixty-one VSS participants completed a battery of four OM tasks and a series of online questionnaires assessing psychiatric symptomology. We revealed no significant relationship between psychiatric symptoms and OM metrics on any of the tasks, demonstrating that in participants with VSS, differences in OM behaviour are a feature of the disorder. This supports the utility of OM assessment in characterising deficit in VSS, whether supporting a diagnosis or monitoring future treatment efficacy. |
Maverick E. Smith; Lester C. Loschky; Heather R. Bailey Eye movements and event segmentation: Eye movements reveal age-related differences in event model updating Journal Article In: Psychology and Aging, pp. 1–8, 2023. @article{Smith2023, People spontaneously segment continuous ongoing actions into sequences of events. Prior research found that gaze similarity and pupil dilation increase at event boundaries and that older adults segment more idiosyncratically than do young adults. We used eye tracking to explore age-related differences in gaze similarity (i.e., the extent to which individuals look at the same places at the same time as others) and pupil dilation at event boundaries. Older and young adults watched naturalistic videos of actors performing everyday activities while we tracked their eye movements. Afterward, they segmented the videos into subevents. Replicating prior work, we found that pupil size and gaze similarity increased at event boundaries. Thus, there were fewer individual differences in eye position at boundaries. We also found that young adults had higher gaze similarity than older adults throughout an entire video and at event boundaries. This study is the first to show that age-related differences in how people parse continuous everyday activities into events may be partially explained by individual differences in gaze patterns. Those who segment less normatively may do so because they fixate less normative regions. Results have implications for future interventions designed to improve encoding in older adults. |
Simona Skripkauskaite; Ioana Mihai; Kami Koldewyn Attentional bias towards social interactions during viewing of naturalistic scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 10, pp. 2303–2311, 2023. @article{Skripkauskaite2023, Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs) such as faces and bodies attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during the free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than background AOIs in the interactive pictures. In non-interactive pictures, however, dwell time did not differ between AOI type. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture and suggest our attention values social interactions beyond the presence of two people. |
Alice E. Skelton; Anna Franklin; Jenny M. Bosten Colour vision is aligned with natural scene statistics at 4 months of age Journal Article In: Developmental Science, vol. 26, no. 6, pp. 1–8, 2023. @article{Skelton2023, Visual perception in adult humans is thought to be tuned to represent the statistical regularities of natural scenes. For example, in adults, visual sensitivity to different hues shows an asymmetry which coincides with the statistical regularities of colour in the natural world. Infants are sensitive to statistical regularities in social and linguistic stimuli, but whether or not infants' visual systems are tuned to natural scene statistics is currently unclear. We measured colour discrimination in infants to investigate whether or not the visual system can represent chromatic scene statistics in very early life. Our results reveal the earliest association between vision and natural scene statistics that has yet been found: even as young as 4 months of age, colour vision is aligned with the distributions of colours in natural scenes. Research Highlights: We find infants' colour sensitivity is aligned with the distribution of colours in the natural world, as it is in adults. At just 4 months, infants' visual systems are tailored to extract and represent the statistical regularities of the natural world. This points to a drive for the human brain to represent statistical regularities even at a young age. |
Oindrila Sinha; Shirin Madarshahian; Ana Gómez-Granados; Morgan L. Paine; Isaac Kurtzer; Tarkeshwar Singh Smooth pursuit eye movements contribute to anticipatory force control during mechanical stopping of moving objects Journal Article In: Journal of Neurophysiology, vol. 129, no. 6, pp. 1293–1309, 2023. @article{Sinha2023, When stopping a closing door or catching an object, humans process the motion of inertial objects and apply reactive limb force over a short period to interact with them. One way in which the visual system processes motion is through extraretinal signals associated with smooth pursuit eye movements (SPEMs). We conducted three experiments to investigate how SPEMs contribute to anticipatory and reactive hand force modulation when interacting with a virtual object moving in the horizontal plane. We hypothesized that SPEM signals are critical for timing motor responses, anticipatory control of hand force, and task performance. Participants held a robotic manipulandum and attempted to stop an approaching simulated object by applying a force impulse (area under the force-time curve) that matched the object's virtual momentum upon contact. We manipulated the object's momentum by varying either its virtual mass or its speed under free gaze or constrained gaze conditions. We examined gaze variables, the timing of hand motor responses, anticipatory force control, and overall task performance. Our results show that when participants fixated a designated location instead of following objects with SPEM, anticipatory modulation of hand force before contact decreased. However, constraining gaze by asking participants to fixate did not seem to affect the timing of the motor response or the task performance. Together, these results suggest that SPEMs may be important for anticipatory control of hand force before contact and may also play a critical role in anticipatory stabilization of limb posture when humans interact with moving objects.
NEW & NOTEWORTHY We show for the first time that smooth pursuit eye movements (SPEMs) play a role in the modulation of anticipatory control of hand force to stabilize posture against contact forces. SPEMs are critical for tracking moving objects, facilitate processing motion of moving objects, and are impacted during aging and in many neurological disorders, such as Alzheimer's disease and multiple sclerosis. These results provide a novel basis to probe how changes in SPEMs could contribute to deficient limb motor control in older adults and patients with neurological disorders. |
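The impulse-matching task in the entry above rests on a simple physical identity: to stop the object, the area under the hand force-time curve must equal the object's momentum (p = m·v). A minimal numeric sketch, with a half-sine force pulse and illustrative values not taken from the study:

```python
import numpy as np

# Impulse (area under the force-time curve) required to stop a moving object
# must equal its momentum p = m * v. All values below are illustrative.
mass = 2.0                         # kg (virtual object mass, assumed)
speed = 0.5                        # m/s (object speed at contact, assumed)
momentum = mass * speed            # 1.0 kg*m/s

# A half-sine force pulse lasting 200 ms, scaled so its integral matches p.
dt = 0.001
t = np.arange(0, 0.2, dt)
pulse = np.sin(np.pi * t / 0.2)               # unit half-sine shape
area = pulse.sum() * dt                       # integral of the unit pulse
force = pulse * momentum / area               # rescale: area equals momentum

impulse = force.sum() * dt
print(f"impulse = {impulse:.3f} kg*m/s")      # prints: impulse = 1.000 kg*m/s
```

Varying `mass` or `speed` (as the study did with its virtual objects) changes the required impulse proportionally, which is why anticipatory scaling of the force pulse before contact matters for task success.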
Tarkeshwar Singh; John-Ross Rizzo; Cédrick Bonnet; Jennifer A. Semrau; Troy M. Herter Enhanced cognitive interference during visuomotor tasks may cause eye–hand dyscoordination Journal Article In: Experimental Brain Research, vol. 241, no. 2, pp. 547–558, 2023. @article{Singh2023, In complex visuomotor tasks, such as cooking, people make many saccades to continuously search for items before and during reaching movements. These tasks require cognitive resources, such as short-term memory and task-switching. Cognitive load may impact limb motor performance by increasing demands on mental processes, but mechanisms remain unclear. The Trail-Making Tests, in which participants sequentially search for and make reaching movements to 25 targets, consist of a simple numeric variant (Trails-A) and a cognitively challenging variant that requires alphanumeric switching (Trails-B). We have previously shown that stroke survivors and age-matched controls make many more saccades in Trails-B, and those increases in saccades are associated with decreases in speed and smoothness of reaching movements. However, it remains unclear how patients with neurological injuries, e.g., stroke, manage progressive increases in cognitive load during visuomotor tasks, such as the Trail-Making Tests. As a Trails-B trial progresses, switching between numbers and letters leads to progressive increases in cognitive load. Here, we show that stroke survivors with damage to frontoparietal areas and age-matched controls made more saccades and had longer fixations as they progressed through the 25 alphanumeric targets in Trails-B. Furthermore, when stroke survivors made saccades during reaching movements in Trails-B, their movement speed slowed down significantly. Thus, damage to frontoparietal areas serving cognitive motor functions may cause interference between oculomotor, visual, and limb motor functions, which could lead to significant disruptions in activities of daily living.
These findings augment our understanding of the mechanisms that underpin cognitive-motor interference during complex visuomotor tasks. |
Johannes J. D. Singer; Radoslaw M. Cichy; Martin N. Hebart The spatiotemporal neural dynamics of object recognition for natural images and line drawings Journal Article In: Journal of Neuroscience, vol. 43, no. 3, pp. 484–500, 2023. @article{Singer2023, Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. 
Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings. |
Olympia Simantiraki; Anita E. Wagner; Martin Cooke The impact of speech type on listening effort and intelligibility for native and non-native listeners Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–16, 2023. @article{Simantiraki2023, Listeners are routinely exposed to many different types of speech, including artificially-enhanced and synthetic speech, styles which deviate to a greater or lesser extent from naturally-spoken exemplars. While the impact of differing speech types on intelligibility is well-studied, it is less clear how such types affect cognitive processing demands, and in particular whether those speech forms with the greatest intelligibility in noise have a commensurately lower listening effort. The current study measured intelligibility, self-reported listening effort, and a pupillometry-based measure of cognitive load for four distinct types of speech: (i) plain i.e. natural unmodified speech; (ii) Lombard speech, a naturally-enhanced form which occurs when speaking in the presence of noise; (iii) artificially-enhanced speech which involves spectral shaping and dynamic range compression; and (iv) speech synthesized from text. In the first experiment a cohort of 26 native listeners responded to the four speech types in three levels of speech-shaped noise. In a second experiment, 31 non-native listeners underwent the same procedure at more favorable signal-to-noise ratios, chosen since second language listening in noise has a more detrimental effect on intelligibility than listening in a first language. For both native and non-native listeners, artificially-enhanced speech was the most intelligible and led to the lowest subjective effort ratings, while the reverse was true for synthetic speech. However, pupil data suggested that Lombard speech elicited the lowest processing demands overall. 
These outcomes indicate that the relationship between intelligibility and cognitive processing demands is not a simple inverse, but is mediated by speech type. The findings of the current study motivate the search for speech modification algorithms that are optimized for both intelligibility and listening effort. |
Mustafa Shirzad; James Van Riesen; Nikan Behboodpour; Matthew Heath 10-min exposure to a 2.5% hypercapnic environment increases cerebral blood flow but does not impact executive function Journal Article In: Life Sciences in Space Research, vol. 40, no. 2023, pp. 143–150, 2023. @article{Shirzad2023, Space travel and exploration are associated with increased ambient CO2 (i.e., a hypercapnic environment). Some work reported that the physiological changes (e.g., increased cerebral blood flow [CBF]) associated with a chronic hypercapnic environment contribute to a “space fog” that adversely impacts cognition and psychomotor performance, whereas other work reported no change or a positive change. Here, we employed the antisaccade task to evaluate whether transient exposure to a hypercapnic environment influences top-down executive function (EF). Antisaccades require a goal-directed eye movement mirror-symmetrical to a target and are an ideal tool for identifying subtle EF changes. Healthy young adults (aged 19–25 years) performed blocks of antisaccade trials prior to (i.e., pre-intervention), during (i.e., concurrent) and after (i.e., post-intervention) 10-min of breathing a fractional inspired CO2 (FiCO2) of 2.5% (i.e., hypercapnic condition) and during a normocapnic (i.e., control) condition. In both conditions, CBF, ventilatory and cardiorespiratory responses were measured. Results showed that the hypercapnic condition increased CBF, ventilation and end-tidal CO2 and thus demonstrated an expected physiological adaptation to increased FiCO2. Notably, however, null hypothesis and equivalence tests indicated that concurrent and post-intervention antisaccade reaction times were refractory to the hypercapnic environment; that is, transient exposure to a FiCO2 of 2.5% did not produce a real-time or lingering influence on an oculomotor-based measure of EF. 
Accordingly, results provide a framework that – in part – establishes the FiCO2 percentage and timeline by which high-level EF can be maintained. Future work will explore CBF and EF dynamics during chronic hypercapnic exposure as a more direct proxy for the challenges of space flight and exploration. |
Frederick Shic; Erin C. Barney; Adam J. Naples; Kelsey J. Dommer; Shou An Chang; Beibin Li; Takumi McAllister; Adham Atyabi; Quan Wang; Raphael Bernier; Geraldine Dawson; James Dziura; Susan Faja; Shafali Spurling Jeste; Michael Murias; Scott P. Johnson; Maura Sabatos-DeVito; Gerhard Helleman; Damla Senturk; Catherine A. Sugar; Sara Jane Webb; James C. McPartland; Katarzyna Chawarska In: Autism Research, vol. 16, pp. 2150–2159, 2023. @article{Shic2023, The Selective Social Attention (SSA) task is a brief eye-tracking task involving experimental conditions varying along socio-communicative axes. Traditionally the SSA has been used to probe socially-specific attentional patterns in infants and toddlers who develop autism spectrum disorder (ASD). This current work extends these findings to preschool and school-age children. Children 4- to 12-years-old with ASD (N = 23) and a typically-developing comparison group (TD; N = 25) completed the SSA task as well as standardized clinical assessments. Linear mixed models examined group and condition effects on two outcome variables: percent of time spent looking at the scene relative to scene presentation time (%Valid), and percent of time looking at the face relative to time spent looking at the scene (%Face). Age and IQ were included as covariates. Outcome variables' relationships to clinical data were assessed via correlation analysis. The ASD group, compared to the TD group, looked less at the scene and focused less on the actress' face during the most socially-engaging experimental conditions. Additionally, within the ASD group, %Face negatively correlated with SRS total T-scores with a particularly strong negative correlation with the Autistic Mannerism subscale T-score. 
These results highlight the extensibility of the SSA to older children with ASD, including replication of between-group differences previously seen in infants and toddlers, as well as its ability to capture meaningful clinical variation within the autism spectrum across a wide developmental span inclusive of preschool and school-aged children. These properties suggest that the SSA may have broad potential as a biomarker for ASD. |
Summer Sheremata; George L. Malcolm; Sarah Shomstein Behavioral asymmetries in visual short-term memory occur in retinotopic coordinates Journal Article In: Attention, Perception, and Psychophysics, vol. 85, pp. 113–119, 2023. @article{Sheremata2023, Visual short-term memory (VSTM) is an essential store that creates continuous representations from disjointed visual input. However, severe capacity limits exist, reflecting constraints in supporting brain networks. VSTM performance shows spatial biases predicted by asymmetries in the brain based upon the location of the remembered object. Visual representations are retinotopic, or relative to the location of the representation on the retina. It therefore stands to reason that memory performance may also show retinotopic biases. Here, eye position was manipulated to tease apart retinotopic coordinates from spatiotopic coordinates, or location relative to the external world. Memory performance was measured while participants performed a color change-detection task for items presented across the visual field while fixating a central or peripheral position. VSTM biases reflected the location of the stimulus on the retina, regardless of where the stimulus appeared on the screen. Therefore, spatial biases occur in retinotopic coordinates in VSTM and suggest a fundamental link between behavioral VSTM measures and visual representations. |
Yuanping Shen; Qin Wang; Hongli Liu; Jianye Luo; Qunyue Liu; Yuxiang Lan Landscape design intensity and its associated complexity of forest landscapes in relation to preference and eye movements Journal Article In: Forests, vol. 14, no. 4, pp. 1–16, 2023. @article{Shen2023b, Understanding how people perceive landscapes is essential for the design of forest landscapes. This study investigates how design intensity affects landscape complexity, preference, and eye movements in urban forest settings. Eight groups of twenty-four pictures, representing lawn, path, and waterscape settings in urban forests, were selected; each type of setting had two groups of pictures, and each group contained four pictures. The four pictures in each group were classified into slight, low, medium, and high design intensities. A total of 76 students were randomly assigned to observe one group of pictures within each type of landscape with an eye-tracking apparatus and give ratings of complexity and preference. The results indicate that design intensity was positively associated with subjective landscape complexity but was positively or negatively related to objective landscape complexity across the three types of settings. Subjective landscape complexity was found to significantly contribute to visual preference across landscape types, while objective landscape complexity did not contribute to preference. In addition, the marginal effect of medium design intensity on preference was greater than that of low and high design intensity in most cases. Moreover, although some eye movement metrics were significantly related to preference in lawn settings, none were reliable predictors of preference. The findings enrich visual preference research and can help landscape designers effectively arrange design intensity in urban forests during the design process. |
O. Shdeour; N. Tal-Perry; M. Glickman; S. Yuval-Greenberg Exposure to temporal randomness promotes subsequent adaptation to new temporal regularities Journal Article In: Cognition, vol. 244, pp. 1–11, 2023. @article{Shdeour2023, Noise is intuitively thought to interfere with perceptual learning; however, human and machine learning studies suggest that, in certain contexts, variability may reduce overfitting and improve generalizability. Whereas previous studies have examined the effects of variability in learned stimuli or tasks, the effects of variability in the temporal environment remain unknown. Here, we examined this question in two groups of adult participants (N=40) presented with visual targets at either random or fixed temporal routines and then tested on the same type of targets at a new, nearly-fixed temporal routine. Findings reveal that participants of the random group performed better and adapted more quickly following a change in the timing routine, relative to participants of the fixed group. Corroborated by eye tracking and computational modeling, these findings suggest that prior exposure to temporal randomness promotes the formation of new temporal expectations and enhances generalizability in a dynamic environment. We conclude that noise plays an important role in promoting perceptual learning in the temporal domain: rather than interfering with the formation of temporal expectations, noise enhances them. This counterintuitive effect is hypothesized to be achieved through eliminating overfitting and promoting generalizability. |
Mishaal Sharif; Yougan Saman; Rose Burling; Oliver Rea; Rakesh Patel; Douglas J. K. Barrett; Peter Rea; Amir Kheradmand; Qadeer Arshad Altered visual conscious awareness in patients with vestibular dysfunctions: a cross-sectional observation study Journal Article In: Journal of the Neurological Sciences, vol. 448, pp. 1–6, 2023. @article{Sharif2023, Background: Patients with vestibular dysfunctions often experience visual-induced symptoms. Here we asked whether such visual dependence can be related to alterations in visual conscious awareness in these patients. Methods: To measure visual conscious awareness, we used the effect of motion-induced blindness (MIB), in which the perceptual awareness of the visual stimulus alternates despite its unchanged physical characteristics. In this phenomenon, a salient visual target spontaneously disappears and subsequently reappears from visual perception when presented against a moving visual background. The number of perceptual switches during the experience of the MIB stimulus was measured for 120 s in 15 healthy controls, 15 patients with vestibular migraine (VM), 15 patients with benign paroxysmal positional vertigo (BPPV) and 15 with migraine without vestibular symptoms. Results: Patients with vestibular dysfunctions (i.e., both vestibular migraine and BPPV) exhibited increased perceptual fluctuations during MIB compared to healthy controls and migraine patients without vertigo. In VM patients, those with more severe symptoms exhibited higher fluctuations of visual awareness (i.e., positive correlation), whereas, in BPPV patients, those with more severe symptoms had lower fluctuations of visual awareness (i.e., negative correlation). 
Implications: Taken together, these findings show that fluctuations of visual awareness are linked to the severity of visual-induced symptoms in patients with vestibular dysfunctions, and distinct pathophysiological mechanisms may mediate visual vertigo in peripheral versus central vestibular dysfunctions. |
Anaïs Servais; Noémie Préa; Christophe Hurter; Emmanuel J. Barbeau In: Acta Psychologica, vol. 240, pp. 1–13, 2023. @article{Servais2023, It is common to look away while trying to remember specific information, for example during autobiographical memory retrieval, a behavior referred to as gaze aversion. Given the competition between internal and external attention, gaze aversion is assumed to play a role in visual decoupling, i.e., suppressing environmental distractors during internal tasks. This suggests a link between gaze aversion and the attentional switch from the outside world to a temporary internal mental space that takes place during the initial stage of memory retrieval, but this assumption has never been verified so far. We designed a protocol where 33 participants answered 48 autobiographical questions while their eye movements were recorded with an eye-tracker and a camcorder. Results indicated that gaze aversion occurred early (median 1.09 s) and predominantly during the access phase of memory retrieval—i.e., the moment when the attentional switch is assumed to take place. In addition, gaze aversion lasted a relatively long time (on average 6 s), and was notably decoupled from concurrent head movements. These results support a role of gaze aversion in perceptual decoupling. Gaze aversion was also related to higher retrieval effort and was rare during memories which came spontaneously to mind. This suggests that gaze aversion might be required only when cognitive effort is required to switch the attention toward the internal world to help retrieving hard-to-access memories. Compared to eye vergence, another visual decoupling strategy, the association with the attentional switch seemed specific to gaze aversion. Our results provide for the first time several arguments supporting the hypothesis that gaze aversion is related to the attentional switch from the outside world to memory. |
Eser Sendesen; Samet Kılıç; Nurhan Erbil; Özgür Aydın; Didem Turkyilmaz An exploratory study of the effect of tinnitus on listening effort using EEG and pupillometry Journal Article In: Otolaryngology-Head and Neck Surgery, vol. 169, no. 5, pp. 1259–1267, 2023. @article{Sendesen2023, Objective: Previous behavioral studies on listening effort in tinnitus patients did not consider extended high-frequency hearing thresholds and had conflicting results. This inconsistency may be related to the fact that listening effort was not evaluated via the central nervous system (CNS) and autonomic nervous system (ANS), which are directly related to tinnitus pathophysiology. This study matches hearing thresholds at all frequencies, including the extended high frequencies, to reduce the confound of hearing loss and objectively evaluate listening effort over the CNS and ANS simultaneously in tinnitus patients. Study Design: Case-control study. Setting: University hospital. Methods: Sixteen chronic tinnitus patients and 23 matched healthy controls having normal pure-tone averages with symmetrical hearing thresholds were included. Subjects were evaluated with 0.125 to 20 kHz pure-tone audiometry, the Montreal Cognitive Assessment Test (MoCA), Tinnitus Handicap Inventory (THI), Visual Analog Scale (VAS), electroencephalography (EEG), and pupillometry. Results: Pupil dilation and EEG alpha-band power during the “coding” phase of the presented sentences were lower in tinnitus patients than in the control group (p <.05). VAS score was higher in the tinnitus group (p <.01). Also, there was no statistically significant relationship between EEG and pupillometry components and THI or MoCA (p >.05). Conclusion: This study suggests that tinnitus patients may need to make an extra effort to listen. Also, pupillometry may not be sufficiently reliable to assess listening effort in ANS-related pathologies. 
Considering the possible listening difficulties in tinnitus patients, reducing these difficulties, especially in noisy environments, could be added to the goals of tinnitus therapy protocols. |
Werner Seitz; Artyom Zinchenko; Hermann J. Müller; Thomas Geyer Contextual cueing of visual search reflects the acquisition of an optimal, one-for-all oculomotor scanning strategy Journal Article In: Communications Psychology, vol. 1, no. 1, pp. 1–12, 2023. @article{Seitz2023, Visual search improves when a target is encountered repeatedly at a fixed location within a stable distractor arrangement (spatial context), compared to non-repeated contexts. The standard account attributes this contextual-cueing effect to the acquisition of display-specific long-term memories, which, when activated by the current display, cue attention to the target location. Here we present an alternative, procedural-optimization account, according to which contextual facilitation arises from the acquisition of generic oculomotor scanning strategies, optimized with respect to the entire set of displays, with frequently searched displays accruing greater weight in the optimization process. To decide between these alternatives, we examined measures of the similarity, across time-on-task, of the spatio-temporal sequences of fixations through repeated and non-repeated displays. We found scanpath similarity to increase generally with learning, but more for repeated versus non-repeated displays. This pattern contradicts display-specific guidance, but supports one-for-all scanpath optimization. |
Andreas Schroeer; Martin Rune Andersen; Mike Lind Rank; Ronny Hannemann; Eline Borch Petersen; Filip Marchman Rønne; Daniel J. Strauss; Farah I. Corona-Strauss Assessment of vestigial auriculomotor activity to acoustic stimuli using electrodes in and around the ear Journal Article In: Trends in Hearing, vol. 27, pp. 1–6, 2023. @article{Schroeer2023, Recently, it has been demonstrated that electromyographic (EMG) activity of auricular muscles in humans, especially the postauricular muscle (PAM), depends on the spatial location of auditory stimuli. This observation has only been shown using wet electrodes placed directly on auricular muscles. To move towards a more applied, out-of-the-laboratory setting, this study aims to investigate if similar results can be obtained using electrodes placed in custom-fitted earpieces. Furthermore, with the exception of the ground electrode, only dry-contact electrodes were used to record EMG signals, which require little to no skin preparation and can therefore be applied extremely fast. In two experiments, auditory stimuli were presented to ten participants from different spatial directions. In experiment 1, stimuli were rapid onset naturalistic stimuli presented in silence, and in experiment 2, the corresponding participant's first name, presented in a “cocktail party” environment. In both experiments, ipsilateral responses were significantly larger than contralateral responses. Furthermore, machine learning models objectively decoded the direction of stimuli significantly above chance level on a single trial basis (PAM: ~80%, in-ear: ~69%). There were no significant differences when participants repeated the experiments after several weeks. This study provides evidence that auricular muscle responses can be recorded reliably using an almost entirely dry-contact in-ear electrode system. 
The location of the PAM, and the fact that in-ear electrodes can record comparable signals, would make hearing aids interesting devices to record these auricular EMG signals and potentially utilize them as control signals in the future. |
Svea C. Y. Schroeder; David Aagten-Murphy; Niko A. Busch Contralateral delay activity, but not alpha lateralization, indexes prioritization of information for working memory storage Journal Article In: Attention, Perception, and Psychophysics, vol. 85, no. 3, pp. 718–733, 2023. @article{Schroeder2023, Working memory is inherently limited, which makes it important to select and maintain only task-relevant information and to protect it from distraction. Previous research has suggested the contralateral delay activity (CDA) and lateralized alpha oscillations as neural candidates for such a prioritization process. While most of this work focused on distraction during encoding, we examined the effect of external distraction presented during memory maintenance. Participants memorized the orientations of three lateralized objects. After an initial distraction-free maintenance interval, distractors appeared in the same location as the targets or in the opposite hemifield. This distraction was followed by another distraction-free interval. Our results show that CDA amplitudes were stronger in the interval before compared with the interval after the distraction (i.e., CDA amplitudes were stronger in response to targets compared with distractors). This amplitude reduction in response to distractors was more pronounced in participants with higher memory accuracy, indicating prioritization and maintenance of relevant over irrelevant information. In contrast, alpha lateralization did not change from the interval before distraction compared with the interval after distraction, and we found no correlation between alpha lateralization and memory accuracy. These results suggest that alpha lateralization plays no direct role in either selective maintenance of task-relevant information or inhibition of distractors. Instead, alpha lateralization reflects the current allocation of spatial attention to the most salient information regardless of task-relevance. 
In contrast, CDA indicates flexible allocation of working memory resources depending on task-relevance. |
Rebekka Schröder; Kristof Keidel; Peter Trautner; Alexander Radbruch; Ulrich Ettinger Neural mechanisms of background and velocity effects in smooth pursuit eye movements Journal Article In: Human Brain Mapping, vol. 44, no. 3, pp. 1–17, 2023. @article{Schroeder2023a, Smooth pursuit eye movements (SPEM) are essential to guide behaviour in complex visual environments. SPEM accuracy is known to be degraded by the presence of a structured visual background and at higher target velocities. The aim of this preregistered study was to investigate the neural mechanisms of these robust behavioural effects. N = 33 participants performed a SPEM task with two background conditions (present and absent) at two target velocities (0.4 and 0.6 Hz). Eye movement and BOLD data were collected simultaneously. Both the presence of a structured background and faster target velocity decreased pursuit gain and increased catch-up saccade rate. Faster targets additionally increased position error. Higher BOLD response with background was found in extensive clusters in visual, parietal, and frontal areas (including the medial frontal eye fields; FEF) partially overlapping with the known SPEM network. Faster targets were associated with higher BOLD response in visual cortex and left lateral FEF. Task-based functional connectivity analyses (psychophysiological interactions; PPI) largely replicated previous results in the basic SPEM network but did not yield additional information regarding the neural underpinnings of the background and velocity effects. The results show that the presentation of visual background stimuli during SPEM induces activity in a widespread visuo-parieto-frontal network including areas contributing to cognitive aspects of oculomotor control such as medial FEF, whereas the response to higher target velocity involves visual and motor areas such as lateral FEF. 
Therefore, we were able to propose for the first time different functions of the medial and lateral FEF during SPEM. |
Sebastian Schneegans; Jessica M. V. McMaster; Paul M. Bays Role of time in binding features in visual working memory Journal Article In: Psychological Review, vol. 130, no. 1, pp. 137–154, 2023. @article{Schneegans2023, Previous research on feature binding in visual working memory has supported a privileged role for location in binding an object's nonspatial features. However, humans are able to correctly recall feature conjunctions of objects that occupy the same location at different times. In a series of behavioral experiments, we investigated binding errors under these conditions, and specifically tested whether ordinal position can take the role of location in mediating feature binding. We performed two dual report experiments in which participants had to memorize three colored shapes presented sequentially at the screen center. When participants were cued with the ordinal position of one item and had to report its shape and color, report errors for the two features were largely uncorrelated. In contrast, when participants were cued, for example, with an item's shape and reported an incorrect ordinal position, they had a high chance of making a corresponding error in the color report. This pattern of error correlations closely matched the predictions of a model in which color and shape are bound to each other only indirectly via an item's ordinal position. In a third experiment, we directly compared the roles of location and sequential position in feature binding. Participants viewed a sequence of colored disks displayed at different locations and were cued either by a disk's location or its ordinal position to report its remaining properties. The pattern of errors supported a mixed strategy with individual variation, suggesting that binding via either time or space could be used for this task. |
Tim Schilling; Mojtaba Soltanlou; Hans-Christoph Nuerk; Hamed Bahmani Blue-light stimulation of the blind-spot constricts the pupil and enhances contrast sensitivity Journal Article In: PLoS ONE, vol. 18, pp. 1–11, 2023. @article{Schilling2023, Short- and long-wavelength light can alter pupillary responses differently, allowing inferences to be made about the contribution of different photoreceptors to pupillary constriction. In addition to classical retinal photoreceptors, the pupillary light response is formed by the activity of melanopsin-expressing intrinsically photosensitive retinal ganglion cells (ipRGC). It has been shown in rodents that melanopsin is expressed in the axons of ipRGCs that bundle at the optic nerve head, which forms the perceptual blind-spot. Hence, the first aim of this study was to investigate if blind-spot stimulation induces a pupillary response. The second aim was to investigate the effect of blind-spot stimulation using contrast sensitivity tests. Fifteen individuals participated in the pupil response experiment and thirty-two individuals in the contrast sensitivity experiment. The pupillary change was quantified using the post-illumination pupil response (PIPR) amplitudes after blue-light (experimental condition) and red-light (control condition) pulses in the time window between 2 s and 6 s post-illumination. Contrast sensitivity was assessed using two different tests: the Freiburg Visual Acuity and Contrast Test and the Tuebingen Contrast Sensitivity Test. Contrast sensitivity was measured before and 20 minutes after binocular blue-light stimulation of the blind-spot at spatial frequencies higher than or equal to 3 cycles per degree (cpd) and at spatial frequencies lower than 3 cpd (control condition). Blue-light blind-spot stimulation induced a significantly larger PIPR compared to red-light, confirming a melanopsin-mediated pupil response in the blind-spot. 
Furthermore, contrast sensitivity was increased after blind-spot stimulation, confirmed by both contrast sensitivity tests. Only spatial frequencies of at least 3 cpd were enhanced. This study demonstrates that stimulating the blind-spot with blue-light constricts the pupil and increases the contrast sensitivity at higher spatial frequencies. |
Michael Paul Schallmo; Kimberly B. Weldon; Rohit S. Kamath; Hannah R. Moser; Samantha A. Montoya; Kyle W. Killebrew; Caroline Demro; Andrea N. Grant; Małgorzata Marjańska; Scott R. Sponheim; Cheryl A. Olman The psychosis human connectome project: Design and rationale for studies of visual neurophysiology Journal Article In: NeuroImage, vol. 272, pp. 1–20, 2023. @article{Schallmo2023, Visual perception is abnormal in psychotic disorders such as schizophrenia. In addition to hallucinations, laboratory tests show differences in fundamental visual processes including contrast sensitivity, center-surround interactions, and perceptual organization. A number of hypotheses have been proposed to explain visual dysfunction in psychotic disorders, including an imbalance between excitation and inhibition. However, the precise neural basis of abnormal visual perception in people with psychotic psychopathology (PwPP) remains unknown. Here, we describe the behavioral and 7 tesla MRI methods we used to interrogate visual neurophysiology in PwPP as part of the Psychosis Human Connectome Project (HCP). In addition to PwPP (n = 66) and healthy controls (n = 43), we also recruited first-degree biological relatives (n = 44) in order to examine the role of genetic liability for psychosis in visual perception. Our visual tasks were designed to assess fundamental visual processes in PwPP, whereas MR spectroscopy enabled us to examine neurochemistry, including excitatory and inhibitory markers. We show that it is feasible to collect high-quality data across multiple psychophysical, functional MRI, and MR spectroscopy experiments with a sizable number of participants at a single research site. These data, in addition to those from our previously described 3 tesla experiments, will be made publicly available in order to facilitate further investigations by other research groups. 
By combining visual neuroscience techniques and HCP brain imaging methods, our experiments offer new opportunities to investigate the neural basis of abnormal visual perception in PwPP. |
Jonathan Schaffner; Sherry Dongqi Bao; Philippe N. Tobler; Todd A. Hare; Rafael Polania Sensory perception relies on fitness-maximizing codes Journal Article In: Nature Human Behaviour, vol. 7, no. 7, pp. 1135–1151, 2023. @article{Schaffner2023, Sensory information encoded by humans and other organisms is generally presumed to be as accurate as their biological limitations allow. However, perhaps counterintuitively, accurate sensory representations may not necessarily maximize the organism's chances of survival. To test this hypothesis, we developed a unified normative framework for fitness-maximizing encoding by combining theoretical insights from neuroscience, computer science, and economics. Behavioural experiments in humans revealed that sensory encoding strategies are flexibly adapted to promote fitness maximization, a result confirmed by deep neural networks with information capacity constraints trained to solve the same task as humans. Moreover, human functional MRI data revealed that novel behavioural goals that rely on object perception induce efficient stimulus representations in early sensory structures. These results suggest that fitness-maximizing rules imposed by the environment are applied at early stages of sensory processing in humans and machines. |
Eugenio Scaliti; Kiri Pullar; Giulia Borghini; Andrea Cavallo; Stefano Panzeri; Cristina Becchio Kinematic priming of action predictions Journal Article In: Current Biology, vol. 33, no. 13, pp. 2717–2727, 2023. @article{Scaliti2023, The ability to anticipate what others will do next is crucial for navigating social, interactive environments. Here, we develop an experimental and analytical framework to measure the implicit readout of prospective intention information from movement kinematics. Using a primed action categorization task, we first demonstrate implicit access to intention information by establishing a novel form of priming, which we term kinematic priming: subtle differences in movement kinematics prime action prediction. Next, using data collected from the same participants in a forced-choice intention discrimination task 1 h later, we quantify single-trial intention readout—the amount of intention information read by individual perceivers in individual kinematic primes—and assess whether it can be used to predict the amount of kinematic priming. We demonstrate that the amount of kinematic priming, as indexed by both response times (RTs) and initial fixations to a given probe, is directly proportional to the amount of intention information read by the individual perceiver at the single-trial level. These results demonstrate that human perceivers have rapid, implicit access to intention information encoded in movement kinematics and highlight the potential of our approach to reveal the computations that permit the readout of this information with single-subject, single-trial resolution. |
Ceyda Sayalı; Emma Heling; Roshan Cools Learning progress mediates the link between cognitive effort and task engagement Journal Article In: Cognition, vol. 236, pp. 1–11, 2023. @article{Sayali2023, While a substantial body of work has shown that cognitive effort is aversive and costly, a separate line of research on intrinsic motivation suggests that people spontaneously seek challenging tasks. According to one prominent account of intrinsic motivation, the learning progress motivation hypothesis, the preference for difficult tasks reflects the dynamic range that these tasks yield for changes in task performance (Kaplan & Oudeyer, 2007). Here we test this hypothesis by asking whether greater engagement with intermediately difficult tasks, indexed by subjective ratings and objective pupil measurements, is a function of trial-wise changes in performance. In a novel paradigm, we determined each individual's capacity for task performance and used difficulty levels that are low, intermediately challenging or high for that individual. We demonstrated that challenging tasks resulted in greater liking and engagement scores compared with easy tasks. Pupil size tracked objective task difficulty, where challenging tasks were associated with greater pupil responses than easy tasks. Most importantly, pupil responses were predicted by trial-to-trial changes in average accuracy as well as learning progress (derivative of average accuracy), while greater pupil responses also predicted greater subjective engagement scores. Together, these results substantiate the learning progress motivation hypothesis stating that the link between task engagement and cognitive effort is mediated by the dynamic range for changes in task performance. |
Pankhuri Saxena; Stefan Treue Effect of attention on human direction-discrimination thresholds at iso-eccentric locations in the visual field: A registered report protocol Journal Article In: PLoS ONE, vol. 18, pp. 1–9, 2023. @article{Saxena2023, Human visual perceptual performance is strongly dependent on a given stimulus' distance from the line of sight, i.e. its eccentricity. In addition, multiple studies have shown a dependence on a stimulus' angular position relative to the fovea. In humans, the resulting spatial profile of perceptual performance (the “performance field”) typically shows better performance near the lower vertical meridian, compared to the upper vertical meridian, and better performance near the horizontal meridian compared to the vertical meridian. Predominantly, these variations have been interpreted as sensory inhomogeneities. But it has also been shown that they are modulated by the allocation of spatial attention, either homogeneously elevating performance or compensating for the sensory inhomogeneities. Here, we propose a study protocol for pre-registration to investigate such interactions between sensory and attentional effects. First, we will determine performance fields for time-dependent, dynamic stimuli, namely the direction discrimination of moving random dot patterns. Then, we will establish whether directing focal attention to a particular stimulus location differentially improves thresholds compared to a distributed attention condition. |
Tetsuya Sato; Samia Islam; Jeremiah D. Still; Mark W. Scerbo; Yusuke Yamani Task priority reduces an adverse effect of task load on automation trust in a dynamic multitasking environment Journal Article In: Cognition, Technology and Work, vol. 25, no. 1, pp. 1–13, 2023. @article{Sato2023, The present study examined how task priority influences operators' scanning patterns and trust ratings toward imperfect automation. Previous research demonstrated that participants display lower trust and fixate less frequently toward a visual display for the secondary task assisted with imperfect automation when the primary task demanded more attention. One account for this phenomenon is that the increased primary task demand induced the participants to prioritize the primary task over the secondary task. The present study asked participants to perform a tracking task, system monitoring task, and resource management task simultaneously using the Multi-Attribute Task Battery (MATB) II. Automation assisted the system monitoring task with 70% reliability. Task load was manipulated via the difficulty of the tracking task. Participants were explicitly instructed to either prioritize the tracking task over all other tasks (tracking priority condition) or reduce tracking performance (equal priority condition). The results demonstrate the effects of task load on attention distribution, task performance and trust ratings. Furthermore, participants under the equal priority condition reported lower performance-based trust when the tracking task required more frequent manual input (tracking condition), while no effect of task load was observed under the tracking priority condition. Task priority can modulate automation trust by eliminating the adverse effect of task load in a dynamic multitasking environment. |
Florian Sandhaeger; Nina Omejc; Anna Antonia Pape; Markus Siegel Abstract perceptual choice signals during action-linked decisions in the human brain Journal Article In: PLoS Biology, vol. 21, no. 10, pp. 1–27, 2023. @article{Sandhaeger2023, Humans can make abstract choices independent of motor actions. However, in laboratory tasks, choices are typically reported with an associated action. Consequently, knowledge about the neural representation of abstract choices is sparse, and choices are often thought to evolve as motor intentions. Here, we show that in the human brain, perceptual choices are represented in an abstract, motor-independent manner, even when they are directly linked to an action. We measured MEG signals while participants made choices with known or unknown motor response mapping. Using multivariate decoding, we quantified stimulus, perceptual choice, and motor response information with distinct cortical distributions. Choice representations were invariant to whether the response mapping was known during stimulus presentation, and they occupied a distinct representational space from motor signals. As expected from an internal decision variable, they were informed by the stimuli, and their strength predicted decision confidence and accuracy. Our results demonstrate abstract neural choice signals that generalize to action-linked decisions, suggesting a general role of an abstract choice stage in human decision-making. |
Kathryn Nicole Sam; K. Jayasankara Reddy The effect of music and editing style on subjective perception of time when watching videos Journal Article In: Projections, vol. 17, no. 2, pp. 41–61, 2023. @article{Sam2023, Arousal, editing style, and eye movements have been implicated in time perception when watching videos. However, little multimodal research has explored how manipulating both the auditory and visual properties of videos affects temporal processing. This study investigated how editing density and music-induced arousal affect viewers' time perception. Thirty-nine participants watched six videos varying in editing density and music while their eye movements were recorded. They estimated the videos' duration and reported their subjective experience of time passage and emotional involvement. Fast-paced editing was associated with the feeling of time passing faster, a relationship mediated by fixation durations. High-arousal background music was also associated with the feeling of time passing faster. The consequences of this study in terms of a possible auditory driving effect are explored. |
Atena Sajedin; Sina Salehi; Hossein Esteky Information content and temporal structure of face selective local field potentials frequency bands in IT cortex Journal Article In: Cerebral Cortex, pp. 1–12, 2023. @article{Sajedin2023, Sensory stimulation triggers synchronized bioelectrical activity in the brain across various frequencies. This study delves into network-level activities, specifically focusing on local field potentials as a neural signature of visual category representation. Specifically, we studied the role of different local field potential frequency oscillation bands in visual stimulus category representation by presenting images of faces and objects to three monkeys while recording local field potential from inferior temporal cortex. We found category selective local field potential responses mainly for animate, but not inanimate, objects. Notably, face-selective local field potential responses were evident across all tested frequency bands, manifesting in both enhanced (above mean baseline activity) and suppressed (below mean baseline activity) local field potential powers. We observed four different local field potential response profiles based on frequency bands and face selective excitatory and suppressive responses. Low-frequency local field potential bands (1–30 Hz) were more predominantly suppressed by face stimulation than the high-frequency (30–170 Hz) local field potential bands. Furthermore, the low-frequency local field potentials conveyed less face category information than the high-frequency local field potentials in both the enhanced and suppressed conditions. Furthermore, we observed a negative correlation between face/object d-prime values in all the tested local field potential frequency bands and the anterior–posterior position of the recording sites. 
In addition, the power of the low-frequency local field potential systematically declined across inferior temporal anterior–posterior positions, whereas the high-frequency local field potential did not exhibit such a pattern. In general, for most of the above-mentioned findings, somewhat similar results were observed for the body category, but not for other stimulus categories. The observed findings suggest that a balance of face-selective excitation and inhibition across time and cortical space shapes face category selectivity in inferior temporal cortex. |
Muhammet Ikbal Sahan; Roma Siugzdaite; Sebastiaan Mathôt; Wim Fias Attention-based rehearsal: Eye movements reveal how visuospatial information is maintained in working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–12, 2023. @article{Sahan2023, The human eye scans visual information through scan paths, series of fixations. Analogous to these scan paths during the process of actual “seeing,” we investigated whether similar scan paths are also observed while subjects are “rehearsing” stimuli in visuospatial working memory. Participants performed a continuous recall task in which they rehearsed the precise location and color of three serially presented discs during a retention interval, and later reproduced either the precise location or the color of a single probed item. In two experiments, we varied the direction along which the items were presented and investigated whether scan paths during rehearsal followed the pattern of stimulus presentation during encoding (left-to-right in Experiment 1; left-to-right/right-to-left in Experiment 2). In both experiments, we confirmed that the eyes follow similar scan paths during encoding and rehearsal. Specifically, we observed that during rehearsal participants refixated the memorized locations they saw during encoding. Most interestingly, the precision with which these locations were refixated was associated with smaller recall errors. Assuming that eye position reflects the focus of attention, our findings suggest a functional contribution of spatial attention shifts to working memory and are in line with the hypothesis that maintenance of information in visuospatial working memory is supported by attention-based rehearsal. |
Elizabeth M. Sachse; Adam C. Snyder Dynamic attention signalling in V4: Relation to fast-spiking/non-fast-spiking cell class and population coupling Journal Article In: European Journal of Neuroscience, vol. 57, no. 6, pp. 918–939, 2023. @article{Sachse2023, The computational role of a neuron during attention depends on its firing properties, neurotransmitter expression and functional connectivity. Neurons in the visual cortical area V4 are reliably engaged by selective attention but exhibit diversity in the effect of attention on firing rates and correlated variability. It remains unclear what specific neuronal properties shape these attention effects. In this study, we quantitatively characterised the distribution of attention modulation of firing rates across populations of V4 neurons. Neurons exhibited a continuum of time-varying attention effects. At one end of the continuum, neurons' spontaneous firing rates were slightly depressed with attention (compared to when unattended), whereas their stimulus responses were enhanced with attention. The other end of the continuum showed the converse pattern: attention depressed stimulus responses but increased spontaneous activity. We tested whether the particular pattern of time-varying attention effects that a neuron exhibited was related to the shape of their action potentials (so-called ‘fast-spiking' [FS] neurons have been linked to inhibition) and the strength of their coupling to the overall population. We found an interdependence among neural attention effects, neuron type and population coupling. In particular, we found neurons for which attention enhanced spontaneous activity but suppressed stimulus responses were less likely to be fast-spiking (more likely to be non-fast-spiking) and tended to have stronger population coupling, compared to neurons with other types of attention effects. These results add important information to our understanding of visual attention circuits at the cellular level. |
Satu Saalasti; Jussi Alho; Juha M. Lahnakoski; Mareike Bacha-Trams; Enrico Glerean; Iiro P. Jääskeläinen; Uri Hasson; Mikko Sams Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading Journal Article In: Brain and Behavior, vol. 13, no. 2, pp. 1–17, 2023. @article{Saalasti2023, Introduction: Few of us are skilled lipreaders while most struggle with the task. Neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. Methods: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6–100% |
Juliette Ryan-Lortie; Gabriel Pelletier; Matthew Pilgrim; Lesley K. Fellows Gaze differences in configural and elemental evaluation during multi-attribute decision-making Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–10, 2023. @article{RyanLortie2023, Introduction: While many everyday choices are between multi-attribute options, how attribute values are integrated to allow such choices remains unclear. Recent findings suggest a distinction between elemental (attribute-by-attribute) and configural (holistic) evaluation of multi-attribute options, with different neural substrates. Here, we asked if there are behavioral or gaze pattern differences between these putatively distinct modes of multi-attribute decision-making. Methods: Thirty-nine healthy men and women learned the monetary values of novel multi-attribute pseudo-objects (fribbles) and then made choices between pairs of these objects while eye movements were tracked. Value was associated with individual attributes in the elemental condition, and with unique combinations of attributes in the configural condition. Choice, reaction time, gaze fixation time on options and individual attributes, and within- and between-option gaze transitions were recorded. Results: There were systematic behavioral differences between elemental and configural conditions. Elemental trials had longer reaction times and more between-option transitions, while configural trials had more within-option transitions. The effect of last fixation on choice was more pronounced in the configural condition. Discussion: We observed differences in gaze patterns and the influence of last fixation location on choice in multi-attribute value-based choices depending on how value is associated with those attributes. This adds support for the claim that multi-attribute option values may emerge either elementally or holistically, reminiscent of similar distinctions in multi-attribute object recognition. 
This may be important to consider in neuroeconomics research that involves visually presented complex objects. |
Brian E. Russ; Kenji W. Koyano; Julian Day-Cooney; Neda Perwez; David A. Leopold Temporal continuity shapes visual responses of macaque face patch neurons Journal Article In: Neuron, vol. 111, no. 6, pp. 903–914, 2023. @article{Russ2023, Macaque inferior temporal cortex neurons respond selectively to complex visual images, with recent work showing that they are also entrained reliably by the evolving content of natural movies. To what extent does visual continuity itself shape the responses of high-level visual neurons? We addressed this question by measuring how cells in face-selective regions of the macaque temporal cortex were affected by the manipulation of a movie's temporal structure. Sampling the movie at 1 s intervals, we measured neural responses to randomized, brief stimuli of different lengths, ranging from 800 ms dynamic movie snippets to 100 ms static frames. We found that the disruption of temporal continuity strongly altered neural response profiles, particularly in the early onset response period of the randomized stimulus. The results suggest that models of visual system function based on discrete and randomized visual presentations may not translate well to the brain's natural modes of operation. |
Annie Roy-Charland; Victoria Foglia; Karolyn Cloutier; Emalie Hendel; Marie Pier Mazerolle The effect of instructions and response format on smile judgement Journal Article In: Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, vol. 77, no. 4, pp. 308–318, 2023. @article{RoyCharland2023, Our study examined the role of instructions, response type, and definition on the judgement of enjoyment and nonenjoyment smiles. Participants viewed symmetric Duchenne, non-Duchenne, and asymmetric smiles. They were instructed to judge the happiness, authenticity, and sincerity of the smiles using either Likert scales or a dichotomous response type. Participants were also either given a definition of the instruction words "happy," "authentic," and "sincere" or not. Results showed that the probability of saying "really (happy/sincere/authentic)" was higher for the symmetric Duchenne than the asymmetric smiles and higher for the asymmetric than non-Duchenne smiles. Changing the instructions given to participants did not override the effect of smile type with either the Likert scale or the dichotomous response format. However, with the Likert scale, we observed subtleties that were not observed with the dichotomous response format. When given a definition, in the case of symmetric non-Duchenne smiles, Likert ratings were significantly lower, and participants were more accurate in their judgement on the dichotomous scale. However, no differences were observed for the asymmetric Duchenne and symmetric Duchenne smiles whether a definition was given or not. Symmetric non-Duchenne and asymmetric Duchenne smiles were also viewed longer when a definition was given than when one was not. Nevertheless, because the methodological variations of our study failed to explain the variations in the pattern of results of previous studies, other avenues should be explored, such as the use of dynamic stimuli and a greater variety of encoders. |
Cristina Rovira-Gay; Clara Mestre; Marc Argiles; Valldeflors Vinuela-Navarro; Jaume Pujol Feasibility of measuring fusional vergence amplitudes objectively Journal Article In: PLoS ONE, vol. 18, pp. 1–14, 2023. @article{RoviraGay2023, Two tests to measure fusional vergence amplitudes objectively were developed and validated against the two conventional clinical tests. Forty-nine adults participated in the study. Participants' negative (BI, base in) and positive (BO, base out) fusional vergence amplitudes at near were measured objectively in a haploscopic set-up by recording eye movements with an EyeLink 1000 Plus (SR Research). Stimulus disparity changed in steps or smoothly, mimicking a prism bar and a Risley prism, respectively. Break and recovery points were determined offline using a custom Matlab algorithm for the analysis of eye movements. Fusional vergence amplitudes were also measured with two clinical tests using a Risley prism and a prism bar. A better agreement between tests was found for the measurement of BI than for BO fusional vergence amplitudes. The means ± SD of the differences between the BI break and recovery points measured with the two objective tests were -1.74 ± 3.35 PD and -1.97 ± 2.60 PD, respectively, which were comparable to those obtained for the subjective tests. For the BO break and recovery points, although the means of the differences between the two objective tests were small, high variability between subjects was found (0.31 ± 6.44 PD and -2.84 ± 7.01 PD, respectively). This study showed the feasibility of measuring fusional vergence amplitudes objectively and of overcoming limitations of the conventional subjective tests. However, these tests cannot be used interchangeably due to their poor agreement. |
Alireza Rouzitalab; Chadwick B. Boulay; Jeongwon Park; Julio C. Martinez-Trujillo; Adam J. Sachs Ensembles code for associative learning in the primate lateral prefrontal cortex Journal Article In: Cell Reports, vol. 42, no. 5, pp. 1–16, 2023. @article{Rouzitalab2023, The lateral prefrontal cortex (LPFC) of primates is thought to play a role in associative learning. However, it remains unclear how LPFC neuronal ensembles dynamically encode and store memories for arbitrary stimulus-response associations. We recorded the activity of neurons in LPFC of two macaques during an associative learning task using multielectrode arrays. During task trials, the color of a symbolic cue indicated the location of one of two possible targets for a saccade. During a trial block, multiple randomly chosen associations were learned by the subjects. A state-space analysis indicated that LPFC neuronal ensembles rapidly learn new stimulus-response associations mirroring the animals' learning. Multiple associations acquired during training are stored in a neuronal subspace and can be retrieved hours after learning. Finally, knowledge of old associations facilitates learning new, similar associations. These results indicate that neuronal ensembles in the primate LPFC provide a flexible and dynamic substrate for associative learning. |
Tevin C. Rouse; Amy M. Ni; Chengcheng Huang; Marlene R. Cohen Topological insights into the neural basis of flexible behavior Journal Article In: Proceedings of the National Academy of Sciences of the United States of America, vol. 120, no. 24, pp. 1–11, 2023. @article{Rouse2023, It is widely accepted that there is an inextricable link between neural computations, biological mechanisms, and behavior, but it is challenging to simultaneously relate all three. Here, we show that topological data analysis (TDA) provides an important bridge between these approaches to studying how brains mediate behavior. We demonstrate that cognitive processes change the topological description of the shared activity of populations of visual neurons. These topological changes constrain and distinguish between competing mechanistic models, are connected to subjects' performance on a visual change detection task, and, via a link with network control theory, reveal a tradeoff between improving sensitivity to subtle visual stimulus changes and increasing the chance that the subject will stray off task. These connections provide a blueprint for using TDA to uncover the biological and computational mechanisms by which cognition affects behavior in health and disease. |
Mylène Ross-Plourde; Mylène Lachance-Grzela; Andréanne Charbonneau; Mylène Dumont; Annie Roy-Charland Parental stereotypes and cognitive processes: Evidence for a double standard in parenting roles when reading texts Journal Article In: Journal of Gender Studies, vol. 32, no. 1, pp. 74–82, 2023. @article{RossPlourde2023, While the characteristics associated with fathers have recently taken on more maternal traits, a similar shift has not been observed for maternal characteristics. The role of mother remains stereotyped, and those who do not adhere to this often face criticism. This study examines the impact of parental stereotypes on the cognitive processes associated with reading. A sample of 32 individuals read 24 experimental passages introducing a parent (mother or father) in a traditional or non-traditional role, and in a neutral or disambiguating context. Results show a significant interaction between the type of role and gender of the parent on reading times. Simple main effect tests revealed that for traditional roles, fixation durations were longer when the protagonist was a father than when the protagonist was a mother. There was no effect of role type for fathers, yet for mothers, fixation durations were longer when they were depicted in non-traditional roles than when they were depicted in traditional roles. This disruption of information processing of schema incongruent content suggests that mothers' parenting stereotypes remain anchored in society and are more rigid than those of fathers, supporting the idea of a double standard in parenting roles. |
F. Rojas-Thomas; C. Artigas; G. Wainstein; Juan Pablo Morales; M. Arriagada; D. Soto; A. Dagnino-Subiabre; J. Silva; V. Lopez Impact of acute psychosocial stress on attentional control in humans. A study of evoked potentials and pupillary response Journal Article In: Neurobiology of Stress, vol. 25, pp. 1–16, 2023. @article{RojasThomas2023, Psychosocial stress has increased considerably in our modern lifestyle, affecting global mental health. Deficits in attentional control are cardinal features of stress disorders and pathological anxiety. Studies suggest that changes in the locus coeruleus-norepinephrine system could underlie the effects of stress on top-down attentional control. However, the impact of psychosocial stress on attentional processes and its underlying neural mechanisms are poorly understood. This study aims to investigate the effect of psychosocial stress on attentional processing and brain signatures. Evoked potentials and pupillary activity related to the oddball auditory paradigm were recorded before and after applying the Montreal Imaging Stress Task (MIST). Electrocardiogram (ECG), salivary cortisol, and subjective anxiety/stress levels were measured at different experimental periods. The control group experienced the same physical and cognitive effort but without the psychosocial stress component. The results showed that stressed subjects exhibited decreased P3a and P3b amplitude, pupil phasic response, and correct responses. On the other hand, they displayed an increase in Mismatch Negativity (MMN). N1 amplitude after MIST only decreased in the control group. We found that differences in P3b amplitude between the first and second oddball were significantly correlated with pupillary dilation and salivary cortisol levels. Our results suggest that under social-evaluative threat, basal activity of the locus coeruleus-norepinephrine system increases, enhancing alertness and decreasing voluntary attentional resources for the cognitive task. 
These findings contribute to understanding the neurobiological basis of attentional changes in pathologies associated with chronic psychosocial stress. |
Claudia Rodriguez-Sobstel; Shannon Wake; Helen Dodd; Eugene McSorley; Carien M. Reekum; Jayne Morriss Shifty eyes: The impact of intolerance of uncertainty on gaze behaviour during threat conditioning Journal Article In: Collabra: Psychology, vol. 9, no. 1, pp. 1–15, 2023. @article{RodriguezSobstel2023, Previous research has demonstrated that individuals with high levels of Intolerance of Uncertainty (IU) have difficulty updating threat associations to safety associations. Notably, prior research has focused on measuring IU-related differences in threat and safety learning using arousal-based measures such as skin conductance response. Here we assessed whether IU-related differences in threat and safety learning could be captured using eye-tracking metrics linked with gaze behaviours such as dwelling and scanning. Participants (N = 144) completed self-report questionnaires assessing levels of IU and trait anxiety. Eye movements were then recorded during each conditioning phase: acquisition, extinction learning, and extinction retention. Fixation count and fixation duration served as indices of conditioned responding. Patterns of threat and safety learning typically reported for physiology and self-report were observed for the fixation count and fixation duration metrics during acquisition and to some extent in extinction learning, but not for extinction retention. There was little evidence for specific associations between IU and disrupted safety learning (e.g., greater differential responses to the threat vs. safe cues during extinction learning and retention). While there was tentative evidence that IU was associated with shorter fixation durations (e.g., scanning) to threat vs. safe cues during extinction retention, this effect did not remain after controlling for trait anxiety. 
IU and trait anxiety similarly predicted greater fixation count and shorter fixation durations overall during extinction learning, and greater fixation count overall during extinction retention. IU further predicted shorter fixation durations overall during extinction retention. However, the only IU-based effect that remained significant after controlling for trait anxiety was that of fixation duration overall during threat extinction learning. Our results inform models of anxiety, particularly in relation to how individual differences modulate gaze behaviour during threat conditioning. |
Nadira Yusif Rodriguez; Theresa H. McKim; Debaleena Basu; Aarit Ahuja; Theresa M. Desrochers Monkey dorsolateral prefrontal cortex represents abstract visual sequences during a no-report task Journal Article In: Journal of Neuroscience, vol. 43, no. 15, pp. 2741–2755, 2023. @article{Rodriguez2023, Monitoring sequential information is an essential component of our daily lives. Many of these sequences are abstract, in that they do not depend on the individual stimuli, but do depend on an ordered set of rules (e.g., chop then stir when cooking). Despite the ubiquity and utility of abstract sequential monitoring, little is known about its neural mechanisms. Human rostrolateral prefrontal cortex (RLPFC) exhibits specific increases in neural activity (i.e., “ramping”) during abstract sequences. Monkey dorsolateral prefrontal cortex (DLPFC) has been shown to represent sequential information in motor (not abstract) sequence tasks, and contains a subregion, area 46, with homologous functional connectivity to human RLPFC. To test the prediction that area 46 may represent abstract sequence information, and do so with parallel dynamics to those found in humans, we conducted functional magnetic resonance imaging (fMRI) in three male monkeys. When monkeys performed no-report abstract sequence viewing, we found that left and right area 46 responded to abstract sequential changes. Interestingly, responses to rule and number changes overlapped in right area 46, and left area 46 exhibited responses to abstract sequence rules with changes in ramping activation, similar to that observed in humans. Together, these results indicate that monkey DLPFC monitors abstract visual sequential information, potentially with a preference for different dynamics in the two hemispheres. More generally, these results show that abstract sequences are represented in functionally homologous regions across monkeys and humans. |
Helen Rodger; Nayla Sokhn; Junpeng Lao; Yingdi Liu; Roberto Caldara Developmental eye movement strategies for decoding facial expressions of emotion Journal Article In: Journal of Experimental Child Psychology, vol. 229, pp. 1–23, 2023. @article{Rodger2023, In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six “basic emotions” to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. 
Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations. |
Scott Roberts; Peter R. Kufahl; Rebecca J. Ryznar; Taylor Norris; Sagar Patel; K. Dean Gubler; Dean Paz; Greg Schwimer; Richard Besserman; Anthony J. LaPorta Start-of-day oculomotor screening demonstrates the effects of fatigue and rest during a total immersion training program Journal Article In: Surgery, vol. 174, no. 5, pp. 1193–1200, 2023. @article{Roberts2023, Background: Investigating changes in sleep and fatigue metrics during intensive surgical and trauma skills training, this study explored the dynamic association between oculomotor metrics and fatigue. Specifically, alterations in these relations over extended stress exposure, the influence of time of day, and the impact of fatigue exposure on sleep metrics were examined. Methods: Thirty-nine military medical students participated in 6 days of immersive, hyper-realistic, and high-stress experiential casualty training. Participants completed surveys assessing the state of sleepiness with oculomotor tests performed each morning and evening, analyzing eye movement and pupillary change to characterize fatigue. Participants wore Fitbit™ devices to measure overall time asleep and time in each sleep stage during the training. Results: Fitbit data showed increased average minutes in rapid eye movement and deep sleep, and less time in light sleep, from day 1 to day 4. The microsaccade peak velocity-to-displacement ratio exhibited a morning decrease but not in afternoon sessions, indicating repeated but temporary effects of accumulated fatigue. There were no findings regarding pupil reactivity to illumination changes. Conclusion: This study describes characteristics of fatigue measured by rapid and individually calibrated oculomotor tests. It demonstrates oculomotor relationships to fatigue in start-of-day testing, providing a direction for timing for optimal fatigue testing. These data suggest that improved sleep could signal resilience to fatigue during afternoon testing. 
Further investigation with more participants and longer duration is warranted. A deeper understanding of the interrelationships between training, sleep, and fatigue could improve surgical and military fitness. |
Elizabeth Riley; Hamid Turker; Dongliang Wang; Khena M. Swallow; Adam K. Anderson; Eve De Rosa Nonlinear changes in pupillary attentional orienting responses across the lifespan Journal Article In: GeroScience, pp. 1–17, 2023. @article{Riley2023, The cognitive aging process is not necessarily linear. Central task-evoked pupillary responses, representing a brainstem-pupil relationship, may vary across the lifespan. Thus we examined, in 75 adults ranging in age from 19 to 86, whether task-evoked pupillary responses to an attention task may serve as an index of cognitive aging. This is because the locus coeruleus (LC), located in the brainstem, is not only among the earliest sites of degeneration in pathological aging, but also supports both attentional and pupillary behaviors. We assessed brief, task-evoked phasic attentional orienting to behaviorally relevant and irrelevant auditory tones, stimuli known specifically to recruit the LC in the brainstem and evoke pupillary responses. Due to potential nonlinear changes across the lifespan, we used a novel data-driven analysis on 6 dynamic pupillary behaviors on 10% of the data to reveal cutoff points that best characterized the three age bands: young (19–41 years old), middle aged (42–68 years old), and older adults (69+ years old). Follow-up analyses on independent data, the remaining 90%, revealed age-related changes such as monotonic decreases in tonic pupillary diameter and dynamic range, along with curvilinear phasic pupillary responses to the behaviorally relevant target events, increasing in the middle-aged group and then decreasing in the older group. Additionally, the older group showed decreased differentiation of pupillary responses between target and distractor events. This pattern is consistent with potential compensatory LC activity in midlife that is diminished in old age, resulting in decreased adaptive gain. 
Beyond regulating responses to light, pupillary dynamics reveal a nonlinear capacity for neurally mediated gain across the lifespan, thus providing evidence in support of the LC adaptive gain hypothesis. |
Mario Reutter; Matthias Gamer Individual patterns of visual exploration predict the extent of fear generalization in humans Journal Article In: Emotion, vol. 23, no. 5, pp. 1267–1280, 2023. @article{Reutter2023, Generalization of fear is considered an important mechanism contributing to the etiology and maintenance of anxiety disorders. Although previous studies have identified the importance of stimulus discrimination for fear generalization, it is still unclear to what degree overt attention to relevant stimulus features might mediate its magnitude. To test the prediction that visual preferences for distinguishing stimulus aspects are associated with reduced fear generalization, we developed a set of facial stimuli that was meticulously manipulated such that pairs of faces could either be distinguished by looking into the eyes or into the region around mouth and nose, respectively. These pairs were then employed as CS+ and CS− in a differential fear conditioning paradigm followed by a generalization test with morphs in steps of 20%. Shock expectancy ratings indicated a moderately curved fear generalization gradient that is typical for healthy samples, but its shape was altered depending on individual attentional deployment: Participants who dwelled on the distinguishing facial features faster and for longer periods of time exhibited less fear generalization. Although both pupil and heart rate responses also showed a generalization gradient, with pupil diameter and heart rate deceleration increasing as a function of threat, these responses were not significantly related to visual exploration. In total, the current results indicate that the extent of explicit fear generalization depends on individual patterns of attentional deployment. Future studies evaluating the efficacy of perceptual trainings that aim to augment stimulus discriminability in order to reduce (over)generalization seem desirable. |
Anja Rettig; Ulrich Schiefele Relations between reading motivation and reading efficiency—evidence from a longitudinal eye-tracking study Journal Article In: Reading Research Quarterly, vol. 58, no. 4, pp. 685–709, 2023. @article{Rettig2023, Studies on the relation between children's reading motivation and early developmental stages of reading competence are rare and have neglected on-line measures of reading skill (e.g., eye movements indicating word decoding). For this reason, we investigated the effects of intrinsic and extrinsic reading motivation on the efficiency of reading processes based on eye-movement data. Moreover, we examined reading efficiency as a mediator of the relation between motivation and comprehension. German elementary school students in Grades 1–3 (N = 131) were tested on three measurement occasions. Specifically, we assessed reading motivation, reading amount, and sentence comprehension at Time 1, reading efficiency at Time 2 (2 months after Time 1), and all of the variables again at Time 3 (10 months after Time 2). Reading efficiency was assessed while children read age-appropriate sentences and comprised measures of first-fixation duration, gaze duration, total reading time, forward-saccade length, and refixation probability. Linear and cross-lagged panel models showed significant favorable relations between intrinsic reading motivation (operationalized as involvement and enjoyment of reading), but not extrinsic reading motivation (operationalized as striving to outperform one's peers), and most measures of reading efficiency, while controlling for gender, grade level, and reading amount. The reverse effects of reading-efficiency indicators on intrinsic reading motivation were all significant. Moreover, the test of the mediation model revealed a significant indirect effect of Time 1 intrinsic reading motivation on Time 3 sentence comprehension mediated by Time 2 reading efficiency. 
We concluded that intrinsic reading motivation, in contrast to extrinsic reading motivation, facilitates reading comprehension through its effect on reading efficiency, independent of variations in reading amount. |
Thomas R. Reppert; Richard P. Heitz; Jeffrey D. Schall Neural mechanisms for executive control of speed-accuracy trade-off Journal Article In: Cell Reports, vol. 42, no. 11, pp. 1–18, 2023. @article{Reppert2023, The medial frontal cortex (MFC) plays an important but disputed role in speed-accuracy trade-off (SAT). In samples of neural spiking in the supplementary eye field (SEF) in the MFC simultaneous with the visuomotor frontal eye field and superior colliculus in macaques performing a visual search with instructed SAT, during accuracy emphasis, most SEF neurons discharge less from before stimulus presentation until response generation. Discharge rates adjust immediately and simultaneously across structures upon SAT cue changes. SEF neurons signal choice errors with stronger and earlier activity during accuracy emphasis. Other neurons signal timing errors, covarying with adjusting response time. Spike correlations between neurons in the SEF and visuomotor areas did not appear, disappear, or change sign across SAT conditions or trial outcomes. These results clarify findings with noninvasive measures, complement previous neurophysiological findings, and endorse the role of the MFC as a critic for the actor instantiated in visuomotor structures. |
Eyal M. Reingold; Heather Sheridan Chess expertise reflects domain-specific perceptual processing: Evidence from eye movements Journal Article In: Journal of Expertise, vol. 6, no. 1, pp. 1–18, 2023. @article{Reingold2023, The remarkably efficient performance of chess experts reflects extensive practice with domain-related visual configurations. To study the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during the performance of a novel double-check detection task. Chess players viewed an array of six minimized chessboards (4 x 4 squares), with each board displaying a king and 2 attackers. Players rapidly searched for the target board containing a double-check among distractor boards which either displayed a single check or displayed no check. During each fixation, chess pieces were only visible within the fixated board, while all other boards were replaced by empty boards. On half the trials, chess pieces were represented using the familiar symbol notation, while on the other half of the trials, pieces were represented using an unfamiliar letter notation. The analysis of overall response times and several fine-grained eye movement measures indicated that in trials using the familiar symbol notation, experts were much faster at identifying the double-check board, and this advantage was substantially attenuated in trials using the unfamiliar letter notation. In addition, an ex-Gaussian distributional analysis documented similar expertise by notation interactions. We discuss the implications of the present findings for theories of visual expertise in general, and skilled performance in chess, in particular. |
Gwendolyn Rehrig; Taylor R. Hayes; John M. Henderson; Fernanda Ferreira Visual attention during seeing for speaking in healthy aging Journal Article In: Psychology and Aging, vol. 38, no. 1, pp. 1–18, 2023. @article{Rehrig2023, As we age, we accumulate a wealth of information, but cognitive processing becomes slower and less efficient. There is mixed evidence on whether world knowledge compensates for age-related cognitive decline (Umanath & Marsh, 2014). We investigated whether older adults are more likely to fixate more meaningful scene locations than are young adults. Young (N=30) and older adults (N=30, aged 66–82) described scenes while eye movements and descriptions were recorded. We used a logistic mixed-effects model to determine whether fixated scene locations differed in meaning, salience, and center distance from locations that were not fixated, and whether those properties differed for locations young and older adults fixated. Meaning predicted fixated locations well overall, though the locations older adults fixated were less meaningful than those that young adults fixated. These results suggest that older adults' visual attention is less sensitive to meaning than young adults', despite extensive experience with scenes. |
Maimu Alissa Rehbein; Thomas Kroker; Constantin Winker; Lena Ziehfreund; Anna Reschke; Jens Bölte; Miroslaw Wyczesany; Kati Roesmann; Ida Wessing; Markus Junghöfer Non-invasive stimulation reveals ventromedial prefrontal cortex function in reward prediction and reward processing Journal Article In: Frontiers in Neuroscience, vol. 17, pp. 1–22, 2023. @article{Rehbein2023, Introduction: Studies suggest an involvement of the ventromedial prefrontal cortex (vmPFC) in reward prediction and processing, with reward-based learning relying on neural activity in response to unpredicted rewards or non-rewards (reward prediction error, RPE). Here, we investigated the causal role of the vmPFC in reward prediction, processing, and RPE signaling by transiently modulating vmPFC excitability using transcranial Direct Current Stimulation (tDCS). Methods: Participants received excitatory or inhibitory tDCS of the vmPFC before completing a gambling task, in which cues signaled varying reward probabilities and symbols provided feedback on monetary gain or loss. We collected self-reported and evaluative data on reward prediction and processing. In addition, cue-locked and feedback-locked neural activity via magnetoencephalography (MEG) and pupil diameter using eye-tracking were recorded. Results: Regarding reward prediction (cue-locked analysis), vmPFC excitation (versus inhibition) resulted in increased prefrontal activation preceding loss predictions, increased pupil dilations, and tentatively more optimistic reward predictions. Regarding reward processing (feedback-locked analysis), vmPFC excitation (versus inhibition) resulted in increased pleasantness, increased vmPFC activation, especially for unpredicted gains (i.e., gain RPEs), decreased perseveration in choice behavior after negative feedback, and increased pupil dilations. Discussion: Our results support the pivotal role of the vmPFC in reward prediction and processing. 
Furthermore, they suggest that transient vmPFC excitation via tDCS induces a positive bias into the reward system that leads to enhanced anticipation and appraisal of positive outcomes and improves reward-based learning, as indicated by greater behavioral flexibility after losses and unpredicted outcomes, which can be seen as an improved reaction to the received feedback. |
Lukas Recker; Christian H. Poth Test–retest reliability of eye tracking measures in a computerized Trail Making Test Journal Article In: Journal of Vision, vol. 23, no. 8, pp. 1–17, 2023. @article{Recker2023, The Trail Making Test (TMT) is a frequently applied neuropsychological test that evaluates participants' executive functions based on their time to connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B). Test performance is associated with various cognitive functions ranging from visuomotor speed to working memory capabilities. However, although the test can screen for impaired executive functioning in a variety of neuropsychiatric disorders, it provides only little information about which specific cognitive impairments underlie performance detriments. To resolve this lack of specificity, recent cognitive research combined the TMT with eye tracking so that eye movements could help uncover reasons for performance impairments. However, using eye-tracking-based test scores to examine differences between persons, and ultimately apply the scores for diagnostics, presupposes that the reliability of the scores is established. Therefore, we investigated the test–retest reliabilities of scores in an eye-tracking version of the TMT recently introduced by Recker et al. (2022). We examined two healthy samples performing an initial test and then a retest 3 days (n = 31) or 10 to 30 days (n = 34) later. Results reveal that, although reliabilities of classic completion times were overall good, comparable with earlier versions, reliabilities of eye-tracking-based scores ranged from excellent (e.g., durations of fixations) to poor (e.g., number of fixations guiding manual responses). 
These findings indicate that some eye-tracking measures offer a strong basis for assessing interindividual differences beyond classic behavioral measures when examining processes related to information accumulation, but are less suitable to diagnose differences in eye–hand coordination. |
Daniele Re; Golan Karvat; Ayelet N. Landau Attentional sampling between eye channels Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 8, pp. 1350–1360, 2023. @article{Re2023, Our ability to detect targets in the environment fluctuates in time. When individuals focus attention on a single location, the ongoing temporal structure of performance fluctuates at 8 Hz. When task demands require the distribution of attention over two objects defined by their location, color or motion direction, ongoing performance fluctuates at 4 Hz per object. This suggests that distributing attention entails the division of the sampling process found for focused attention. It is unknown, however, at what stage of the processing hierarchy this sampling occurs, and whether attentional sampling depends on awareness. Here, we show that unaware selection between the two eyes leads to rhythmic sampling. We presented a display with a single central object to both eyes and manipulated the presentation of a reset event (i.e., cue) and a detection target to either both eyes (binocular) or separately to the different eyes (monocular). We assume that presenting a cue to one eye biases the selection process to content presented in that eye. Although participants were unaware of this manipulation, target detection fluctuated at 8 Hz under the binocular condition, and at 4 Hz when the right (and dominant) eye was cued. These results are consistent with recent findings reporting that competition between receptive fields leads to attentional sampling and demonstrate that this competition does not rely on aware processes. Furthermore, attentional sampling occurs at an early site of competition among monocular channels, before they are fused in the primary visual cortex. |
Isabel Raposo; Sara M. Szczepanski; Kathleen Haaland; Tor Endestad; Anne Kristin Solbakk; Robert T. Knight; Randolph F. Helfrich Periodic attention deficits after frontoparietal lesions provide causal evidence for rhythmic attentional sampling Journal Article In: Current Biology, vol. 33, no. 22, pp. 4893–4904, 2023. @article{Raposo2023, Contemporary models conceptualize spatial attention as a blinking spotlight that sequentially samples visual space. Hence, behavior fluctuates over time, even in states of presumed “sustained” attention. Recent evidence has suggested that rhythmic neural activity in the frontoparietal network constitutes the functional basis of rhythmic attentional sampling. However, causal evidence to support this notion remains absent. Using a lateralized spatial attention task, we addressed this issue in patients with focal lesions in the frontoparietal attention network. Our results revealed that frontoparietal lesions introduce periodic attention deficits, i.e., temporally specific behavioral deficits that are aligned with the underlying neural oscillations. Attention-guided perceptual sensitivity was on par with that of healthy controls during optimal phases but was attenuated during the less excitable sub-cycles. Theta-dependent sampling (3–8 Hz) was causally dependent on the prefrontal cortex, while high-alpha/low-beta sampling (8–14 Hz) emerged from parietal areas. Collectively, our findings reveal that lesion-induced high-amplitude, low-frequency brain activity is not epiphenomenal but has immediate behavioral consequences. More generally, these results provide causal evidence for the hypothesis that the functional architecture of attention is inherently rhythmic. |
Rajani Raman; Anna Bognár; Ghazaleh Ghamkhari Nejad; Nick Taubert; Martin Giese; Rufin Vogels Bodies in motion: Unraveling the distinct roles of motion and shape in dynamic body responses in the temporal cortex Journal Article In: Cell Reports, vol. 42, no. 12, pp. 1–20, 2023. @article{Raman2023, The temporal cortex represents social stimuli, including bodies. We examine and compare the contributions of dynamic and static features to the single-unit responses to moving monkey bodies in and between a patch in the anterior dorsal bank of the superior temporal sulcus (dorsal patch [DP]) and patches in the anterior inferotemporal cortex (ventral patch [VP]), using fMRI guidance in macaques. The response to dynamics varies within both regions, being higher in DP. The dynamic body selectivity of VP neurons correlates with static features derived from convolutional neural networks and motion. DP neurons' dynamic body selectivity is not predicted by static features but is dominated by motion. Whereas these data support the dominance of motion in the newly proposed “dynamic social perception” stream, they challenge the traditional view that distinguishes DP and VP processing in terms of motion versus static features, underscoring the role of inferotemporal neurons in representing body dynamics. |
Masih Rahmati; Clayton E. Curtis; Kartik K. Sreenivasan Mnemonic representations in human lateral geniculate nucleus Journal Article In: Frontiers in Behavioral Neuroscience, vol. 17, pp. 1–11, 2023. @article{Rahmati2023, There is a growing appreciation for the role of the thalamus in high-level cognition. Motivated by findings that internal cognitive state drives activity in feedback layers of primary visual cortex (V1) that target the lateral geniculate nucleus (LGN), we investigated the role of LGN in working memory (WM). Specifically, we leveraged model-based neuroimaging approaches to test the hypothesis that human LGN encodes information about spatial locations temporarily encoded in WM. First, we localized and derived a detailed topographic organization in LGN that accords well with previous findings in humans and non-human primates. Next, we used models constructed on the spatial preferences of LGN populations in order to reconstruct spatial locations stored in WM as subjects performed modified memory-guided saccade tasks. We found that population LGN activity faithfully encoded the spatial locations held in memory in all subjects. Importantly, our tasks and models allowed us to dissociate the locations of retinal stimulation and the motor metrics of memory-guided saccades from the maintained spatial locations, thus confirming that human LGN represents true WM information. These findings add LGN to the growing list of subcortical regions involved in WM, and suggest a key pathway by which memories may influence incoming processing at the earliest levels of the visual hierarchy. |
Aida Rahavi; Manuela Malaspina; Andrea Albonico; Jason J. S. Barton “Looking at nothing”: An implicit ocular motor index of face recognition in developmental prosopagnosia Journal Article In: Cognitive Neuropsychology, vol. 40, no. 2, pp. 59–70, 2023. @article{Rahavi2023, Subjects often look towards the previous location of a stimulus related to a task even when that stimulus is no longer visible. In this study we asked whether this effect would be preserved or reduced in subjects with developmental prosopagnosia. Participants learned faces presented in video-clips and then saw a brief montage of four faces, which was replaced by a screen with empty boxes, at which time they indicated whether the learned face had been present in the montage. Control subjects were more likely to look at the blank location where the learned face had appeared, on both hit and miss trials, though the effect was larger on hit trials. Prosopagnosic subjects showed a reduced effect, though still better on hit than on miss trials. We conclude that explicit accuracy and our implicit looking at nothing effect are parallel effects reflecting the strength of the neural activity underlying face recognition. |
Adam W. Qureshi; Rebecca L. Monk; Shelby Quinn; Bethan Gannon; Kayleigh McNally; Derek Heim Catching a smile from individuals and crowds: Evidence for distinct emotional contagion processes Journal Article In: Journal of Personality and Social Psychology, pp. 1–21, 2023. @article{Qureshi2023, Research examining how crowd emotions impact observers usually requires participants to engage in an atypical mental process whereby (static) arrays of individuals are cognitively integrated to represent a crowd. The present work sought to extend our understanding of how crowd emotions may spread to individuals by assessing self-reported emotions, attention and muscle movement in response to emotions of dynamic, virtually modeled crowd stimuli. Self-reported emotions and attention from thirty-six participants were assessed when foreground and background crowd characters exhibited homogeneous (Study 1) or heterogeneous (Study 2) positive, neutral, or negative emotions. Results suggested that affective responses in observers are shaped by crowd emotions even in the absence of direct attention. Thirty-four participants supplied self-report and facial electromyography responses to the same homogeneous (Study 3) or heterogeneous (Study 4) crowd stimuli. Results indicated that positive crowd emotions appeared to exert greater attentional pull and objective responses, while negative crowd emotions also elicited affective responses. Study 5 (n = 67) introduced a control condition (stimuli containing an individual person) to examine if responses are unique to crowds and found that emotional contagion from crowds was more intense than from individuals. These studies present methodological advances in the study of crowd emotional contagion and have implications for our broader understanding of how people process, attend, and affectively respond to crowds. 
Advancing theory by suggesting that emotional contagion from crowds is distinct from that elicited by individuals, findings may have applications for refining crowd management approaches. |
Zeguo Qiu; Dihua Wu; Benjamin J. Muehlebach Differential modulation on neural activity related to flankers during face processing: A visual crowding study Journal Article In: Neuroscience Letters, vol. 815, p. 137496, 2023. @article{Qiu2023a, In this visual crowding study, we manipulated the perceivability of a central crowded face (a fearful or a neutral face) by varying the similarity between the central face and the surrounding flanker stimuli. We presented participants with pairs of visual clutters and recorded their electroencephalography during an emotion judgement task. In an upright flanker condition where both the central target face and flanker faces were upright faces, participants were less likely to report seeing the target face, and their P300 was weakened, compared to a scrambled flanker condition where scrambled face images were used as flankers. Additionally, at ∼120 ms post-stimulus, a posterior negativity was found for the upright compared to scrambled flanker condition, however only for fearful face targets. We concluded that early neural responses seem to be affected by the perceptual characteristics of both target and flanker stimuli whereas later-stage neural activity is associated with post-perceptual evaluation of the stimuli in this visual crowding paradigm. |
Zeguo Qiu; Stefanie I. Becker; Hongfeng Xia; Zachary Hamblin-Frohman; Alan J. Pegna Fixation-related electrical potentials during a free visual search task reveal the timing of visual awareness Journal Article In: iScience, vol. 26, no. 7, pp. 1–17, 2023. @article{Qiu2023, It has been repeatedly claimed that emotional faces readily capture attention, and that they may be processed without awareness. Yet some observations cast doubt on these assertions. Part of the problem may lie in the experimental paradigms employed. Here, we used a free viewing visual search task during electroencephalographic recordings, where participants searched for either fearful or neutral facial expressions among distractor expressions. Fixation-related potentials were computed for fearful and neutral targets and the response compared for stimuli consciously reported or not. We showed that awareness was associated with an electrophysiological negativity starting at around 110 ms, while emotional expressions were distinguished on the N170 and early posterior negativity only when stimuli were consciously reported. These results suggest that during unconstrained visual search, the earliest electrical correlate of awareness may emerge as early as 110 ms, and fixating at an emotional face without reporting it may not produce any unconscious processing. |
Nan Qin; Francesca Crespi; Alice Mado Proverbio; Gilles Pourtois A systematic exploration of attentional load effects on the C1 ERP component Journal Article In: Psychophysiology, pp. 1–30, 2023. @article{Qin2023, The C1 ERP component reflects the earliest visual processing in V1. However, it remains debated whether attentional load can influence it or not. We conducted two EEG experiments to investigate the effect of attentional load on the C1. Task difficulty was manipulated at fixation using an oddball detection task that was either easy (low load) or difficult (high load), while the distractor was presented in the upper visual field (UVF) to score the C1. In Experiment 1, we used a block design and the stimulus onset asynchrony (SOA) between the central stimulus and the peripheral distractor was either short or long. In Experiment 2, task difficulty was manipulated on a trial-by-trial basis using a visual cue, and the peripheral distractor was presented either before or after the central stimulus. The results showed that the C1 was larger in the high compared to the low load condition irrespective of SOA in Experiment 1. In Experiment 2, no significant load modulation of the C1 was observed. However, we found that the contingent negative variation (CNV) was larger in the low compared to the high load condition. Moreover, the C1 was larger when the peripheral distractor was presented after than before the central stimulus. Combined together, these results suggest that different top-down control processes can influence the initial feedforward stage of visual processing in V1 captured by the C1 ERP component. |
Minglang Qiao; Yufan Liu; Mai Xu; Xin Deng; Bing Li; Weiming Hu; Ali Borji Joint learning of audio–visual saliency prediction and sound source localization on multi-face videos Journal Article In: International Journal of Computer Vision, pp. 1–23, 2023. @article{Qiao2023, Visual and audio events simultaneously occur and both attract attention. However, most existing saliency prediction works ignore the influence of audio and only consider vision modality. In this paper, we propose a multi-task learning method for audio–visual saliency prediction and sound source localization on multi-face video by leveraging visual, audio and face information. Specifically, we first introduce a large-scale database of multi-face video in visual-audio condition, containing eye-tracking data and sound source annotations. Using this database, we find that sound influences human attention, and conversely attention offers a cue to determine sound source on multi-face video. Guided by these findings, an audio–visual multi-task network (AVM-Net) is introduced to predict saliency and locate sound source. AVM-Net consists of three branches corresponding to visual, audio and face modalities. The visual branch has a two-stream architecture to capture spatial and temporal information. Face and audio branches encode audio signals and faces, respectively. Finally, a spatio-temporal multimodal graph is constructed to model the interaction among multiple faces. With joint optimization of these branches, the intrinsic correlation of the tasks of saliency prediction and sound source localization is utilized and their performance is boosted by each other. Experiments show that the proposed method outperforms 12 state-of-the-art saliency prediction methods, and achieves competitive results in sound source localization. |
Linze Qian; Xianliang Ge; Zhao Feng; Sujie Wang; Jingjia Yuan; Yunxian Pan; Hongqi Shi; Jie Xu; Yu Sun Brain network reorganization during visual search task revealed by a network analysis of fixation-related potential Journal Article In: IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 1219–1229, 2023. @article{Qian2023, Visual search is ubiquitous in daily life and has attracted substantial research interest over the past decades. Although accumulating evidence has suggested complex neurocognitive processes underlying visual search, the neural communication across the brain regions remains poorly understood. The present work aimed to fill this gap by investigating functional networks of fixation-related potential (FRP) during the visual search task. Multi-frequency electroencephalogram (EEG) networks were constructed from 70 university students (male/female = 35/35) using FRPs time-locked to target and non-target fixation onsets, which were determined by concurrent eye-tracking data. Then graph theoretical analysis (GTA) and a data-driven classification framework were employed to quantitatively reveal the divergent reorganization between target and non-target FRPs. We found distinct network architectures between target and non-target mainly in the delta and theta bands. More importantly, we achieved a classification accuracy of 92.74% for target and non-target discrimination using both global and nodal network features. In line with the results of GTA, we found that the integration corresponding to target and non-target FRPs significantly differed, while the nodal features contributing most to classification performance primarily resided in the occipital and parietal-temporal areas. Interestingly, we revealed that females exhibited significantly higher local efficiency in the delta band when focusing on the search task. In summary, these results provide some of the first quantitative insights into the underlying brain interaction patterns during the visual search process. |
Philip T. Putnam; Cheng Chi J. Chu; Nicholas A. Fagan; Olga Dal Monte; Steve W. C. Chang Dissociation of vicarious and experienced rewards by coupling frequency within the same neural pathway Journal Article In: Neuron, vol. 111, no. 16, pp. 2513–2522, 2023. @article{Putnam2023, Vicarious reward, essential to social learning and decision making, is theorized to engage select brain regions similarly to experienced reward to generate a shared experience. However, it is just as important for neural systems to also differentiate vicarious from experienced rewards for social interaction. Here, we investigated the neuronal interaction between the primate anterior cingulate cortex gyrus (ACCg) and the basolateral amygdala (BLA) when social choices made by monkeys led to either vicarious or experienced reward. Coherence between ACCg spikes and BLA local field potential (LFP) selectively increased in gamma frequencies for vicarious reward, whereas it selectively increased in alpha/beta frequencies for experienced reward. These respectively enhanced couplings for vicarious and experienced rewards were uniquely observed following voluntary choices. Moreover, reward outcomes had consistently strong directional influences from ACCg to BLA. Our findings support a mechanism of vicarious reward where social agency is tagged by interareal coordination frequency within the same shared pathway. |
Vesa Putkinen; Sanaz Nazari-Farsani; Tomi Karjalainen; Severi Santavirta; Matthew Hudson; Kerttu Seppälä; Lihua Sun; Henry K. Karlsson; Jussi Hirvonen; Lauri Nummenmaa Pattern recognition reveals sex-dependent neural substrates of sexual perception Journal Article In: Human Brain Mapping, vol. 44, no. 6, pp. 2543–2556, 2023. @article{Putkinen2023, Sex differences in brain activity evoked by sexual stimuli remain elusive despite robust evidence for stronger enjoyment of and interest toward sexual stimuli in men than in women. To test whether visual sexual stimuli evoke different brain activity patterns in men and women, we measured hemodynamic brain activity induced by visual sexual stimuli in two experiments with 91 subjects (46 males). In one experiment, the subjects viewed sexual and nonsexual film clips, and dynamic annotations for nudity in the clips were used to predict hemodynamic activity. In the second experiment, the subjects viewed sexual and nonsexual pictures in an event-related design. Men showed stronger activation than women in the visual and prefrontal cortices and dorsal attention network in both experiments. Furthermore, using multivariate pattern classification we could accurately predict the sex of the subject on the basis of the brain activity elicited by the sexual stimuli. The classification generalized across the experiments indicating that the sex differences were task-independent. Eye tracking data obtained from an independent sample of subjects (N = 110) showed that men looked longer than women at the chest area of the nude female actors in the film clips. These results indicate that visual sexual stimuli evoke discernible brain activity patterns in men and women which may reflect stronger attentional engagement with sexual stimuli in men. |
Zoe A. Purcell; Colin A. Wastell; Naomi Sweller Eye movements reveal that low confidence precedes deliberation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 7, pp. 1539–1546, 2023. @article{Purcell2023a, Contemporary dual-process models of reasoning maintain that there are two types of thinking – intuitive and deliberative – and that low confidence often leads to deliberation. Previous studies examining the confidence–deliberation relationship have been limited by (1) issues of endogeneity and between-subject comparisons, which we address in this study through debias training and (2) measures of confidence that are taken relatively late in the reasoning process, which we address by measuring confidence via real-time eye-tracking. Self-reported and eye-tracked confidence were both negatively related to deliberative thinking. This finding provides new evidence of the timecourse of the confidence–deliberation relationship and reveals that lowered confidence precedes deliberation. |
Zoe A. Purcell; Andrew J. Roberts; Simon J. Handley; Stephanie Howarth Eye movements, pupil dilation, and conflict detection in reasoning: Exploring the evidence for intuitive logic Journal Article In: Cognitive Science, vol. 47, no. 6, pp. 1–18, 2023. @article{Purcell2023, A controversial claim in recent dual process accounts of reasoning is that intuitive processes not only lead to bias but are also sensitive to the logical status of an argument. The intuitive logic hypothesis draws upon evidence that reasoners take longer and are less confident on belief–logic conflict problems, irrespective of whether they give the correct logical response. In this paper, we examine conflict detection under conditions in which participants are asked to either judge the logical validity or believability of a presented conclusion, accompanied by measures of eye movement and pupil dilation. The findings show an effect of conflict, under both types of instruction, on accuracy, latency, gaze shifts, and pupil dilation. Importantly, these effects extend to conflict trials in which participants give a belief-based response (incorrectly under logic instructions or correctly under belief instructions) demonstrating both behavioral and physiological evidence in support of the logical intuition hypothesis. |
Eva Puimège; Maribel Montero Perez; Elke Peters Promoting L2 acquisition of multiword units through textually enhanced audiovisual input: An eye-tracking study Journal Article In: Second Language Research, vol. 39, no. 2, pp. 471–492, 2023. @article{Puimege2023a, This study examines the effect of textual enhancement on learners' attention to and learning of multiword units from captioned audiovisual input. We adopted a within-participants design in which 28 learners of English as a foreign language (EFL) watched a captioned video containing enhanced (underlined) and unenhanced multiword units. Using eye-tracking, we measured learners' online processing of the multiword units as they appeared in the captions. Form recall pre- and posttests measured learners' acquisition of the target items. The results of mixed effects models indicate that enhanced items received greater visual attention, with longer reading times, less single word skipping and more rereading. Further, a positive relationship was found between amount of visual attention and learning odds: items fixated longer, particularly during the first pass, were more likely to be recalled in an immediate posttest. Our findings provide empirical support for the positive effect of visual attention on form recall of multiword units encountered in captioned television. The results also suggest that item difficulty and amount of attention were more important than textual enhancement in predicting learning gains. |
Eva Puimège; Maribel Montero Perez; Elke Peters The effects of typographic enhancement on L2 collocation processing and learning from reading: An eye-tracking study Journal Article In: Applied Linguistics, pp. 1–24, 2023. @article{Puimege2023, This study examined the effects of typographic enhancement (TE) on online processing of collocations during reading and on L2 collocation knowledge. The eye-tracking results indicate that the initial attention-enhancing effect of TE did not carry over to later, unenhanced exposures. Results of post-experiment interviews suggested that learners' primary focus was on meaning comprehension and that TE did not induce conscious attention to the form of the target collocations. One week after the treatment, participants could recognize the correct form of target collocations, but they could not productively recall most of them. We conclude that a single enhanced exposure does not necessarily affect learners' memory of collocations, or their processing of those collocations in later exposures. The development of L2 collocation knowledge may require a large amount of exposure in purely incidental contexts. |
Sophia Antonia Press; Stefanie C. Biehl; Gregor Domes; Jennifer Svaldi Increased insula and amygdala activity during selective attention for negatively valenced body parts in binge eating disorder Journal Article In: Journal of Psychopathology and Clinical Science, vol. 132, no. 1, pp. 63–77, 2023. @article{Press2023, Previous studies indicate that participants with eating disorders show an attentional bias for the negatively valenced body parts of their own body. However, the neural basis underlying these processes has not been investigated. We conducted a preregistered combined functional MRI (fMRI)/eye tracking study and presented 35 women with binge eating disorder (BED) and 24 weight-matched control subjects (CG) with body part images of their own body and a weight-matched unknown body. After the fMRI examination, participants rated the attractiveness of the presented body parts. As expected, women with BED responded with significantly higher insula and amygdala activity when viewing the negatively valenced body parts of their own body (compared to all other combinations). However, individuals with BED did not deviate from the CG in the processing of these stimuli in the ventromedial prefrontal cortex, the extrastriate body area or the fusiform body area. Our results indicate that the negatively valenced body parts carry a particularly strong emotional valence in individuals with BED. These results further emphasize the relevance of processing bias for negatively valenced body parts in the pathology of BED. |
Sabina Poudel; Jianzhong Jin; Hamed Rahimi-Nasrabadi; Stephen Dellostritto; Mitchell W. Dul; Suresh Viswanathan; Jose-Manuel Alonso Contrast sensitivity of ON and OFF human retinal pathways in myopia Journal Article In: The Journal of Neuroscience, vol. 44, no. 3, pp. 1–16, 2023. @article{Poudel2023, The human visual cortex processes light and dark stimuli with ON and OFF pathways that are differently modulated by luminance contrast. We have previously demonstrated that ON cortical pathways have higher contrast sensitivity than OFF cortical pathways and the difference increases with luminance range (defined as the maximum minus minimum luminance in the scene). Here, we demonstrate that these ON-OFF cortical differences are already present in the human retina and that retinal responses measured with electroretinography are more affected by reductions in luminance range than cortical responses measured with electroencephalography. Moreover, we show that ON-OFF pathway differences measured with electroretinography become more pronounced in myopia, a visual disorder that elongates the eye and blurs vision at far distance. We find that, as the eye axial length increases across subjects, ON retinal pathways become less responsive, slower in response latency, less sensitive, and less effective and slower at driving pupil constriction. Based on these results, we conclude that myopia is associated with a deficit in ON pathway function that decreases the ability of the retina to process low contrast and regulate retinal illuminance in bright environments. Significance Statement: Contrast sensitivity is an important visual function that allows discriminating faint visual targets slightly lighter or darker than the background. We have previously demonstrated that ON and OFF cortical pathways signaling light and dark stimuli have different contrast sensitivity and the difference increases with luminance range. Here, we demonstrate that these ON-OFF sensitivity differences are inherited from the retina and are affected by myopia (nearsightedness), a visual disorder that blurs vision at far distances and is becoming a world epidemic. We show that myopia is associated with a retinal deficit that makes ON pathways less effective at signaling contrast and regulating retinal illuminance. These results could have clinical implications and may lead to novel approaches for myopia control. |
G. V. Portnova; K. M. Liaukovich; L. N. Vasilieva; E. I. Alshanskaia Autonomic and behavioral indicators on increased cognitive loading in healthy volunteers Journal Article In: Neuroscience and Behavioral Physiology, vol. 53, no. 1, pp. 92–102, 2023. @article{Portnova2023, Cognitive and emotional loading during increases in task difficulty leads to activation of various parts of the autonomic nervous system; it can be accompanied by an increase in problem-solving efficiency, but it may also contribute to destabilization of emotional status and decreases in productivity. An increase in cognitive loading in conditions of high motivation of subjects constitutes a stress factor and is expressed in various reactions of the sympathetic and parasympathetic compartments in response to loading. The aim of the present work was to study the features of various autonomic reactions to gradually increasing task difficulty, which included recording pupil area and the number of blinks, as well as the frequency of respiratory movements, measures of heart rate variability, and galvanic skin responses. Ten healthy volunteers took part in the study. The experimental paradigm included six levels of task difficulty requiring the active participation of working memory and attention. Increases in task difficulty from the first level to the sixth led to a gradual increase in pupil area and the number of blinks, which we suggest corresponds to an increase in sympathetic nervous system activation. Linear changes in the autonomic parameters of the respiratory and cardiovascular systems, as well as the electrical activity of the skin, were observed only up to the third level of difficulty. Further increases in difficulty led to opposite changes in these indicators and were accompanied by decreases in problem-solving efficiency. A more marked change in the galvanic skin response during problem-solving correlated with a decrease in mood after the study, indirectly indicating a higher level of emotional stress. |
Brendan L. Portengen; Giorgio L. Porro; Saskia M. Imhof; Marnix Naber The trade-off between luminance and color contrast assessed with pupil responses Journal Article In: Translational Vision Science & Technology, vol. 12, no. 1, pp. 19–25, 2023. @article{Portengen2023, Purpose: A scene consisting of a white stimulus on a black background incorporates strong luminance contrast. When both stimulus and background receive different colors, luminance contrast decreases but color contrast increases. Here, we sought to characterize the pattern of stimulus salience across varying trade-offs of color and luminance contrasts by using the pupil light response. Methods: Three experiments were conducted with 17, 16, and 17 healthy adults. For all experiments, a flickering stimulus (2 Hz; alternating color to black) was presented superimposed on a background with a complementary color to the stimulus (i.e., opponency colors in human color perception: blue and yellow for Experiment 1, red and green for Experiment 2, and equiluminant red and green for Experiment 3). Background luminance varied between 0% and 45% to trade off luminance and color contrast with the stimulus. By comparing the locus of the optimal trade-off between color and luminance across different color axes, we explored the generality of the trade-off. Results: The strongest pupil responses were found when a substantial amount of color contrast was present (at the expense of luminance contrast). Pupil response amplitudes increased by 15% to 30% after the addition of color contrast. An optimal pupillary responsiveness was reached at a background luminance setting of 20% to 35% color contrast across several color axes. Conclusions: These findings suggest that a substantial component of pupil light responses incorporates color processing. More sensitive pupil responses and more salient stimulus designs can be achieved by adding subtle levels of color contrast between stimulus and background. Translational Relevance: More robust pupil responses will enhance tests of the visual field with pupil perimetry. |
Brendan L. Portengen; Marnix Naber; Giorgio L. Porro; Douwe Bergsma; Evert J. Veldman; Saskia M. Imhof In: Eye and Brain, vol. 15, pp. 77–89, 2023. @article{Portengen2023a, Purpose: We improve pupillary responses and diagnostic performance of flicker pupil perimetry through alterations in global and local color contrast and luminance contrast in adult patients suffering from visual field defects due to cerebral visual impairment (CVI). Methods: Two experiments were conducted on patients with CVI (Experiment 1: 19 subjects, age M and SD 57.9 ± 14.0; Experiment 2: 16 subjects, age M and SD 57.3 ± 14.7) suffering from absolute homonymous visual field (VF) defects. We altered global color contrast (stimuli consisted of white, yellow, cyan and yellow-equiluminant-to-cyan colored wedges) in Experiment 1, and we manipulated luminance and local color contrast with bright and dark yellow and multicolor wedges in a 2-by-2 design in Experiment 2. Stimuli consecutively flickered across 44 stimulus locations within the inner 60 degrees of the VF and were offset to a contrasting (opponency colored) dark background. Pupil perimetry results were compared to standard automated perimetry (SAP) to assess diagnostic accuracy. Results: A bright stimulus with global color contrast using yellow (p= 0.009) or white (p= 0.006) evoked strongest pupillary responses as opposed to stimuli containing local color contrast and lower brightness. Diagnostic accuracy, however, was similar across global color contrast conditions in Experiment 1 (p= 0.27) and decreased when local color contrast and less luminance contrast was introduced in Experiment 2 (p= 0.02). The bright yellow condition resulted in highest performance (AUC M = 0.85 ± 0.10) |
Dina V. Popovkina; John Palmer; Cathleen M. Moore; Geoffrey M. Boynton Testing hemifield independence for divided attention in visual object tasks Journal Article In: Journal of Vision, vol. 23, no. 13, pp. 1–17, 2023. @article{Popovkina2023, In this study, we asked to what degree hemifields contribute to divided attention effects observed in tasks with object-based judgments. If object recognition processes in the two hemifields were fully independent, then placing stimuli in separate hemifields would eliminate divided attention effects; in the alternative extreme, if object recognition processes in the two hemifields were fully integrated, then placing stimuli in separate hemifields would not modulate divided attention effects. Using a dual-task paradigm, we compared performance in a semantic categorization task for relevant stimuli arranged in the same hemifield to performance for relevant stimuli arranged in separate left and right hemifields. In two experiments, there was a reliable decrease in divided attention effects when stimuli were shown in separate hemifields compared to the same hemifield. However, the effect of divided attention was not eliminated. These results reject both the independent and integrated hypotheses, and instead support a third alternative: that object recognition processes in the two hemifields are partially dependent. More specifically, the magnitude of modulation by hemifields was closer to the prediction of the integrated hypothesis, suggesting that for dual tasks with objects, dependent processing is mostly shared across the visual field. |
Tzvetan Popov; Tobias Staudigl Cortico-ocular coupling in the service of episodic memory formation Journal Article In: Progress in Neurobiology, vol. 227, pp. 1–9, 2023. @article{Popov2023, Encoding of visual information is a necessary requirement for most types of episodic memories. In search for a neural signature of memory formation, amplitude modulation of neural activity has been repeatedly shown to correlate with and suggested to be functionally involved in successful memory encoding. We here report a complementary view on why and how brain activity relates to memory, indicating a functional role of cortico-ocular interactions for episodic memory formation. Recording simultaneous magnetoencephalography and eye tracking in 35 human participants, we demonstrate that gaze variability and amplitude modulations of alpha/beta oscillations (10–20 Hz) in visual cortex covary and predict subsequent memory performance between and within participants. Amplitude variation during pre-stimulus baseline was associated with gaze direction variability, echoing the co-variation observed during scene encoding. We conclude that encoding of visual information engages unison coupling between oculomotor and visual areas in the service of memory formation. |
Tzvetan Popov; Bart Gips; Nathan Weisz; Ole Jensen Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention Journal Article In: Cerebral Cortex, vol. 33, no. 7, pp. 3478–3489, 2023. @article{Popov2023a, Spatially selective modulation of alpha power (8–14 Hz) is a robust finding in electrophysiological studies of visual attention, and has been recently generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible, when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention. |
Eva R. Pool; Wolfgang M. Pauli; Logan Cross; John P. O'Doherty Neural substrates of parallel devaluation-sensitive and devaluation-insensitive Pavlovian learning in humans Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–17, 2023. @article{Pool2023, We aim to differentiate the brain regions involved in the learning and encoding of Pavlovian associations sensitive to changes in outcome value from those that are not sensitive to such changes by combining a learning task with outcome devaluation, eye-tracking, and functional magnetic resonance imaging in humans. Contrary to theoretical expectation, voxels correlating with reward prediction errors in the ventral striatum and subgenual cingulate appear to be sensitive to devaluation. Moreover, regions encoding state prediction errors appear to be devaluation insensitive. We can also distinguish regions encoding predictions about outcome taste identity from predictions about expected spatial location. Regions encoding predictions about taste identity seem devaluation sensitive while those encoding predictions about an outcome's spatial location seem devaluation insensitive. These findings suggest the existence of multiple and distinct associative mechanisms in the brain and help identify putative neural correlates for the parallel expression of both devaluation sensitive and insensitive conditioned behaviors. |
Elie Poncet; Gaelle Nicolas; Nathalie Guyader; Elena Moro; Aurélie Campagne Spatio-temporal attention toward emotional scenes across adulthood Journal Article In: Emotion, vol. 23, no. 6, pp. 1726–1739, 2023. @article{Poncet2023, Research on emotion suggests that the attentional preference observed toward the negative stimuli in young adults tends to disappear in normal aging and, sometimes, to shift toward a preference for positive stimuli. The current eye-tracking study investigated visual exploration of paired natural scenes of different valence (Negative–Neutral, Positive–Neutral, and Negative–Positive pairs) in three age groups (young, middle-aged, and older adults). Two arousal levels of stimuli (high and low arousal) were also considered given the role of this factor in age-related effects on emotion. Results showed that the automatic attentional orienting toward the negative stimuli was relatively preserved in our three age groups although reduced in the elderly, in both arousal conditions. A similar negativity bias was also observed in initial attention focusing but shifted toward a positivity bias over time in the three age groups. Moreover, it appeared that the spatial exploration of emotional scenes evolved over time differently for older adults compared with other age groups. No difference between young adults and middle-aged adults in ocular behavior was observed. This study confirms the interest of studying both spatial and temporal characteristics of oculomotor behaviors to better understand the age-related effects on emotion. |
Stefan Pollmann; Lei Zheng Right-dominant contextual cueing for global configuration cues, but not local position cues Journal Article In: Neuropsychologia, vol. 178, pp. 1–7, 2023. @article{Pollmann2023, Contextual cueing can depend on global configuration or local item position. We investigated the role of these two kinds of cues in the lateralization of contextual cueing effects. Cueing by item position was tested by recombining two previously learned displays, keeping the individual item locations intact, but destroying the global configuration. In contrast, cueing by configuration was investigated by rotating learned displays, thereby keeping the configuration intact but changing all item positions. We observed faster search for targets in the left display half, both for repeated and new displays, along with more first fixation locations on the left. Both position and configuration cues led to faster search, but the search time reduction compared to new displays due to position cues was comparable in the left and right display half. In contrast, configural cues led to increased search time reduction for right half targets. We conclude that only configural cues enabled memory-guided search for targets across the whole search display, whereas position cueing guided search only to targets in the vicinity of the fixation. The right-biased configural cueing effect is a consequence of the initial leftward search bias and does not indicate hemispheric dominance for configural cueing. |
Megan Polden; Trevor J. Crawford Eye movement latency coefficient of variation as a predictor of cognitive impairment: An eye tracking study of cognitive impairment Journal Article In: Vision, vol. 7, no. 2, pp. 1–12, 2023. @article{Polden2023, Studies have demonstrated impairment in the control of saccadic eye movements in Alzheimer's disease (AD) and in people with mild cognitive impairment (MCI) when conducting pro-saccade and antisaccade tasks. Research has shown that changes in the pro- and antisaccade latencies may be particularly sensitive to dementia and general executive functioning. These tasks show potential for diagnostic use, as they provide a rich set of potential eye tracking markers. One such marker, the coefficient of variation (CV), is so far overlooked. For biological markers to be reliable, they must be able to detect abnormalities in preclinical stages. MCI is often viewed as a predecessor to AD, with certain classifications of MCI more likely than others to progress to AD. The current study examined the potential of CV scores on pro- and antisaccade tasks to distinguish participants with AD, amnestic MCI (aMCI), non-amnestic MCI (naMCI), and older controls. The analyses revealed no significant differences in CV scores across the groups using the pro- or antisaccade task. Antisaccade mean latencies were able to distinguish participants with AD and the MCI subgroups. Future research is needed on CV measures and attentional fluctuations in AD and MCI individuals to fully assess this measure's potential to robustly distinguish clinical groups with high sensitivity and specificity. |
Timothy J. Pleskac; Shuli Yu; Sergej Grunevski; Taosheng Liu Attention biases preferential choice by enhancing an option's value Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 4, pp. 993–1010, 2023. @article{Pleskac2023, Does attending to an option lead to liking it? Though attention-induced valuation is often hypothesized, evidence for this causal link has remained elusive. We test this hypothesis across 2 studies by manipulating attention during a preferential decision and its perceptual analog. In a free-viewing task, attention impacted choice and eye movement pattern in the preferential decision more than the perceptual analog. Similarly, in a controlled-viewing task, attention had a larger effect on choice in the preferential decision than its perceptual analog. Across these experimental manipulations of attention, choice and eye-tracking data provide converging evidence that attention enhances value, and computational modeling further supports this attention-induced valuation hypothesis. A possible explanation for our results is a normalization mechanism where attention induces a gain modulation on an option's representation at both the sensory and value processing levels. |
Iván Plaza-Rosales; Enzo Brunetti; Rodrigo Montefusco-Siegmund; Samuel Madariaga; Rodrigo Hafelin; Daniela P. Ponce; María Isabel Behrens; Pedro E. Maldonado; Andrea Paula-Lima Visual-spatial processing impairment in the occipital-frontal connectivity network at early stages of Alzheimer's disease Journal Article In: Frontiers in Aging Neuroscience, vol. 15, pp. 1–14, 2023. @article{PlazaRosales2023, Introduction: Alzheimer's disease (AD) is the leading cause of dementia worldwide, but its pathophysiological phenomena are not fully elucidated. Many neurophysiological markers have been suggested to identify early cognitive impairments of AD. However, the diagnosis of this disease remains a challenge for specialists. In the present cross-sectional study, our objective was to evaluate the manifestations and mechanisms underlying visual-spatial deficits at the early stages of AD. Methods: We combined behavioral, electroencephalography (EEG), and eye movement recordings during the performance of a spatial navigation task (a virtual version of the Morris Water Maze adapted to humans). Participants (69–88 years old) with amnesic mild cognitive impairment–Clinical Dementia Rating scale (aMCI–CDR 0.5) were selected as probable early AD (eAD) by a neurologist specialized in dementia. All patients included in this study were evaluated at the CDR 0.5 stage but progressed to probable AD during clinical follow-up. An equal number of matching healthy controls (HCs) were evaluated while performing the navigation task. Data were collected at the Department of Neurology of the Clinical Hospital of the Universidad de Chile and the Department of Neuroscience of the Faculty of Universidad de Chile. Results: Participants with aMCI preceding AD (eAD) showed impaired spatial learning, and their visual exploration differed from that of the control group. The eAD group did not clearly prefer regions of interest that could guide solving the task, while controls did. 
The eAD group showed decreased visual evoked potentials associated with eye fixations, recorded at occipital electrodes. They also showed an alteration of the spatial spread of activity to parietal and frontal regions at the end of the task. The control group presented marked occipital activity in the beta band (15–20 Hz) at early visual processing times. The eAD group showed a reduction in beta band functional connectivity in the prefrontal cortices, reflecting poor planning of navigation strategies. Discussion: We found that EEG signals combined with visual-spatial navigation analysis yielded early and specific features that may underlie the basis for understanding the loss of functional connectivity in AD. Moreover, our results are clinically promising for the early diagnosis required to improve quality of life and decrease healthcare costs. |
Belinda Platt; Anca Sfärlea; Johanna Löchner; Elske Salemink; Gerd Schulte-Körne The role of cognitive biases and negative life events in predicting later depressive symptoms in children and adolescents Journal Article In: Journal of Experimental Psychopathology, vol. 14, no. 3, pp. 1–16, 2023. @article{Platt2023, Aims: Cognitive models propose that negative cognitive biases in attention (AB) and interpretation (IB) contribute to the onset of depression. This is the first prospective study to test this hypothesis in a sample of youth with no mental disorder. Methods: Participants were 61 youth aged 9–14 years with no mental disorder. At baseline (T1) we measured AB (passive-viewing task), IB (scrambled sentences task) and self-report depressive symptoms. Thirty months later (T2) we measured onset of mental disorder, depressive symptoms and life events (parent- and child-report). The sample included children of parents with (n = 31) and without (n = 30) parental depression. Results: Symptoms of depression at T2 were predicted by IB (ß = .35 |
Rista C. Plate; Tralucia Powell; Rachael Bedford; Tim J. Smith; Ankur Bamezai; Quentin Wedderburn; Alexis Broussard; Natasha Soesanto; Caroline Swetlitz; Rebecca Waller; Nicholas J. Wagner Social threat processing in adults and children: Faster orienting to, but shorter dwell time on, angry faces during visual search Journal Article In: Developmental Science, pp. 1–8, 2023. @article{Plate2023, Attention to emotional signals conveyed by others is critical for gleaning information about potential social partners and the larger social context. Children appear to detect social threats (e.g., angry faces) faster than non-threatening social signals (e.g., neutral faces). However, methods that rely on behavioral responses alone are limited in identifying the different attentional processes involved in threat detection or responding. To address this limitation, we used a visual search paradigm to assess behavioral (i.e., reaction time to select a target image) and attentional (i.e., eye-tracking fixations, saccadic shifts, and dwell time) responses in children (ages 7–10 years old |
Barbara L. Pitts; Michelle L. Eisenberg; Heather R. Bailey; Jeffrey M. Zacks Cueing natural event boundaries improves memory in people with post-traumatic stress disorder Journal Article In: Cognitive Research: Principles and Implications, vol. 8, no. 1, pp. 1–10, 2023. @article{Pitts2023, People with post-traumatic stress disorder (PTSD) often report difficulty remembering information in their everyday lives. Recent findings suggest that such difficulties may be due to PTSD-related deficits in parsing ongoing activity into discrete events, a process called event segmentation. Here, we investigated the causal relationship between event segmentation and memory by cueing event boundaries and evaluating the effect on subsequent memory in people with PTSD. People with PTSD (n = 38) and trauma-matched controls (n = 36) watched and remembered videos of everyday activities that were either unedited, contained visual and auditory cues at event boundaries, or contained visual and auditory cues at event middles. PTSD symptom severity varied substantially within both the group with a PTSD diagnosis and the control group. Memory performance did not differ significantly between groups, but people with high symptoms of PTSD remembered fewer details from the videos than those with lower symptoms of PTSD. Both those with PTSD and controls remembered more information from the videos in the event boundary cue condition than in the middle cue or unedited conditions. This finding has important implications for translational work focusing on addressing everyday memory complaints in people with PTSD. |