EyeLink fMRI / MEG Publications
All EyeLink fMRI and MEG research publications (with concurrent eye tracking) through 2023, along with some early 2024 papers, are listed below by year. You can search the publications using keywords such as Visual Cortex, Neural Plasticity, MEG, etc. You can also search for individual author names. If we have missed any EyeLink fMRI or MEG article, please email us!
2023 |
Cristina Lozano-Argüelles; Nuria Sagarra; Joseph V. Casillas Interpreting experience and working memory effects on L1 and L2 morphological prediction Journal Article In: Frontiers in Language Sciences, vol. 1, pp. 1–16, 2023. @article{LozanoArgueelles2023, The human brain tries to process information as efficiently as possible through mechanisms like prediction. Native speakers predict linguistic information extensively, but L2 learners show variability. Interpreters use prediction while working and research shows that interpreting experience mediates L2 prediction. However, it is unclear whether advantages related to interpreting are due to higher working memory (WM) capacity, a typical characteristic of professional interpreters. To better understand the role of WM during L1 and L2 prediction, English L2 learners of Spanish with and without interpreting experience and Spanish monolinguals completed a visual-world paradigm eye-tracking task and a number-letter sequencing working memory task. The eye-tracking task measured prediction of verbal morphology (present, past) based on suprasegmental information (lexical stress: paroxytone, oxytone) and segmental information (syllabic structure: CV, CVC). Results revealed that WM mediates L1 prediction, such that higher WM facilitates prediction of morphology in monolinguals. However, higher WM hinders prediction in L2 processing for non-interpreters. Interestingly, interpreters behaved similarly to monolinguals, with higher WM facilitating L2 prediction. This study provides further understanding of the variability in L2 prediction. |
Heather D. Lucas; Ana M. Daugherty; Edward McAuley; Arthur F. Kramer; Neal J. Cohen Supplemental material for dynamic interactions between memory and viewing behaviors: Insights from dyadic modeling of eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 6, pp. 786–801, 2023. @article{Lucas2023, Humans use eye movements to build visual memories. We investigated how the contributions of specific viewing behaviors to memory formation evolve over individual study epochs. We used dyadic modeling to explain performance on a spatial reconstruction task based on interactions between two gaze measures: (a) the entropy of the scanpath and (b) the frequency of item-to-item gaze transitions. To measure these interactions, our hypothesized model included causal pathways by which early-trial viewing behaviors impacted subsequent memory via downstream effects on later viewing. We found that lower scanpath entropy throughout the trial predicted better memory performance. By contrast, the effect of item-to-item transition frequency changed from negative to positive as the trial progressed. The model also revealed multiple pathways by which early-trial viewing dynamically altered late-trial viewing, thereby impacting memory indirectly. Finally, individual differences in scores on an independent measure of memory ability were found to predict viewing effectiveness, and viewing behaviors partially mediated the relation between memory ability and reconstruction accuracy. In a second experiment, the model showed a good fit for an independent dataset. These results highlight the dynamic nature of memory formation and suggest that the order in which eye movements occur can critically determine their effectiveness. |
Jiří Lukavský; Hauke S. Meyerhoff Gaze coherence reveals distinct tracking strategies in multiple object and multiple identity tracking Journal Article In: Psychonomic Bulletin & Review, pp. 1–10, 2023. @article{Lukavsky2023, In dynamic environments, a central task of the attentional system is to keep track of objects changing their spatial location over time. In some instances, it is sufficient to track only the spatial locations of moving objects (i.e., multiple object tracking; MOT). In other instances, however, it is also important to maintain distinct identities of moving objects (i.e., multiple identity tracking; MIT). Despite previous research, it is not clear whether MOT and MIT performance emerge from the same tracking mechanism. In the present report, we study gaze coherence (i.e., the extent to which participants repeat their gaze behaviour when tracking the same object locations twice) across repeated MOT and MIT trials. We observed more substantial gaze coherence in repeated MOT trials compared to the repeated MIT trials or mixed MOT-MIT trial pairs. A subsequent simulation study suggests that MOT is based more on a grouping mechanism than MIT, whereas MIT is based more on a target-jumping mechanism than MOT. It thus appears unlikely that MOT and MIT emerge from the same basic tracking mechanism. |
Steven G. Luke; Tanner Jensen The effect of sudden-onset distractors on reading efficiency and comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 5, pp. 1195–1206, 2023. @article{Luke2023, Reading is an essential skill that requires focused attention. However, much reading is done in non-optimal environments. These days, reading is often done on digital devices or with a digital device nearby. These devices often introduce momentary distractions during reading, interrupting with alerts, notifications, and pop-ups. In two eye-tracking experiments, we investigated how such momentary distractions affect reading. Participants read paragraphs while their eye movements were monitored. During half of the paragraphs, distractions appeared periodically on the screen that required a response from the participants. In Experiment 1, the distractions were arrows that the participant had to respond to and then could immediately forget. In Experiment 2, the participants performed a 1-back task that required them to remember the identity of the last distractor. Compared with the no-distraction condition, the respond-and-forget distractors of Experiment 1 had minimal impact on reading behaviour and comprehension, but the working-memory-load distractors of Experiment 2 led to increased rereading and decreased reading comprehension. It seems a simple pop-up does not disrupt reading, but a message you must remember will. |
Steven G. Luke; Rachel Yu Liu; Kyle Nelson; Jared Denton; Michael W. Child An ex-Gaussian analysis of eye movements in L2 reading Journal Article In: Bilingualism: Language and Cognition, vol. 26, no. 2, pp. 330–344, 2023. @article{Luke2023a, Second language learners' reading is less efficient and more effortful than native reading. However, the source of their difficulty is unclear; L2 readers might struggle with reading in a different orthography, or they might have difficulty with later stages of linguistic interpretation of the input, or both. The present study explored the source of L2 reading difficulty by analyzing the distribution of fixation durations in reading. In three studies, we observed that L2 readers experience an increase in Mu, which we interpret as indicating early orthographic processing difficulty, when the L2 has a significantly different writing system than the L1 (e.g., Chinese and English) but not when the writing systems were similar (e.g., Portuguese and English). L2 readers also experienced an increase in Tau, indicating later-arising processing difficulty which likely reflects later-stage linguistic processes, when they read for comprehension. L2 readers of Chinese also experienced an additional increase in Tau. |
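For readers unfamiliar with the ex-Gaussian decomposition used in the study above: each fixation duration is modeled as the sum of a Gaussian component (Mu, Sigma) and an exponential tail (Tau), with Mu shifts linked to early orthographic processing and Tau to later-arising difficulty. The sketch below is a generic moment-based illustration of the decomposition, not the authors' analysis code; the simulated durations and parameter values are invented for demonstration.

```python
import numpy as np

def ex_gaussian_moments(durations):
    """Moment-based ex-Gaussian estimates for fixation durations.

    Uses the standard relations for an ex-Gaussian:
    mean = mu + tau, var = sigma^2 + tau^2,
    skew = 2 * tau^3 / var^(3/2). Returns (mu, sigma, tau).
    Assumes positive sample skew, as fixation durations typically show.
    """
    x = np.asarray(durations, dtype=float)
    m, s = x.mean(), x.std()
    skew = np.mean(((x - m) / s) ** 3)
    tau = s * (skew / 2) ** (1 / 3)
    sigma = np.sqrt(max(s ** 2 - tau ** 2, 0.0))
    mu = m - tau
    return mu, sigma, tau

# Simulated fixations: Gaussian(200 ms, 25 ms) plus an 80 ms exponential tail
rng = np.random.default_rng(1)
sim = rng.normal(200, 25, 5000) + rng.exponential(80, 5000)
mu, sigma, tau = ex_gaussian_moments(sim)
```

With enough fixations, the moment estimates approximately recover the generating parameters; maximum-likelihood fitting (e.g., via SciPy's `exponnorm`) is the more common choice in practice.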
Changlin Luo; Siyuan Qiao; Xiangling Zhuang; Guojie Ma Dynamic attentional bias for pictorial and textual food cues in the visual search paradigm Journal Article In: Appetite, vol. 180, pp. 1–11, 2023. @article{Luo2023, Previous studies have found that individuals have an attentional bias for food cues, which may be related to the energy level or the type of stimulus (e.g., pictorial or textual food cues) of the food cues. However, the available evidence is inconsistent, and there is no consensus about how the type of stimulus and food energy modulate food-related attentional bias. Searching for food is one of the most important daily behaviors. In this study, a modified visual search paradigm was used to explore the attentional bias for food cues, and eye movements were recorded. Food cues consisted of both food words and food pictures with different energy levels (i.e., high- and low-calorie foods). The results showed that there was an attentional avoidance in the early stage but a later-stage attentional approach for all food cues in the pictorial condition. This was especially true for high-calorie food pictures. Participants showed a later-stage conflicting attentional bias for foods with different energy levels in the textual condition. They showed an attentional approach to high-calorie food words but an attentional avoidance of low-calorie food words. These data show that food-related attentional bias varied along with different time courses, which was also modulated by the type of stimulus and food energy. These findings regarding dynamic attentional bias could be explained using the Goal Conflict Model of eating behavior. |
Junlian Luo; Thérèse Collins The representational similarity between visual perception and recent perceptual history Journal Article In: Journal of Neuroscience, vol. 43, no. 20, pp. 3658–3665, 2023. @article{Luo2023b, From moment to moment, the visual properties of objects in the world fluctuate because of external factors like ambient lighting, occlusion and eye movements, and internal (proximal) noise. Despite this variability in the incoming information, our perception is stable. Serial dependence, the behavioral attraction of current perceptual responses toward previously seen stimuli, may reveal a mechanism underlying stability: a spatiotemporally tuned operator that smooths over spurious fluctuations. The current study examined the neural underpinnings of serial dependence by recording the electroencephalographic (EEG) brain response of female and male human observers to prototypical objects (faces, cars, and houses) and morphs that mixed properties of two prototypes. Behavior was biased toward previously seen objects. Representational similarity analysis (RSA) revealed that responses evoked by visual objects contained information about the previous stimulus. The trace of previous representations in the response to the current object occurred immediately on object appearance, suggesting that serial dependence arises from a brain state or set that precedes processing of new input. However, the brain response to current visual objects was not representationally similar to the trace they leave on subsequent object representations. These results reveal that while past stimulus history influences current representations, this influence does not imply a shared neural code between the previous trial (memory) and the current trial (perception). |
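Representational similarity analysis, as used in the study above, compares the pattern of pairwise dissimilarities between conditions across two sets of responses. The following is a minimal toy sketch of that logic (invented response patterns, not the paper's EEG data): build a dissimilarity vector (1 − r over condition pairs) for each measurement, then correlate the two vectors.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Upper-triangle representational dissimilarity (1 - r) vector."""
    out = []
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            out.append(1 - pearson(patterns[i], patterns[j]))
    return out

# Toy response patterns for three conditions in two "measurements"
meas1 = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [3.0, 1.0, 2.0]]
meas2 = [[2.0, 4.0, 6.1], [2.0, 4.2, 5.9], [6.0, 2.1, 4.0]]
similarity = pearson(rdm(meas1), rdm(meas2))
```

Here the two measurements share the same representational geometry (conditions 1 and 2 similar, condition 3 distinct), so the RDM correlation is high even though the raw response scales differ.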
Xiaoxiao Luo; Lihui Wang; Jiayan Gu; Qiongting Zhang; Hongyu Ma; Xiaolin Zhou The benefit of making voluntary choices generalizes across multiple effectors Journal Article In: Psychonomic Bulletin & Review, pp. 1–13, 2023. @article{Luo2023c, It has been shown that cognitive performance could be improved by expressing volition (e.g., making voluntary choices), which necessarily involves the execution of action through a certain effector. However, it is unclear if the benefit of expressing volition can generalize across different effectors. In the present study, participants made a choice between two pictures either voluntarily or forcibly, and subsequently completed a visual search task with the chosen picture as a task-irrelevant background. The effector for choosing a picture could be the hand (pressing a key), foot (pedaling), mouth (commanding), or eye (gazing), whereas the effector for responding to the search target was always the hand. Results showed that participants responded faster and had a more liberal response criterion in the search task after a voluntary choice (vs. a forced choice). Importantly, the improved performance was observed regardless of which effector was used in making the choice, and regardless of whether the effector for making choices was the same as or different from the effector for responding to the search target. Eye-movement data for oculomotor choice showed that the main contributor to the facilitatory effect of voluntary choice was the post-search time in the visual search task (i.e., the time spent on processes after the target was found, such as response selection and execution). These results suggest that the expression of volition may involve the motor control system in which the effector-general, high-level processing of the goal of the voluntary action plays a key role. |
Shira M. Lupkin; Vincent B. McGinty Monkeys exhibit human-like gaze biases in economic decisions Journal Article In: eLife, vol. 12, pp. 1–27, 2023. @article{Lupkin2023, In economic decision-making, individuals choose between items based on their perceived value. For both humans and nonhuman primates, these decisions are often carried out while shifting gaze between the available options. Recent studies in humans suggest that these shifts in gaze actively influence choice, manifesting as a bias in favor of the items that are viewed first, viewed last, or viewed for the overall longest duration in a given trial. This suggests a mechanism that links gaze behavior to the neural computations underlying value-based choices. In order to identify this mechanism, it is first necessary to develop and validate a suitable animal model of this behavior. To this end, we have created a novel value-based choice task for macaque monkeys that captures the essential features of the human paradigms in which gaze biases have been observed. Using this task, we identified gaze biases in the monkeys that were both qualitatively and quantitatively similar to those in humans. In addition, the monkeys' gaze biases were well-explained using a sequential sampling model framework previously used to describe gaze biases in humans—the first time this framework has been used to assess value-based decision mechanisms in nonhuman primates. Together, these findings suggest a common mechanism that can explain gaze-related choice biases across species, and open the way for mechanistic studies to identify the neural origins of this behavior. |
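The sequential sampling framework referred to above can be illustrated, in the spirit of gaze-weighted accumulator models such as the aDDM, with a small simulation in which evidence for the fixated item accumulates at full strength while the unfixated item's value is discounted. All parameter values and the fixed gaze-alternation schedule below are illustrative assumptions, not the paper's fitted model.

```python
import random

def simulate_trial(v_left, v_right, theta=0.3, drift=0.002,
                   noise=0.02, threshold=1.0, rng=None):
    """Gaze-weighted evidence accumulation for a two-item choice.

    While the left item is fixated, the right item's value is
    discounted by theta (and vice versa), biasing choice toward
    the longer-viewed option. Returns 'left' or 'right'.
    """
    rng = rng or random.Random(0)
    evidence = 0.0          # positive values favor the left item
    fixated = 'left'        # assume the left item is viewed first
    t = 0
    while abs(evidence) < threshold:
        if fixated == 'left':
            mu = drift * (v_left - theta * v_right)
        else:
            mu = drift * (theta * v_left - v_right)
        evidence += mu + rng.gauss(0, noise)
        t += 1
        if t % 400 == 0:    # alternate gaze every 400 time steps
            fixated = 'right' if fixated == 'left' else 'left'
    return 'left' if evidence > 0 else 'right'

# With equal values, the first-viewed (left) item should win more often
choices = [simulate_trial(5, 5, rng=random.Random(i)) for i in range(200)]
left_rate = choices.count('left') / len(choices)
```

Even with identical item values, the discounting of the unfixated option produces the first-viewed and longest-viewed biases described in the abstract.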
Felipe Luzardo; Wolfgang Einhäuser; Monique Michl; Yaffa Yeshurun Attention does not spread automatically along objects: Evidence from the pupillary light response Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 7, pp. 2040–2051, 2023. @article{Luzardo2023, Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing in a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypothesis that attention spreads automatically along the cued object, we utilized a continuous, response-free measurement of attentional allocation that relies on the modulation of the pupillary light response. In Experiments 1 and 2, attentional spreading was not encouraged because the target appeared often (60%) at the cued location and considerably less often at other locations (20% within the same object and 20% on another object). In Experiment 3, spreading was encouraged because the target appeared equally often in one of the three possible locations within the cued object (cued end, middle, uncued end). In all experiments, we added gray-to-black and gray-to-white luminance gradients to the objects. By cueing the gray ends of the objects, we could track attention. If attention indeed spreads automatically along objects, then pupil size should be greater when the gray-to-black object is cued, because attention spreads toward that object's darker areas, than when the gray-to-white object is cued, regardless of the target location probability. However, unequivocal evidence of attentional spreading was only found when spreading was encouraged. These findings do not support an automatic spreading of attention. Instead, they suggest that attentional spreading along the object is guided by cue–target contingencies. |
Anqi Lyu; Larry Abel; Allen M. Y. Cheong Effect of habitual reading direction on saccadic eye movements: A pilot study Journal Article In: PLoS ONE, vol. 18, pp. 1–16, 2023. @article{Lyu2023, Cognitive processes can influence the characteristics of saccadic eye movements. Reading habits, including habitual reading direction, also affect cognitive and visuospatial processes, favouring attention to the side where reading begins. Few studies have investigated the effect of habitual reading direction on saccade directionality of low-cognitive-demand stimuli (such as dots). The current study examined horizontal prosaccade, antisaccade, and self-paced saccade in subjects with two primary habitual reading directions. We hypothesised that saccades responding to stimuli in the subject's habitual reading direction would show a longer prosaccade latency and lower antisaccade error rate (errors being a reflexive glance to a sudden-appearing target, rather than a saccade away from it). Sixteen young Chinese participants with a primary habitual reading direction from left to right and sixteen young Arabic and Persian participants with a primary habitual reading direction from right to left were recruited. All subjects spoke/read English as their second language. Subjects needed to look towards a 5°/10° target in the prosaccade task or look towards the mirror image location of the target in the antisaccade task, and look between two 10° targets in the self-paced saccade task. Only Arabic and Persian participants showed a directional prosaccade latency effect, with shorter latencies towards 5° stimuli presented against their habitual reading direction. No significant effect of reading direction on antisaccade latency towards the correct directions was found. Chinese readers were found to generate significantly shorter prosaccade latencies and higher antisaccade directional errors compared with Arabic and Persian readers for stimuli appearing at their habitual reading side. 
The present pilot study provides insights into the effect of reading habits on saccadic eye movements of low-cognitive-demand stimuli and offers a platform for future studies to investigate the relationship between reading habits and eye movement behaviours. |
Jialin Ma; Rui Zhang; Yongxin Li Age weakens the other-race effect among Han subjects in recognizing own- and other-ethnicity faces Journal Article In: Behavioral Sciences, vol. 13, no. 8, pp. 1–17, 2023. @article{Ma2023, The development and change in the other-race effect (ORE) in different age groups have always been a focus of researchers. Previous studies have mainly focused on the influence of maturity of life (from infancy to early adulthood) on the ORE, while few researchers have explored the ORE in older people. Therefore, this study used behavioral and eye movement techniques to explore the influence of age on the ORE and the visual scanning pattern of Han subjects recognizing own- and other-ethnicity faces. All participants were asked to complete a study-recognition task for faces, and the behavioral results showed that the ORE of elderly Han subjects was significantly lower than that of young Han subjects. The results of eye movement showed that there were significant differences in the visual scanning pattern of young subjects in recognizing the faces of individuals of their own ethnicity and other ethnicities, which were mainly reflected in the differences in looking at the nose and mouth, while the differences were reduced in the elderly subjects. The elderly subjects used similar scanning patterns to recognize the own- and other-ethnicity faces. This indicates that as age increases, the ORE of older people in recognizing faces of those from different ethnic groups becomes weaker, and elderly subjects have more similar visual scanning patterns in recognizing faces of their own and other ethnicities. |
Xiaochuan Ma; Yikang Liu; Roy Clariana; Chanyuan Gu; Ping Li From eye movements to scanpath networks: A method for studying individual differences in expository text reading Journal Article In: Behavior Research Methods, vol. 55, no. 2, pp. 730–750, 2023. @article{Ma2023b, Eye movements have been examined as an index of attention and comprehension during reading in the literature for over 30 years. Although eye-movement measurements are acknowledged as reliable indicators of readers' comprehension skill, few studies have analyzed eye-movement patterns using network science. In this study, we offer a new approach to analyze eye-movement data. Specifically, we recorded visual scanpaths when participants were reading expository science text, and used these to construct scanpath networks that reflect readers' processing of the text. Results showed that low ability and high ability readers' scanpath networks exhibited distinctive properties, which are reflected in different network metrics including density, centrality, small-worldness, transitivity, and global efficiency. Such patterns provide a new way to show how skilled readers, as compared with less skilled readers, process information more efficiently. Implications of our analyses are discussed in light of current theories of reading comprehension. |
Sylwia Macinska; Shane Lindsay; Tjeerd Jellema Visual attention to dynamic emotional faces in adults on the autism spectrum Journal Article In: Journal of Autism and Developmental Disorders, pp. 1–13, 2023. @article{Macinska2023, Using eye-tracking, we studied allocation of attention to faces where the emotional expression and eye-gaze dynamically changed in an ecologically valid manner. We tested typically developed (TD) adults low or high in autistic-like traits (Experiment 1), and adults with high-functioning autism (HFA; Experiment 2). All groups fixated more on the eyes than on any other facial area, regardless of emotion and gaze direction, though the HFA group fixated less on the eyes and more on the nose than TD controls. The sequence of dynamic facial changes affected the groups similarly, with reduced attention to the eyes and increased attention to the mouth. The results suggest that dynamic emotional face scanning patterns are stereotypical and differ only modestly between TD and HFA adults. |
Kelsey J. MacKay; Filip Germeys; Wim Van Dooren; Lieven Verschaffel; Koen Luwel The structure of the notation system in adults' number line estimation: An eye-tracking study Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 3, pp. 538–553, 2023. @article{MacKay2023, Research on rational numbers suggests that adults experience more difficulties in understanding the numerical magnitude of rational than natural numbers. Within rational numbers, the numerical magnitude of fractions has been found to be more difficult to understand than that of decimals. Using a number line estimation (NLE) task, the current study investigated two sources of difficulty in adults' numerical magnitude understanding: number type (natural vs rational) and structure of the notation system (place-value-based vs non-place-value-based). This within-subjects design led to four conditions: natural numbers (natural/place-value-based), decimals (rational/place-value-based), fractions (rational/non-place-value-based), and separated fractions (natural/non-place-value-based). In addition to percentage absolute error (PAE) and response times, we collected eye-tracking data. Results showed that participants estimated natural and place-value-based notations more accurately than rational and non-place-value-based notations, respectively. Participants were also slower to respond to fractions compared with the three other notations. Consistent with the response time data, eye-tracking data showed that participants spent more time encoding fractions and re-visited them more often than the other notations. Moreover, in general, participants spent more time positioning non-place-value-based than place-value-based notations on the number line. 
Overall, the present study contends that when both sources of difficulty are present in a notation (i.e., both rational and non-place-value-based), adults understand its numerical magnitude less well than when there is only one source of difficulty present (i.e., either rational or non-place-value-based). When no sources of difficulty are present in a notation (i.e., both natural and place-value-based), adults have the strongest understanding of its numerical magnitude. |
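Percentage absolute error, the accuracy measure used in the study above, is the absolute deviation of the estimate from the target, scaled by the length of the number line. A minimal sketch (the example values are invented):

```python
def percentage_absolute_error(estimate, target, line_max, line_min=0.0):
    """PAE = |estimate - target| / number-line range * 100."""
    return abs(estimate - target) / (line_max - line_min) * 100

# Placing 3/4 at 0.70 on a 0-1 number line gives a PAE of 5%
pae = percentage_absolute_error(0.70, 0.75, 1.0)
```

Because PAE normalizes by the line's range, estimates on differently scaled number lines (e.g., 0–1 vs 0–1000) remain directly comparable.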
Diane E. MacKenzie; R. Lee Kirby; Cher Smith; Zainab Al Lawati; Eric Lee; Sorayya Askari Novice and expert observer accuracy of the threshold wheelchair skill: A pilot eye-tracking study Journal Article In: The Open Journal of Occupational Therapy, vol. 11, no. 2, pp. 1–10, 2023. @article{MacKenzie2023, Background: Moving a wheelchair over a low threshold is an entry-level mobility skill. Observation is critical to the assessment and training of this skill. The primary objective of this exploratory pilot study was to determine if a difference between novice and expert visual attention allocation patterns was linked to the accuracy of rating skill performance and decision confidence. Methods: Twelve expert occupational therapists and nine non-expert occupational therapy students observed 30 first-attempt recordings of able-bodied persons learning the low threshold skill. Randomized recordings included 10 recordings from each rating group of “pass,” “pass with difficulty” (pwd), and “fail.” Skill ratings, confidence ratings, time to decision, and eye movements (monitored with an SR Research EyeLink 1000 Plus) were recorded. Results: No significant group differences were found in correctly identifying the skill rating, though experts rated higher confidence in their decision-making and had generally faster reaction times. While trends of eye-movement differences were found between groups, only the number of areas of interest viewed in pwd videos was a potential predictor of rating correctness. Conclusion: Improved confidence in decision-making did not mean improved assessment accuracy. The pwd video stimuli created the opportunity for assessing differences in observation patterns. Further study is recommended. |
Helena Palmieri; Antonio Fernández; Marisa Carrasco Microsaccades and temporal attention at different locations of the visual field Journal Article In: Journal of Vision, vol. 23, no. 5, pp. 1–17, 2023. @article{Palmieri2023, Temporal attention, the prioritization of information at specific points in time, improves performance in behavioral tasks but cannot ameliorate the perceptual asymmetries that exist across the visual field. That is, even after attentional deployment, performance is better along the horizontal than vertical meridian and worse at the upper than lower vertical meridian. Here we asked whether and how microsaccades—tiny fixational eye movements—could mirror or alternatively attempt to compensate for these performance asymmetries by assessing temporal profiles and direction of microsaccades as a function of visual field location. Observers were asked to report the orientation of one of two targets presented at different time points, in one of three blocked locations (fovea, right horizontal meridian, upper vertical meridian). We found the following: (1) Microsaccade occurrence did not affect either task performance or the magnitude of the temporal attention effect. (2) Temporal attention modulated the microsaccade temporal profiles, and this modulation varied with polar angle location. At all locations, microsaccade rates were significantly more suppressed in anticipation of the target when temporally cued than in the neutral condition. Moreover, microsaccade rates were more suppressed during target presentation in the fovea than in the right horizontal meridian. (3) Across locations and attention conditions, there was a pronounced bias toward the upper hemifield. 
Overall, these results reveal that temporal attention benefits performance similarly around the visual field, microsaccade suppression is more pronounced for attention than expectation (neutral trials) across locations, and the directional bias toward the upper hemifield could reflect an attempt to compensate for typical poor performance at the upper vertical meridian. |
Yafeng Pan; Mikkel C. Vinding; Lei Zhang; Daniel Lundqvist; Andreas Olsson A brain-to-brain mechanism for social transmission of threat learning Journal Article In: Advanced Science, vol. 10, no. 28, pp. 1–18, 2023. @article{Pan2023a, Survival and adaptation in environments require swift and efficacious learning about what is dangerous. Across species, much of such threat learning is acquired socially, e.g., through the observation of others' (“demonstrators'”) defensive behaviors. However, the specific neural mechanisms responsible for the integration of information shared between demonstrators and observers remain largely unknown. This dearth of knowledge is addressed by performing magnetoencephalography (MEG) neuroimaging in demonstrator-observer dyads. A set of stimuli are first shown to a demonstrator whose defensive responses are filmed and later presented to an observer, while neuronal activity is recorded sequentially from both individuals, who never interacted directly. These results show that brain-to-brain coupling (BtBC) in the fronto-limbic circuit (including insula, ventromedial, and dorsolateral prefrontal cortex) within demonstrator-observer dyads predicts subsequent expressions of learning in the observer. Importantly, the predictive power of BtBC magnifies when a threat is imminent to the demonstrator. Furthermore, BtBC depends on how observers perceive their social status relative to the demonstrator, likely driven by shared attention and emotion, as bolstered by dyadic pupillary coupling. Taken together, this study describes a brain-to-brain mechanism for social threat learning, involving BtBC, which reflects social relationships and predicts adaptive, learned behaviors. |
Ashim Pandey; Sujaya Neupane; Srijana Adhikary; Keepa Vaidya; Christopher C. Pack Cortical visual impairment at birth can be improved rapidly by vision training in adulthood: A case study Journal Article In: Restorative Neurology and Neuroscience, vol. 40, no. 4-6, pp. 261–270, 2023. @article{Pandey2023, Background: Cortical visual impairment (CVI) is a severe loss of visual function caused by damage to the visual cortex or its afferents, often as a consequence of hypoxic insults during birth. It is one of the leading causes of vision loss in children, and it is most often permanent. Objective: Several studies have demonstrated limited vision restoration in adults who trained on well-controlled psychophysical tasks, after acquiring CVI late in life. Other studies have shown improvements in children who underwent vision training. However, little is known about the prospects for the large number of patients who acquired CVI at birth but received no formal therapy as children. Methods: We, therefore, conducted a proof-of-principle study in one CVI patient long after the onset of cortical damage (age 18), to test the training speed, efficacy and generalizability of vision rehabilitation using protocols that had previously proven successful in adults. The patient trained at home and in the laboratory, on a psychophysical task that required discrimination of complex motion stimuli presented in the blind field. Visual function was assessed before and after training, using perimetric measures, as well as a battery of psychophysical tests. Results: The patient showed remarkably rapid improvements on the training task, with performance going from chance to 80% correct over the span of 11 sessions. With further training, improved vision was found for untrained stimuli and for perimetric measures of visual sensitivity. Some, but not all, of these performance gains were retained upon retesting after one year. 
Conclusions: These results suggest that existing vision rehabilitation programs can be highly effective in adult patients who acquired CVI at a young age. Validation with a large sample size is critical, and future work should also focus on improving the usability and accessibility of these programs for younger patients. |
Shubham Pandey; Rashmi Gupta Implicit angry faces interfere with response inhibition and response adjustment Journal Article In: Cognition and Emotion, vol. 37, no. 2, pp. 303–319, 2023. @article{Pandey2023a, Cognitive control enables people to adjust their thoughts and actions according to the current task demands. Response inhibition and response adjustment are two key aspects of cognitive control. Here, we examined how the implicit processing of emotional information influences these two functions with the help of the double-step saccade task. Each trial had either a single target or two sequential targets. Upon a single target onset, participants were required to make a quick saccade, but upon two target onsets, participants were instructed to inhibit their initial saccades and redirect their gaze to the second target. In three experiments, we manipulated the emotional information of the first and second targets. We found that irrelevant emotional information of the first target impaired response inhibition compared to non-emotional information (geometric shapes) of the first target. When non-emotional information (geometric shape) came as the first target, irrelevant angry emotional faces as the second target interfered with both response inhibition and response adjustment compared to irrelevant happy and neutral faces. We explain these results with previous findings that processing faces with irrelevant angry facial expressions takes up many attentional resources, leaving fewer resources available for ongoing activities such as response inhibition and response adjustment. |
Ilenia Paparella; Islay Campbell; Roya Sharifpour; Elise Beckers; Alexandre Berger; Jose Fermin Balda Aizpurua; Ekaterina Koshmanova; Nasrin Mortazavi; Puneet Talwar; Christian Degueldre; Laurent Lamalle; Siya Sherif; Christophe Phillips; Pierre Maquet; Gilles Vandewalle Light modulates task-dependent thalamo-cortical connectivity during an auditory attentional task Journal Article In: Communications Biology, vol. 6, no. 1, pp. 1–10, 2023. @article{Paparella2023, Exposure to blue wavelength light stimulates alertness and performance by modulating a widespread set of task-dependent cortical and subcortical areas. How light affects the crosstalk between brain areas to trigger this stimulating effect is not established. Here we record the brain activity of 19 healthy young participants (24.05±2.63; 12 women) while they complete an auditory attentional task in darkness or under an active (blue-enriched) or a control (orange) light, in an ultra-high-field 7 Tesla MRI scanner. We test if light modulates the effective connectivity between an area of the posterior associative thalamus, encompassing the pulvinar, and the intraparietal sulcus (IPS), key areas in the regulation of attention. We find that only the blue-enriched light strengthens the connection from the posterior thalamus to the IPS. To the best of our knowledge, our results provide the first empirical data supporting that blue wavelength light affects ongoing non-visual cognitive activity by modulating task-dependent information flow from subcortical to cortical areas. |
Nadia Paraskevoudi; Iria SanMiguel Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli Journal Article In: Psychophysiology, vol. 60, no. 1, pp. 1–25, 2023. @article{Paraskevoudi2023, Actions modulate sensory processing by attenuating responses to self-generated compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet, suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility of effects also on memory for these stimuli; however, evidence remains mixed as to the direction of the effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant, but unpredictable sounds, using a combined self-generation and memory recognition task concurrently with EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to indicate which one had been presented before. We measured memory bias and memory performance by having sequences where either both or only one of the test sounds were presented at encoding, respectively. Results showed worse memory performance (but no differences in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. 
Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable sounds. |
Samantha Parker; Richard Ramsey Exploring the relationship between oculomotor preparation and gaze-cued covert shifts in attention Journal Article In: Journal of Vision, vol. 23, no. 3, pp. 1–18, 2023. @article{Parker2023a, Eye gaze plays dual perceptual and social roles in everyday life. Gaze allows us to select information, while also indicating to others where we are attending. There are situations, however, where revealing the locus of our attention is not adaptive, such as when playing competitive sports or confronting an aggressor. It is in these circumstances that covert shifts in attention are assumed to play an essential role. Despite this assumption, few studies have explored the relationship between covert shifts in attention and eye movements within social contexts. In the present study, we explore this relationship using the saccadic dual-task in combination with the gaze-cueing paradigm. Across two experiments, participants prepared an eye movement or fixated centrally. At the same time, spatial attention was cued with a social (gaze) or non-social (arrow) cue. We used an evidence accumulation model to quantify the contributions of both spatial attention and eye movement preparation to performance on a Landolt gap detection task. Importantly, this computational approach allowed us to extract a measure of performance that could unambiguously compare covert and overt orienting in social and non-social cueing tasks for the first time. Our results revealed that covert and overt orienting make separable contributions to perception during gaze-cueing, and that the relationship between these two types of orienting was similar for both social and non-social cueing. Therefore, our results suggest that covert and overt shifts in attention may be mediated by independent underlying mechanisms that are invariant to social context. |
Ashley C. Parr; Heidi C. Riek; Brian C. Coe; Giovanna Pari; Mario Masellis; Connie Marras; Douglas P. Munoz Genetic variation in the dopamine system is associated with mixed-strategy decision-making in patients with Parkinson's disease Journal Article In: European Journal of Neuroscience, vol. 58, no. 12, pp. 4523–4544, 2023. @article{Parr2023, Decision-making during mixed-strategy games requires flexibly adapting choice strategies in response to others' actions and dynamically tracking outcomes. Such decisions involve diverse cognitive processes, including reinforcement learning, which are affected by disruptions to the striatal dopamine system. We therefore investigated how genetic variation in dopamine function affected mixed-strategy decision-making in Parkinson's disease (PD), which involves striatal dopamine pathology. Sixty-six PD patients (ages 49–85, Hoehn and Yahr Stages 1–3) and 22 healthy controls (ages 54–75) competed in a mixed-strategy game where successful performance depended on minimizing choice biases (i.e., flexibly adapting choices trial by trial). Participants also completed a fixed-strategy task that was matched for sensory input, motor outputs and overall reward rate. Factor analyses were used to disentangle cognitive from motor aspects within both tasks. Using a within-subject, multi-centre design, patients were examined on and off dopaminergic therapy, and genetic variation was examined via a multilocus genetic profile score representing the additive effects of three single nucleotide polymorphisms (SNPs) that influence dopamine transmission: rs4680 (COMT Val158Met), rs6277 (C957T) and rs907094 (encoding DARPP-32). PD and control participants displayed comparable mixed-strategy choice behaviour (overall); however, PD patients with genetic profile scores indicating higher dopamine transmission showed improved performance relative to those with low scores. 
Exploratory follow-up tests across individual SNPs revealed better performance in individuals with the C957T polymorphism, reflecting higher striatal D2/D3 receptor density. Importantly, genetic variation modulated cognitive aspects of performance, above and beyond motor function, suggesting that genetic variation in dopamine signalling may underlie individual differences in cognitive function in PD. |
Aashay M. Patel; Katsuhisa Kawaguchi; Lenka Seillier; Hendrikje Nienborg In: European Journal of Neuroscience, vol. 57, no. 8, pp. 1368–1382, 2023. @article{Patel2023, Sensory processing is influenced by neuromodulators such as serotonin, thought to relay behavioural state. Recent work has shown that the modulatory effect of serotonin itself differs with the animal's behavioural state. In primates, including humans, the serotonin system is anatomically important in the primary visual cortex (V1). We previously reported that in awake fixating macaques, serotonin reduces the spiking activity by decreasing response gain in V1. But the effect of serotonin on the local network is unknown. Here, we simultaneously recorded single-unit activity and local field potentials (LFPs) while iontophoretically applying serotonin in V1 of alert monkeys fixating on a video screen for juice rewards. The reduction in spiking response we observed previously is the opposite of the known increase of spiking activity with spatial attention. Conversely, in the local network (LFP), the application of serotonin resulted in changes mirroring the local network effects of previous reports in macaques directing spatial attention to the receptive field. It reduced the LFP power and the spike–field coherence, and the LFP became less predictive of spiking activity, consistent with reduced functional connectivity. We speculate that together, these effects may reflect the sensory side of a serotonergic contribution to quiet vigilance: The lower gain reduces the salience of stimuli to suppress an orienting reflex to novel stimuli, whereas at the network level, visual processing is in a state comparable to that of spatial attention. |
Jagruti J. Pattadkal; Carrie Barr; Nicholas J. Priebe Ocular following eye movements in marmosets follow complex motion trajectories Journal Article In: eNeuro, vol. 10, no. 6, pp. 1–9, 2023. @article{Pattadkal2023, Ocular following eye movements help stabilize images on the retina and offer a window to study motion interpretation by visual circuits. We use these ocular following eye movements to study motion integration behavior in marmosets. We characterize ocular following responses in marmosets using different moving stimuli such as dot patterns, gratings, and plaids. Marmosets track motion along different directions and exhibit spatial frequency and speed sensitivity, which closely matches the sensitivity reported in neurons from their motion-selective area MT. Marmosets are also able to track the integrated motion of plaids, with tracking direction consistent with an intersection of constraints model of motion integration. Marmoset ocular following responses are similar to responses in macaques and humans with certain species-specific differences in peak sensitivities. Such motion-sensitive eye movement behavior in combination with direct access to cortical circuitry makes the marmoset model well suited to study the neural basis of motion integration. |
Candace E. Peacock; Elizabeth H. Hall; John M. Henderson Objects are selected for attention based upon meaning during passive scene viewing Journal Article In: Psychonomic Bulletin & Review, vol. 30, no. 5, pp. 1874–1886, 2023. @article{Peacock2023, While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach where we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are selected for attention, in part, based upon their meaning during passive scene viewing. |
Candace E. Peacock; Praveena Singh; Taylor R. Hayes; Gwendolyn Rehrig; John M. Henderson Searching for meaning: Local scene semantics guide attention during natural visual search in scenes Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 3, pp. 632–648, 2023. @article{Peacock2023a, Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes, and salience maps which represented the spatial distribution of conspicuous image features and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention across both studies. Here, meaning explained 58% and 63% of the theoretical ceiling of variance in attention across both studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely to be directed to higher salience regions than slower initial saccades, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search. |
Salome Pedrett; Alain Chavaillaz; Andrea Frick Age-related changes in how 3.5- to 5.5-year-olds observe and imagine rotational object motion Journal Article In: Spatial Cognition & Computation, vol. 23, no. 2, pp. 83–111, 2023. @article{Pedrett2023, Mental representations of rotation were investigated in 3.5- to 5.5-year-olds (N = 74) using a multi-method approach. In a novel mental-rotation task, children were asked to choose one of two rotated shapes that would fit onto a counterpart. The developmental trajectory of mental rotation was compared to eye-tracking results on how the same children observed and anticipated circular object motion. On the mental-rotation task, children below age 4 performed above chance up to angles of 150°, and performance improved with age. Eye-tracking results indicated that mental representations of circular motion were largely developed by the age of 3.5 years. In contrast, perception of rotational motion and mental rotation of asymmetrical shapes continued to develop between 3.5 and 5.5 years of age. |
Marek A. Pedziwiatr; Elisabeth Hagen; Christoph Teufel Knowledge-driven perceptual organization reshapes information sampling via eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 49, no. 3, pp. 408–427, 2023. @article{Pedziwiatr2023, Humans constantly move their eyes to explore the environment. However, how image-computable features and object representations contribute to eye-movement control is an ongoing debate. Recent developments in object perception indicate a complex relationship between features and object representations, where image-independent object knowledge generates objecthood by reconfiguring how feature space is carved up. Here, we adopt this emerging perspective, asking whether object-oriented eye movements result from gaze being guided by image-computable features, or by the fact that these features are bound into an object representation. We recorded eye movements in response to stimuli that initially appear as meaningless patches but are experienced as coherent objects once relevant object knowledge has been acquired. We demonstrate that fixations on identical images are more object-centered, less dispersed, and more consistent across observers once these images are organized into objects. Gaze guidance also showed a shift from exploratory information sampling to exploitation of object-related image areas. These effects were evident from the first fixations onwards. Importantly, eye movements were not fully determined by knowledge-dependent object representations but were best explained by the integration of these representations with image-computable features. Overall, the results show how information sampling via eye movements is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organization. |
Ana Pelegrino; Anna Luiza Guimaraes; Walter Sena; Nwabunwanne Emele; Linda Scoriels; Rogerio Panizzutti Dysregulated noradrenergic response is associated with symptom severity in individuals with schizophrenia Journal Article In: Frontiers in Psychiatry, vol. 14, pp. 1–9, 2023. @article{Pelegrino2023, Introduction: The locus coeruleus-noradrenaline (LC-NA) system is involved in a wide range of cognitive functions and may be altered in schizophrenia. A non-invasive method to indirectly measure LC activity is task-evoked pupillary response. Individuals with schizophrenia present reduced pupil dilation compared to healthy subjects, particularly when task demand increases. However, the extent to which alteration in LC activity contributes to schizophrenia symptomatology remains largely unexplored. We aimed to investigate the association between symptomatology, cognition, and noradrenergic response in individuals with schizophrenia. Methods: We assessed task-evoked pupil dilation during a pro- and antisaccade task in 23 individuals with schizophrenia and 28 healthy subjects. Results: Both groups showed similar preparatory pupil dilation during prosaccade trials, but individuals with schizophrenia showed significantly lower pupil dilation compared to healthy subjects in antisaccade trials. Importantly, reduced preparatory pupil dilation for antisaccade trials was associated with worse general symptomatology in individuals with schizophrenia. Discussion: Our findings suggest that changes in LC-NA activity – measured by task-evoked pupil dilation – when task demand increases is associated with schizophrenia symptoms. Interventions targeting the modulation of noradrenergic responses may be suitable candidates to reduce schizophrenia symptomatology. |
A. I. Pérez; E. Schmidt; I. M. Tsimpli Inferential evaluation and revision in L1 and L2 text comprehension: An eye movement study Journal Article In: Bilingualism, pp. 1–14, 2023. @article{Perez2023, Text comprehension frequently demands the resolution of no longer plausible interpretations to build an accurate situation model, an ability that might be especially challenging during second language comprehension. Twenty-two native English speakers (L1) and twenty-two highly proficient non-native English speakers (L2) were presented with short narratives in English. Each text required the evaluation and revision of an initial prediction. Eye movements in the text and a comprehension sentence indicated less efficient performance in L2 than in L1 comprehension, in both inferential evaluation and revision. Interestingly, these effects were determined by individual differences in inhibitory control and linguistic proficiency. Higher inhibitory control reduced the time rereading previous parts of the text (better evaluation) as well as revisiting the text before answering the sentence (better revision) in L2 comprehenders, whereas higher proficiency reduced the time in the sentence when the story was coherent, suggesting better general comprehension in both languages. |
Oswaldo Pérez; Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco; Hugo Merchant Rhythmic tapping to a moving beat: Motion kinematics overrules natural gravity Journal Article In: iScience, vol. 26, no. 9, pp. 1–21, 2023. @article{Perez2023a, Beat induction is the cognitive ability that allows humans to listen to a regular pulse in music and move in synchrony with it. Although auditory rhythmic cues induce more consistent synchronization than flashing visual metronomes, this auditory-visual asymmetry can be canceled by visual moving stimuli. Here, we investigated whether the naturalness of visual motion or its kinematics could provide a synchronization advantage over flashing metronomes. Subjects were asked to tap in sync with visual metronomes defined by vertically accelerating/decelerating motion, either congruent or not with natural gravity; horizontally accelerating/decelerating motion; or flashing stimuli. We found that motion kinematics was the predominant factor determining rhythm synchronization, as accelerating moving metronomes in any cardinal direction produced more precise and predictive tapping than decelerating or flashing conditions. Our results support the notion that accelerating visual metronomes convey a strong sense of beat, as seen in the cueing movements of an orchestra director. |
Alexis Pérez-Bellido; Eelke Spaak; Floris P. de Lange Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection Journal Article In: Communications Biology, vol. 6, no. 12, pp. 1–16, 2023. @article{PerezBellido2023, Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants that performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: Whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, we found that sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases. |
Sonja Perkovic; Martin Schoemann; Carl Johan Lagerkvist; Jacob L. Orquin Covert attention leads to fast and accurate decision-making Journal Article In: Journal of Experimental Psychology: Applied, vol. 29, no. 1, pp. 78–94, 2023. @article{Perkovic2023, Decision-makers are regularly faced with more choice information than they can directly gaze at in a limited amount of time. Many theories assume that because decision-makers attend to information sequentially and overtly, that is, with direct gaze, they must respond to information overload by trading off between speed and decision accuracy. By reanalyzing five published studies, we show that participants, besides using overt attention, also use covert attention. That is, without being instructed to do so, participants attend to information without direct gaze to evaluate choice attributes that lead them to either choose the best or reject the worst option. We show that the use of covert attention is common for most participants and more so when information is easily identifiable in the peripheral visual field due to being large or visually salient. Covert attention is associated with faster decision times suggesting that participants might process multiple pieces of information simultaneously using distributed attention. Our findings highlight the importance of covert attention in decision-making and show how decision-makers may be gaining speed while retaining high levels of decision accuracy. We discuss how harnessing covert attention can benefit consumer decision-making of healthy and sustainable products. |
Christina U. Pfeuffer; Andrea Kiesel; Lynn Huestegge Similar proactive effect monitoring in free and forced choice action modes Journal Article In: Psychological Research, vol. 87, no. 1, pp. 226–241, 2023. @article{Pfeuffer2023, When our actions yield predictable consequences in the environment, our eyes often already saccade towards the locations we expect these consequences to appear at. Such spontaneous anticipatory saccades occur based on bi-directional associations between action and effect formed by prior experience. That is, our eye movements are guided by expectations derived from prior learning history. Anticipatory saccades presumably reflect a proactive effect monitoring process that prepares a later comparison of expected and actual effect. Here, we examined whether anticipatory saccades emerged under forced choice conditions when only actions but not target stimuli were predictive of future effects and whether action mode (forced choice vs. free choice, i.e., stimulus-based vs. stimulus-independent choice) affected proactive effect monitoring. Participants produced predictable visual effects on the left/right side via forced choice and free choice left/right key presses. Action and visual effect were spatially compatible in one half of the experiment and spatially incompatible in the other half. Irrespective of whether effects were predicted by target stimuli in addition to participants' actions, in both action modes, we observed anticipatory saccades towards the location of future effects. Importantly, neither the frequency, nor latency or amplitude of these anticipatory saccades significantly differed between forced choice and free choice action modes. Overall, our findings suggest that proactive effect monitoring of future action consequences, as reflected in anticipatory saccades, is comparable between forced choice and free choice action modes. |
Zhongling Pi; Wei Liu; Hongjuan Ling; Xingyu Zhang; Xiying Li Does an instructor's facial expressions override their body gestures in video lectures? Journal Article In: Computers and Education, vol. 193, pp. 1–16, 2023. @article{Pi2023a, While teaching, instructors will use unplanned, spontaneous facial expressions and body gestures to express their emotions. There is a growing consensus that an instructor's emotional expressions can trigger students' emotional and psychological responses, thus enhancing or inhibiting their learning in both face-to-face and online teaching contexts. However, little systematic research exists on which specific design features of an instructor's movements can induce emotions in video lectures. Three experiments were conducted in this study. Experiment 1 aimed to test the congruency/incongruency effects of an instructor's facial expressions (happy vs. bored) and body gestures (happy vs. bored) on student learning from video lectures in terms of students' emotions, motivation, cognitive load, and learning performance. Results of Experiment 1 showed that the instructor's happy facial expressions induced more positive emotions, enhanced motivation, and improved learning performance in students than the bored facial expressions, regardless of the instructor's body gestures. Experiment 2 sought to build upon the unexpected finding from Experiment 1 by increasing the frequency of body gestures, seeking evidence from both self-reports and eye movements. Results of Experiment 2 showed that the instructor's happy facial expressions enhanced students' learning performance when the instructor did not use body gestures, but not when they used increased body gestures. Experiment 3 was conducted to further expand upon findings from Experiment 1 and Experiment 2. Results of Experiment 3 confirmed the emotion-motivational and cognitive benefits of the instructor's happy facial expressions. 
The results have implications for designing features of instructors in video lectures: if instructors are visible, they should be encouraged to exhibit happy facial expressions and to use body gestures less frequently, or even to avoid body gestures entirely. |
Zhongling Pi; Qiuchen Yu; Yi Zhang; Yan Li; Hui Chen; Jiumin Yang Presenting points or rank: The impacts of leaderboard elements on English vocabulary learning through video lectures Journal Article In: Journal of Computer Assisted Learning, pp. 104–117, 2023. @article{Pi2023, Background: Leaderboards are a highly popular gamification component used in student learning to enhance motivation, attentional engagement, and learning performance. However, few studies have examined the effects of individual leaderboard elements on English vocabulary learning through video lectures. Objectives: The present study aimed to examine how different leaderboard elements (i.e., points and rank) may affect students' English vocabulary learning through video lectures. Methods: A total of 34 students were assigned to groups using different leaderboard elements in a counterbalanced order. Participants' motivation, eye movements, and learning performance were measured and analysed. Results and Conclusions: Students' leaderboard rank was shown to increase their motivation regardless of whether other elements were present. Eye movement tracking revealed that the presence of the leaderboard increased students' saccades between the questions and the options, and lengthened their dwell time on the learning materials while reducing their dwell time on the non-learning-related screen areas. Presenting students' rank alone also improved their learning performance. Implications: Our findings strongly support the use of video lectures for English vocabulary learning, with the following recommendations: (1) Instructors should present students' rank on the leaderboard to enhance students' motivation and engagement; (2) Instructors should present only the students' rank on the leaderboard to also enhance students' learning performance. |
Zhongling Pi; Yi Zhang; Ke Xu; Jiumin Yang Does an outline of contents promote learning from videos? A study on learning performance and engagement Journal Article In: Education and Information Technologies, vol. 28, no. 3, pp. 3493–3511, 2023. @article{Pi2023b, It is well known that outlines can help learners establish a conceptual framework that connects new knowledge with prior knowledge, and thus promote learning. However, it is unclear whether outlines are beneficial before learning from watching an educational video. We tested the effects of two goal setting strategies on learning from a video lecture. Learners (N = 87) were randomly assigned to one of three groups: read an instructor-generated outline before the video (n = 29); read the same outline, and based on it, generate their own outline of the key ideas before the video (n = 29); control group (n = 29). The study was conducted in an eye-tracking laboratory. Learners in the instructor-generated outline group reported higher learning engagement than those in the control group. Learners in the reading and generating outline group paid greater attention to the learning materials, and had higher learning performance scores, than those in the control group. The two strategy groups did not differ from each other on learning engagement or learning performance. The findings suggest that: To improve learning, instructors should ask learners to read an instructor-generated outline, and to generate their own outline based on the instructor's outline, before viewing the video lecture. |
Zhongling Pi; Yi Zhang; Fangfang Zhu; Louqi Chen; Xin Guo; Jiumin Yang The mutual influence of an instructor's eye gaze and facial expression in video lectures Journal Article In: Interactive Learning Environments, vol. 31, no. 6, pp. 3664–3681, 2023. @article{Pi2023c, This study tested the mutual effects of the instructor's eye gaze and facial expression on students' eye movements (i.e. first fixation time to the slides, percentage dwell time on the slides, and percentage dwell time on the instructor), parasocial interaction, and learning performance in pre-recorded video lectures. Students (N = 118 undergraduate and graduate students) were assigned to watch one of four videos in a 2 (gaze: direct, guided) × 2 (facial expression: surprised, neutral) between-groups design. Contrary to our hypotheses, eye movement data showed that students who watched the video lecture with the instructor's guided gaze and surprised face showed longer first fixation time to the slides and lower dwell time on the slides; these students also had lower learning scores. Instructor eye gaze and facial expression did not influence students' ratings of parasocial interaction. Our results suggest that in reference to social cues during video lectures with slides, “more” is not necessarily “better.” The findings have practical implications for designing pre-recorded slide-based video lectures: An instructor is cautioned against using multiple social cues simultaneously, especially in video lectures in which the instructor and the visual learning materials compete for students' attention. |
Yair Pinto; Maria Chiara Villa; Sabrina Siliquini; Gabriele Polonara; Claudia Passamonti; Simona Lattanzi; Nicoletta Foschi; Mara Fabri; Edward H. F. Haan Visual integration across fixation: Automatic processes are split but conscious processes remain unified in the split-brain Journal Article In: Frontiers in Human Neuroscience, vol. 17, pp. 1–8, 2023. @article{Pinto2023, The classic view holds that when “split-brain” patients are presented with an object in the right visual field, they will correctly identify it verbally and with the right hand. However, when the object is presented in the left visual field, the patient verbally states that he saw nothing but nevertheless identifies it accurately with the left hand. This interaction suggests that perception, recognition and responding are separated in the two isolated hemispheres. However, there is now accumulating evidence that this interaction is not absolute; for instance, split-brain patients are able to detect and localise stimuli anywhere in the visual field verbally and with either hand. In this study we set out to explore this cross-hemifield interaction in more detail with the split-brain patient DDC and carried out two experiments. The aim of these experiments is to unveil the unity of deliberate and automatic processing in the context of visual integration across hemispheres. Experiment 1 suggests that automatic processing is split in this context. In contrast, when the patient is forced to adopt a conscious, deliberate, approach, processing seemed to be unified across visual fields (and thus across hemispheres). First, we looked at the confidence that DDC has in his responses. The experiment involved a simultaneous “same” versus “different” matching task with two shapes presented either within one hemifield or across fixation. 
The results showed that we replicated the observation that split-brain patients cannot match across fixation, but more interestingly, that DDC was very confident in the across-fixation condition while performing at chance level. On the basis of this result, we hypothesised a two-route explanation. In healthy subjects, the visual information from the two hemifields is integrated in an automatic, unconscious fashion via the intact splenium, and this route has been severed in DDC. However, we know from previous experiments that some transfer of information remains possible. We proposed that this second route (perhaps less visual; more symbolic) may become apparent when he is forced to use a deliberate, consciously controlled approach. In an experiment where he is informed, by a second stimulus presented in one hemifield, what to do with the first stimulus that was presented in the same or the opposite hemifield, we showed that there was indeed interhemispheric transfer of information. We suggest that this two-route model may help in clarifying some of the controversial issues in split-brain research. |
Alessandro Piras; Matteo Bertucco; Francesco Del Santo; Andrea Meoni; Milena Raffi Postural stability assessment in expert versus amateur basketball players during optic flow stimulation Journal Article In: Journal of Electromyography and Kinesiology, vol. 74, pp. 1–8, 2023. @article{Piras2023, We evaluated the role of visual stimulation on postural muscles and the changes in the center of pressure (CoP) during standing posture in expert and amateur basketball players. Participants were instructed to look at a fixation point presented on a screen during foveal, peripheral, and full field optic flow stimuli. Postural mechanisms and motor strategies were assessed by simultaneous recordings of stabilometric, oculomotor, and electromyographic data during visual stimulation. We found significant differences between experts and amateurs in the orientation of visual attention. Experts oriented attention to the right of their visual field, while amateurs oriented it to the bottom-right. The displacement in the CoP mediolateral direction showed that experts had a greater postural sway of the right leg, while amateurs showed greater sway of the left leg. The entropy-based data analysis of the CoP mediolateral direction exhibited a greater value in amateurs than in experts. The root-mean-square and the coactivation index analysis showed that experts activated mainly the right leg, while amateurs mainly activated the left leg. In conclusion, playing sports for years seems to have induced some strong differences in the standing posture between the right and left sides. Even during non-ecological visual stimulation, athletes maintain postural adaptations to counteract the body oscillation. |
Aurélie Pistono; Robert J. Hartsuiker Can object identification difficulty be predicted based on disfluencies and eye-movements in connected speech? Journal Article In: PLoS ONE, vol. 18, pp. 1–18, 2023. @article{Pistono2023, In the current study, we asked whether delays in the earliest stages of picture naming elicit disfluency. To address this question, we used a network task, where participants describe the route taken by a marker through visually presented networks of objects. Additionally, given that disfluencies are arguably multifactorial, we combined this task with eye tracking, to be able to disentangle disfluency related to word preparation from other factors (e.g., stalling strategy). We used visual blurring, which hinders visual identification of the items and thereby slows down selection of a lexical concept. We tested the effect of this manipulation on disfluency production and visual attention. Blurriness did not lead to more disfluency on average and viewing times decreased with blurred pictures. However, multivariate pattern analyses revealed that a classifier could predict above chance, from the pattern of disfluency, whether each participant was about to name blurred or control pictures. Impeding the conceptual generation of a message therefore affected the pattern of disfluencies of each participant individually, but this pattern was not consistent from one participant to another. Additionally, some of the disfluency and eye-movement variables correlated with individual cognitive differences, in particular with inhibition. |
Katharina Pittrich; Sascha Schroeder Reading vertically and horizontally mirrored text: An eye movement investigation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 76, no. 2, pp. 271–283, 2023. @article{Pittrich2023, This study examined the cognitive processes involved in reading vertically and horizontally mirrored text. We tracked participants' eye movements while they were reading the Potsdam Sentence Corpus which consists of 144 sentences with target words that are manipulated for length and frequency. Sentences were presented in three different conditions: In the normal condition, text was presented with upright letters, in the vertical condition, each letter was flipped around its vertical (left-right) axis while in the horizontal condition, letters were flipped around their horizontal (up-down) axis. Results show that reading was slowed down in both mirror conditions and that horizontal mirroring was particularly disruptive. In both conditions, we found larger effects of word length than in the normal condition indicating that participants read the sentences more serially and effortfully. Similarly, frequency effects were larger in both mirror conditions in later reading measures (gaze duration, go-past time, and total reading time) and particularly pronounced in the horizontal condition. This indicates that reading mirrored script involves a late checking mechanism that is particularly important for reading a horizontally mirrored script. Together, our findings demonstrate that mirroring affects both early visual identification and later linguistic processes. |
Barbara L. Pitts; Michelle L. Eisenberg; Heather R. Bailey; Jeffrey M. Zacks Cueing natural event boundaries improves memory in people with post-traumatic stress disorder Journal Article In: Cognitive Research: Principles and Implications, vol. 8, no. 1, pp. 1–10, 2023. @article{Pitts2023, People with post-traumatic stress disorder (PTSD) often report difficulty remembering information in their everyday lives. Recent findings suggest that such difficulties may be due to PTSD-related deficits in parsing ongoing activity into discrete events, a process called event segmentation. Here, we investigated the causal relationship between event segmentation and memory by cueing event boundaries and evaluating its effect on subsequent memory in people with PTSD. People with PTSD (n = 38) and trauma-matched controls (n = 36) watched and remembered videos of everyday activities that were either unedited, contained visual and auditory cues at event boundaries, or contained visual and auditory cues at event middles. PTSD symptom severity varied substantially within both the group with a PTSD diagnosis and the control group. Memory performance did not differ significantly between groups, but people with high symptoms of PTSD remembered fewer details from the videos than those with lower symptoms of PTSD. Both those with PTSD and controls remembered more information from the videos in the event boundary cue condition than the middle cue or unedited conditions. This finding has important implications for translational work focusing on addressing everyday memory complaints in people with PTSD. |
Rista C. Plate; Tralucia Powell; Rachael Bedford; Tim J. Smith; Ankur Bamezai; Quentin Wedderburn; Alexis Broussard; Natasha Soesanto; Caroline Swetlitz; Rebecca Waller; Nicholas J. Wagner Social threat processing in adults and children: Faster orienting to, but shorter dwell time on, angry faces during visual search Journal Article In: Developmental Science, pp. 1–8, 2023. @article{Plate2023, Attention to emotional signals conveyed by others is critical for gleaning information about potential social partners and the larger social context. Children appear to detect social threats (e.g., angry faces) faster than non-threatening social signals (e.g., neutral faces). However, methods that rely on behavioral responses alone are limited in identifying different attentional processes involved in threat detection or responding. To address this question, we used a visual search paradigm to assess behavioral (i.e., reaction time to select a target image) and attentional (i.e., eye-tracking fixations, saccadic shifts, and dwell time) responses in children (ages 7–10 years old |
Belinda Platt; Anca Sfärlea; Johanna Löchner; Elske Salemink; Gerd Schulte-Körne The role of cognitive biases and negative life events in predicting later depressive symptoms in children and adolescents Journal Article In: Journal of Experimental Psychopathology, vol. 14, no. 3, pp. 1–16, 2023. @article{Platt2023, Aims: Cognitive models propose that negative cognitive biases in attention (AB) and interpretation (IB) contribute to the onset of depression. This is the first prospective study to test this hypothesis in a sample of youth with no mental disorder. Methods: Participants were 61 youth aged 9–14 years with no mental disorder. At baseline (T1) we measured AB (passive- viewing task), IB (scrambled sentences task) and self-report depressive symptoms. Thirty months later (T2) we measured onset of mental disorder, depressive symptoms and life events (parent- and child-report). The sample included children of parents with (n = 31) and without (n = 30) parental depression. Results: Symptoms of depression at T2 were predicted by IB (ß = .35 |
Iván Plaza-Rosales; Enzo Brunetti; Rodrigo Montefusco-Siegmund; Samuel Madariaga; Rodrigo Hafelin; Daniela P. Ponce; María Isabel Behrens; Pedro E. Maldonado; Andrea Paula-Lima Visual-spatial processing impairment in the occipital-frontal connectivity network at early stages of Alzheimer's disease Journal Article In: Frontiers in Aging Neuroscience, vol. 15, pp. 1–14, 2023. @article{PlazaRosales2023, Introduction: Alzheimer's disease (AD) is the leading cause of dementia worldwide, but its pathophysiological phenomena are not fully elucidated. Many neurophysiological markers have been suggested to identify early cognitive impairments of AD. However, the diagnosis of this disease remains a challenge for specialists. In the present cross-sectional study, our objective was to evaluate the manifestations and mechanisms underlying visual-spatial deficits at the early stages of AD. Methods: We combined behavioral, electroencephalography (EEG), and eye movement recordings during the performance of a spatial navigation task (a virtual version of the Morris Water Maze adapted to humans). Participants (69–88 years old) with amnesic mild cognitive impairment–Clinical Dementia Rating scale (aMCI–CDR 0.5) were selected as probable early AD (eAD) by a neurologist specialized in dementia. All patients included in this study were evaluated at the CDR 0.5 stage but progressed to probable AD during clinical follow-up. An equal number of matching healthy controls (HCs) were evaluated while performing the navigation task. Data were collected at the Department of Neurology of the Clinical Hospital of the Universidad de Chile and the Department of Neuroscience of the Faculty of Universidad de Chile. Results: Participants with aMCI preceding AD (eAD) showed impaired spatial learning and their visual exploration differed from the control group. eAD group did not clearly prefer regions of interest that could guide solving the task, while controls did. 
The eAD group showed decreased visual occipital evoked potentials associated with eye fixations, recorded at occipital electrodes. They also showed an alteration of the spatial spread of activity to parietal and frontal regions at the end of the task. The control group presented marked occipital activity in the beta band (15–20 Hz) at early visual processing time. The eAD group showed a reduction in beta band functional connectivity in the prefrontal cortices reflecting poor planning of navigation strategies. Discussion: We found that EEG signals combined with visual-spatial navigation analysis, yielded early and specific features that may underlie the basis for understanding the loss of functional connectivity in AD. Still, our results are clinically promising for early diagnosis required to improve quality of life and decrease healthcare costs. |
Timothy J. Pleskac; Shuli Yu; Sergej Grunevski; Taosheng Liu Attention biases preferential choice by enhancing an option's value Journal Article In: Journal of Experimental Psychology: General, vol. 152, no. 4, pp. 993–1010, 2023. @article{Pleskac2023, Does attending to an option lead to liking it? Though attention-induced valuation is often hypothesized, evidence for this causal link has remained elusive. We test this hypothesis across 2 studies by manipulating attention during a preferential decision and its perceptual analog. In a free-viewing task, attention impacted choice and eye movement pattern in the preferential decision more than the perceptual analog. Similarly, in a controlled-viewing task, attention had a larger effect on choice in the preferential decision than its perceptual analog. Across these experimental manipulations of attention, choice and eye-tracking data provide converging evidence that attention enhances value, and computational modeling further supports this attention-induced valuation hypothesis. A possible explanation for our results is a normalization mechanism where attention induces a gain modulation on an option's representation at both the sensory and value processing levels. |
Megan Polden; Trevor J. Crawford Eye movement latency coefficient of variation as a predictor of cognitive impairment: An eye tracking study of cognitive impairment Journal Article In: Vision, vol. 7, no. 2, pp. 1–12, 2023. @article{Polden2023, Studies have demonstrated impairment in the control of saccadic eye movements in Alzheimer's disease (AD) and people with mild cognitive impairment (MCI) when conducting the pro-saccade and antisaccade tasks. Research has shown that changes in the pro and antisaccade latencies may be particularly sensitive to dementia and general executive functioning. These tasks show potential for diagnostic use, as they provide a rich set of potential eye tracking markers. One such marker, the coefficient of variation (CV), has so far been overlooked. For biological markers to be reliable, they must be able to detect abnormalities in preclinical stages. MCI is often viewed as a predecessor to AD, with certain classifications of MCI more likely than others to progress to AD. The current study examined the potential of CV scores on pro and antisaccade tasks to distinguish participants with AD, amnestic MCI (aMCI), non-amnestic MCI (naMCI), and older controls. The analyses revealed no significant differences in CV scores across the groups using the pro or antisaccade task. Antisaccade mean latencies were able to distinguish participants with AD and the MCI subgroups. Future research is needed on CV measures and attentional fluctuations in AD and MCI individuals to fully assess this measure's potential to robustly distinguish clinical groups with high sensitivity and specificity. |
Stefan Pollmann; Lei Zheng Right-dominant contextual cueing for global configuration cues, but not local position cues Journal Article In: Neuropsychologia, vol. 178, pp. 1–7, 2023. @article{Pollmann2023, Contextual cueing can depend on global configuration or local item position. We investigated the role of these two kinds of cues in the lateralization of contextual cueing effects. Cueing by item position was tested by recombining two previously learned displays, keeping the individual item locations intact, but destroying the global configuration. In contrast, cueing by configuration was investigated by rotating learned displays, thereby keeping the configuration intact but changing all item positions. We observed faster search for targets in the left display half, both for repeated and new displays, along with more first fixation locations on the left. Both position and configuration cues led to faster search, but the search time reduction compared to new displays due to position cues was comparable in the left and right display half. In contrast, configural cues led to increased search time reduction for right half targets. We conclude that only configural cues enabled memory-guided search for targets across the whole search display, whereas position cueing guided search only to targets in the vicinity of the fixation. The right-biased configural cueing effect is a consequence of the initial leftward search bias and does not indicate hemispheric dominance for configural cueing. |
Elie Poncet; Gaelle Nicolas; Nathalie Guyader; Elena Moro; Aurélie Campagne Spatio-temporal attention toward emotional scenes across adulthood Journal Article In: Emotion, vol. 23, no. 6, pp. 1726–1739, 2023. @article{Poncet2023, Research on emotion suggests that the attentional preference observed toward the negative stimuli in young adults tends to disappear in normal aging and, sometimes, to shift toward a preference for positive stimuli. The current eye-tracking study investigated visual exploration of paired natural scenes of different valence (Negative–Neutral, Positive–Neutral, and Negative–Positive pairs) in three age groups (young, middle-aged, and older adults). Two arousal levels of stimuli (high and low arousal) were also considered given the role of this factor in age-related effects on emotion. Results showed that the automatic attentional orienting toward the negative stimuli was relatively preserved in our three age groups although reduced in the elderly, in both arousal conditions. A similar negativity bias was also observed in initial attention focusing but shifted toward a positivity bias over time in the three age groups. Moreover, it appeared that the spatial exploration of emotional scenes evolved over time differently for older adults compared with other age groups. No difference between young adults and middle-aged adults in ocular behavior was observed. This study confirms the interest of studying both spatial and temporal characteristics of oculomotor behaviors to better understand the age-related effects on emotion. |
Eva R. Pool; Wolfgang M. Pauli; Logan Cross; John P. O'Doherty Neural substrates of parallel devaluation-sensitive and devaluation-insensitive Pavlovian learning in humans Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–17, 2023. @article{Pool2023, We aim to differentiate the brain regions involved in the learning and encoding of Pavlovian associations sensitive to changes in outcome value from those that are not sensitive to such changes by combining a learning task with outcome devaluation, eye-tracking, and functional magnetic resonance imaging in humans. Contrary to theoretical expectation, voxels correlating with reward prediction errors in the ventral striatum and subgenual cingulate appear to be sensitive to devaluation. Moreover, regions encoding state prediction errors appear to be devaluation insensitive. We can also distinguish regions encoding predictions about outcome taste identity from predictions about expected spatial location. Regions encoding predictions about taste identity seem devaluation sensitive while those encoding predictions about an outcome's spatial location seem devaluation insensitive. These findings suggest the existence of multiple and distinct associative mechanisms in the brain and help identify putative neural correlates for the parallel expression of both devaluation sensitive and insensitive conditioned behaviors. |
Tzvetan Popov; Bart Gips; Nathan Weisz; Ole Jensen Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention Journal Article In: Cerebral Cortex, vol. 33, no. 7, pp. 3478–3489, 2023. @article{Popov2023a, Spatially selective modulation of alpha power (8–14 Hz) is a robust finding in electrophysiological studies of visual attention, and has been recently generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention. |
Tzvetan Popov; Tobias Staudigl Cortico-ocular coupling in the service of episodic memory formation Journal Article In: Progress in Neurobiology, vol. 227, pp. 1–9, 2023. @article{Popov2023, Encoding of visual information is a necessary requirement for most types of episodic memories. In search for a neural signature of memory formation, amplitude modulation of neural activity has been repeatedly shown to correlate with and suggested to be functionally involved in successful memory encoding. We here report a complementary view on why and how brain activity relates to memory, indicating a functional role of cortico-ocular interactions for episodic memory formation. Recording simultaneous magnetoencephalography and eye tracking in 35 human participants, we demonstrate that gaze variability and amplitude modulations of alpha/beta oscillations (10–20 Hz) in visual cortex covary and predict subsequent memory performance between and within participants. Amplitude variation during pre-stimulus baseline was associated with gaze direction variability, echoing the co-variation observed during scene encoding. We conclude that encoding of visual information engages unison coupling between oculomotor and visual areas in the service of memory formation. |
Dina V. Popovkina; John Palmer; Cathleen M. Moore; Geoffrey M. Boynton Testing hemifield independence for divided attention in visual object tasks Journal Article In: Journal of Vision, vol. 23, no. 13, pp. 1–17, 2023. @article{Popovkina2023, In this study, we asked to what degree hemifields contribute to divided attention effects observed in tasks with object-based judgments. If object recognition processes in the two hemifields were fully independent, then placing stimuli in separate hemifields would eliminate divided attention effects; in the alternative extreme, if object recognition processes in the two hemifields were fully integrated, then placing stimuli in separate hemifields would not modulate divided attention effects. Using a dual-task paradigm, we compared performance in a semantic categorization task for relevant stimuli arranged in the same hemifield to performance for relevant stimuli arranged in separate left and right hemifields. In two experiments, there was a reliable decrease in divided attention effects when stimuli were shown in separate hemifields compared to the same hemifield. However, the effect of divided attention was not eliminated. These results reject both the independent and integrated hypotheses, and instead support a third alternative: that object recognition processes in the two hemifields are partially dependent. More specifically, the magnitude of modulation by hemifields was closer to the prediction of the integrated hypothesis, suggesting that for dual tasks with objects, dependent processing is mostly shared across the visual field. |
Brendan L. Portengen; Marnix Naber; Giorgio L. Porro; Douwe Bergsma; Evert J. Veldman; Saskia M. Imhof In: Eye and Brain, vol. 15, pp. 77–89, 2023. @article{Portengen2023a, Purpose: We improve pupillary responses and diagnostic performance of flicker pupil perimetry through alterations in global and local color contrast and luminance contrast in adult patients suffering from visual field defects due to cerebral visual impairment (CVI). Methods: Two experiments were conducted on patients with CVI (Experiment 1: 19 subjects, age M and SD 57.9 ± 14.0; Experiment 2: 16 subjects, age M and SD 57.3 ± 14.7) suffering from absolute homonymous visual field (VF) defects. We altered global color contrast (stimuli consisted of white, yellow, cyan and yellow-equiluminant-to-cyan colored wedges) in Experiment 1, and we manipulated luminance and local color contrast with bright and dark yellow and multicolor wedges in a 2-by-2 design in Experiment 2. Stimuli consecutively flickered across 44 stimulus locations within the inner 60 degrees of the VF and were offset to a contrasting (opponency colored) dark background. Pupil perimetry results were compared to standard automated perimetry (SAP) to assess diagnostic accuracy. Results: A bright stimulus with global color contrast using yellow (p= 0.009) or white (p= 0.006) evoked strongest pupillary responses as opposed to stimuli containing local color contrast and lower brightness. Diagnostic accuracy, however, was similar across global color contrast conditions in Experiment 1 (p= 0.27) and decreased when local color contrast and less luminance contrast was introduced in Experiment 2 (p= 0.02). The bright yellow condition resulted in highest performance (AUC M = 0.85 ± 0.10 |
Brendan L. Portengen; Giorgio L. Porro; Saskia M. Imhof; Marnix Naber The trade-off between luminance and color contrast assessed with pupil responses Journal Article In: Translational Vision Science & Technology, vol. 12, no. 1, pp. 19–25, 2023. @article{Portengen2023, Purpose: A scene consisting of a white stimulus on a black background incorporates strong luminance contrast. When both stimulus and background receive different colors, luminance contrast decreases but color contrast increases. Here, we sought to characterize the pattern of stimulus salience across varying trade-offs of color and luminance contrasts by using the pupil light response. Methods: Three experiments were conducted with 17, 16, and 17 healthy adults. For all experiments, a flickering stimulus (2 Hz; alternating color to black) was presented superimposed on a background with a complementary color to the stimulus (i.e., opponency colors in human color perception: blue and yellow for Experiment 1, red and green for Experiment 2, and equiluminant red and green for Experiment 3). Background luminance varied between 0% and 45% to trade off luminance and color contrast with the stimulus. By comparing the locus of the optimal trade-off between color and luminance across different color axes, we explored the generality of the trade-off. Results: The strongest pupil responses were found when a substantial amount of color contrast was present (at the expense of luminance contrast). Pupil response amplitudes increased by 15% to 30% after the addition of color contrast. An optimal pupillary responsiveness was reached at a background luminance setting of 20% to 35% color contrast across several color axes. Conclusions: These findings suggest that a substantial component of pupil light responses incorporates color processing. More sensitive pupil responses and more salient stimulus designs can be achieved by adding subtle levels of color contrast between stimulus and background. 
Translational Relevance: More robust pupil responses will enhance tests of the visual field with pupil perimetry. |
G. V. Portnova; K. M. Liaukovich; L. N. Vasilieva; E. I. Alshanskaia Autonomic and behavioral indicators on increased cognitive loading in healthy volunteers Journal Article In: Neuroscience and Behavioral Physiology, vol. 53, no. 1, pp. 92–102, 2023. @article{Portnova2023, Cognitive and emotional loading during increases in task difficulty leads to activation of various parts of the autonomic nervous system; it can be accompanied by an increase in problem-solving efficiency, but may also contribute to destabilization of emotional status and decreases in productivity. An increase in cognitive loading in conditions of high motivation of subjects constitutes a stress factor and is expressed in various reactions of the sympathetic and parasympathetic compartments in response to loading. The aim of the present work was to study the features of various autonomic reactions to gradually increasing task difficulty, which included recording pupil area and the number of blinks, as well as the frequency of respiratory movements, measures of heart rate variability, and galvanic skin responses. Ten healthy volunteers took part in the study. The experimental paradigm included six levels of task difficulty requiring the active participation of working memory and attention. Increases in task difficulty from the first level to the sixth led to a gradual increase in pupil area and the number of blinks, which we suggest corresponds to an increase in sympathetic nervous system activation. Linear changes in the autonomic parameters of the respiratory and cardiovascular systems, as well as the electrical activity of the skin, were observed only up to the third level of difficulty. Further increases in difficulty led to opposite changes in these indicators and were accompanied by decreases in problem-solving efficiency.
A more marked change in the galvanic skin response during problem-solving correlated with a decrease in mood after the study, indirectly indicating a higher level of emotional stress. |
Kazutaka Maeda; Ken Inoue; Masahiko Takada; Okihide Hikosaka Environmental context-dependent activation of dopamine neurons via putative amygdala-nigra pathway in macaques Journal Article In: Nature Communications, vol. 14, no. 1, pp. 1–12, 2023. @article{Maeda2023, Seeking out good and avoiding bad objects is critical for survival. In practice, objects are rarely good every time or everywhere, but only at the right time or place. Whereas the basal ganglia (BG) are known to mediate goal-directed behavior, for example, saccades to rewarding objects, it remains unclear how such simple behaviors are rendered contingent on higher-order factors, including environmental context. Here we show that amygdala neurons are sensitive to environments and may regulate putative dopamine (DA) neurons via an inhibitory projection to the substantia nigra (SN). In male macaques, we combined optogenetics with multi-channel recording to demonstrate that rewarding environments induce tonic firing changes in DA neurons as well as phasic responses to rewarding events. These responses may be mediated by disinhibition via a GABAergic projection onto DA neurons, which in turn is suppressed by an inhibitory projection from the amygdala. Thus, the amygdala may provide an additional source of learning to BG circuits, namely contingencies imposed by the environment. |
Oliver Maith; Javier Baladron; Wolfgang Einhäuser; Fred H. Hamker Exploration behavior after reversals is predicted by STN-GPe synaptic plasticity in a basal ganglia model Journal Article In: iScience, vol. 26, no. 5, pp. 1–23, 2023. @article{Maith2023, Humans can quickly adapt their behavior to changes in the environment. Classical reversal learning tasks mainly measure how well participants can disengage from a previously successful behavior but not how alternative responses are explored. Here, we propose a novel 5-choice reversal learning task with alternating position-reward contingencies to study exploration behavior after a reversal. We compare human exploratory saccade behavior with a prediction obtained from a neuro-computational model of the basal ganglia. A new synaptic plasticity rule for learning the connectivity between the subthalamic nucleus (STN) and external globus pallidus (GPe) results in exploration biases to previously rewarded positions. The model simulations and human data both show that during experimental experience exploration becomes limited to only those positions that have been rewarded in the past. Our study demonstrates how quite complex behavior may result from a simple sub-circuit within the basal ganglia pathways. |
Giorgio L. Manenti; Aslan S. Dizaji; Caspar M. Schwiedrzik Variability in training unlocks generalization in visual perceptual learning through invariant representations Journal Article In: Current Biology, vol. 33, no. 5, pp. 817–826, 2023. @article{Manenti2023, Stimulus and location specificity are long considered hallmarks of visual perceptual learning. This renders visual perceptual learning distinct from other forms of learning, where generalization can be more easily attained, and therefore unsuitable for practical applications, where generalization is key. Based on the hypotheses derived from the structure of the visual system, we test here whether stimulus variability can unlock generalization in perceptual learning. We train subjects in orientation discrimination, while we vary the amount of variability in a task-irrelevant feature, spatial frequency. We find that, independently of task difficulty, this manipulation enables generalization of learning to new stimuli and locations, while not negatively affecting the overall amount of learning on the task. We then use deep neural networks to investigate how variability unlocks generalization. We find that networks develop invariance to the task-irrelevant feature when trained with variable inputs. The degree of learned invariance strongly predicts generalization. A reliance on invariant representations can explain variability-induced generalization in visual perceptual learning. This suggests new targets for understanding the neural basis of perceptual learning in the higher-order visual cortex and presents an easy-to-implement modification of common training paradigms that may benefit practical applications. |
Marcello Maniglia; Kristina M. Visscher; Aaron R. Seitz Consistency of preferred retinal locus across tasks and participants trained with a simulated scotoma Journal Article In: Vision Research, vol. 203, pp. 1–9, 2023. @article{Maniglia2023, After loss of central vision following retinal pathologies such as macular degeneration (MD), patients often adopt compensatory strategies including developing a “preferred retinal locus” (PRL) to replace the fovea in tasks involving fixation. A key question is whether patients develop multi-purpose PRLs or whether their oculomotor strategies adapt to the demands of the task. While most MD patients develop a PRL, clinical evidence suggests that patients may develop multiple PRLs and switch between them according to the task at hand. To understand this, we examined a model of central vision loss in normally seeing individuals and tested whether they used the same or different PRLs across tasks after training. Nineteen participants trained for 10 sessions on contrast detection while in conditions of gaze-contingent, simulated central vision loss. Before and after training, peripheral looking strategies were evaluated during tasks measuring visual acuity, reading abilities and visual search. To quantify strategies in these disparate, naturalistic tasks, we measured and compared the amount of task-relevant information at each of 8 equally spaced, peripheral locations, while participants performed the tasks. Results showed that some participants used consistent viewing strategies across tasks whereas other participants' strategies differed depending on task. This novel method allows quantification of peripheral vision use even in relatively ecological tasks. These results represent one of the first examinations of peripheral viewing strategies across tasks in simulated vision loss. 
Results suggest that individual differences in peripheral looking strategies following simulated central vision loss may model those developed in pathological vision loss. |
Jiayu Mao; Shuang Qiu; Wei Wei; Huiguang He Cross-modal guiding and reweighting network for multi-modal RSVP-based target detection Journal Article In: Neural Networks, vol. 161, pp. 65–82, 2023. @article{Mao2023, Rapid Serial Visual Presentation (RSVP) based Brain–Computer Interface (BCI) facilitates the high-throughput detection of rare target images by detecting evoked event-related potentials (ERPs). At present, the decoding accuracy of the RSVP-based BCI system limits its practical applications. This study introduces eye movements (gaze and pupil information), referred to as EYE modality, as another useful source of information to combine with EEG-based BCI and forms a novel target detection system to detect target images in RSVP tasks. We performed an RSVP experiment, recorded the EEG signals and eye movements simultaneously during a target detection task, and constructed a multi-modal dataset including 20 subjects. Also, we proposed a cross-modal guiding and fusion network to fully utilize EEG and EYE modalities and fuse them for better RSVP decoding performance. In this network, a two-branch backbone was built to extract features from these two modalities. A Cross-Modal Feature Guiding (CMFG) module was proposed to guide EYE modality features to complement the EEG modality for better feature extraction. A Multi-scale Multi-modal Reweighting (MMR) module was proposed to enhance the multi-modal features by exploring intra- and inter-modal interactions. Finally, a Dual Activation Fusion (DAF) module was proposed to modulate the enhanced multi-modal features for effective fusion. Our proposed network achieved a balanced accuracy of 88.00% (±2.29) on the collected dataset. The ablation studies and visualizations revealed the effectiveness of the proposed modules. This work implies the effectiveness of introducing the EYE modality in RSVP tasks. 
Our proposed network is thus a promising method for RSVP decoding that further improves the performance of RSVP-based target detection systems. |
Beatriz Martín-Luengo; Karlos Luna; Yury Shtyrov Conversational pragmatics: Memory reporting strategies in different social contexts Journal Article In: Frontiers in Psychology, vol. 14, pp. 1–10, 2023. @article{MartinLuengo2023, Previous studies in conversational pragmatics have shown that the information people share with others heavily depends on the confidence they have in the correctness of a candidate answer. At the same time, different social contexts prompt different incentive structures, which set a higher or lower confidence criterion to determine which potential answer to report. In this study, we investigated how the different incentive structures of several types of social contexts, together with different levels of knowledge, affect the amount of information we are willing to share. Participants answered easy, intermediate, and difficult general-knowledge questions and decided whether they would report or withhold their selected answer in different social contexts: formal vs. informal, which could be either constrained (a context that promotes providing only responses we are certain about) or loose (with an incentive structure that maximizes providing any type of answer). Overall, our results confirmed that social contexts are associated with different incentive structures, which affect memory reporting strategies. We also found that the difficulty of the questions is an important factor in conversational pragmatics. Our results highlight the relevance of studying different incentive structures of social contexts to understand the underlying processes of conversational pragmatics, and stress the importance of considering metamemory theories of memory reporting. |
Jun Maruta; Lisa A. Spielman; Jamshid Ghajar Visuomotor synchronization: Military normative performance Journal Article In: Military Medicine, vol. 188, no. 3-4, pp. E484–E491, 2023. @article{Maruta2023, Introduction: Cognitive processes such as perception and reasoning are preceded by and dependent on attention. Because of the close overlap between neural circuits of attention and eye movement, attention may be objectively quantified with recording of eye movements during an attention-dependent task. Our previous work demonstrated that performance scores on a circular visual tracking task that requires dynamic synchronization of the gaze with the target motion can be impacted by concussion, sleep deprivation, and attention deficit/hyperactivity disorder. The current study examined the characteristics of performance on a standardized predictive visual tracking task in a large sample from a U.S. Military population to provide military normative data. Materials and Methods: The sample consisted of 1,594 active duty military service members of either sex aged 18-29 years who were stationed at Fort Hood Army Base. The protocol was reviewed and approved by the U.S. Army Medical Research and Materiel Command Institutional Review Board. Demographic, medical, and military history data were collected using questionnaires, and performance-based data were collected using a circular visual tracking test and Trail Making Test. Differences in visual tracking performance by demographic characteristics were examined with a multivariate analysis of variance, as well as a Kolmogorov-Smirnov test and a rank-sum test. Associations with other measures were examined with a rank-sum test or Spearman correlations. Results: Robust sex differences in visual tracking performance were found across the various statistical models, as well as age differences in several isolated comparisons. 
Accordingly, norms of performance scores, described in terms of percentile standings, were developed adjusting for age and sex. The effects of other measures on visual tracking performance were small or statistically non-significant. An examination of the score distributions of various metrics suggested that strategies preferred by men and women may optimize different aspects of visual tracking performance. Conclusion: This large-scale quantification of attention, using dynamic visuomotor synchronization performance, provides rigorously characterized age- and sex-based military population norms. This study establishes analytics for assessing normal and impaired attention and detecting changes within individuals over time. Practical applications for combat readiness and surveillance of attention impairment from sleep insufficiency, concussion, medication, or attention disorders will be enhanced with portable, easily accessible, fast, and reliable dynamic eye-tracking technologies. |
Jana Masselink; Alexis Cheviet; Caroline Froment-Tilikete; Denis Pélisson; Markus Lappe A triple distinction of cerebellar function for oculomotor learning and fatigue compensation Journal Article In: PLoS Computational Biology, vol. 19, no. 8, pp. 1–37, 2023. @article{Masselink2023, The cerebellum implements error-based motor learning via synaptic gain adaptation of an inverse model, i.e. the mapping of a spatial movement goal onto a motor command. Recently, we modeled the motor and perceptual changes during learning of saccadic eye movements, showing that learning is actually a threefold process. Besides motor recalibration of (1) the inverse model, learning also comprises perceptual recalibration of (2) the visuospatial target map and (3) of a forward dynamics model that estimates the saccade size from corollary discharge. Yet, the site of perceptual recalibration remains unclear. Here we dissociate cerebellar contributions to the three stages of learning by modeling the learning data of eight cerebellar patients and eight healthy controls. Results showed that cerebellar pathology restrains short-term recalibration of the inverse model while the forward dynamics model is well informed about the reduced saccade change. Adaptation of the visuospatial target map trended in the learning direction only in control subjects, yet without reaching significance. Moreover, some patients showed a tendency for uncompensated oculomotor fatigue caused by insufficient upregulation of saccade duration. According to our model, this could induce long-term perceptual compensation, consistent with the overestimation of target eccentricity found in the patients' baseline data. We conclude that the cerebellum mediates short-term adaptation of the inverse model, especially by control of saccade duration, while the forward dynamics model was not affected by cerebellar pathology. |
Nicolas Masson; Valérie Dormal; Martine Stephany; Christine Schiltz Eye movements reveal that young school children shift attention when solving additions and subtractions Journal Article In: Developmental Science, pp. 1–12, 2023. @article{Masson2023, Abstract: Adults shift their attention to the right or to the left along a spatial continuum when solving additions and subtractions, respectively. Studies suggest that these shifts not only support the exact computation of the results but also anticipatively narrow down the range of plausible answers when processing the operands. However, little is known about when and how these attentional shifts arise in childhood during the acquisition of arithmetic. Here, an eye-tracker with high spatio-temporal resolution was used to measure spontaneous eye movements, used as a proxy for attentional shifts, while children of 2nd (8 y-o; N = 50) and 4th (10 y-o; N = 48) Grade solved simple additions (e.g., 4+3) and subtractions (e.g., 3-2). Gaze patterns revealed horizontal and vertical attentional shifts in both groups. Critically, horizontal eye movements were observed in 4th Graders as soon as the first operand and the operator were presented and thus before the beginning of the exact computation. In 2nd Graders, attentional shifts were only observed after the presentation of the second operand, just before the response was made. This demonstrates that spatial attention is recruited when children solve arithmetic problems, even in the early stages of learning mathematics. The time course of these attentional shifts suggests that with practice in arithmetic children start to use spatial attention to anticipatively guide the search for the answer and facilitate the implementation of solving procedures. Research Highlights: Additions and subtractions are associated with rightward and leftward attentional shifts in adults, but it is unknown when these mechanisms arise in childhood. 
Children aged 8–10 years solved single-digit additions and subtractions while looking at a blank screen. Eye movements showed that 8-year-olds already display spatial biases, possibly to represent the response once both operands are known. 10-year-olds shift attention before knowing the second operand to anticipatively guide the search for plausible answers. |
Sebastiaan Mathôt; Hermine Berberyan; Philipp Büchel; Veera Ruuskanen; Ana Vilotijević; Wouter Kruijne Effects of pupil size as manipulated through ipRGC activation on visual processing Journal Article In: NeuroImage, vol. 283, pp. 1–13, 2023. @article{Mathot2023, The size of the eyes' pupils determines how much light enters the eye and also how well this light is focused. Through this route, pupil size shapes the earliest stages of visual processing. Yet causal effects of pupil size on vision are poorly understood and rarely studied. Here we introduce a new way to manipulate pupil size, which relies on activation of intrinsically photosensitive retinal ganglion cells (ipRGCs) to induce sustained pupil constriction. We report the effects of both experimentally induced and spontaneous changes in pupil size on visual processing as measured through EEG. We compare these to the effects of stimulus intensity and covert visual attention, because previous studies have shown that these factors all have comparable effects on some common measures of early visual processing, such as detection performance and steady-state visual evoked potentials; yet it is still unclear whether these are superficial similarities, or rather whether they reflect similar underlying processes. Using a mix of neural-network decoding, ERP analyses, and time-frequency analyses, we find that induced pupil size, spontaneous pupil size, stimulus intensity, and covert visual attention all affect EEG responses, mainly over occipital and parietal electrodes, but—crucially—that they do so in qualitatively different ways. Induced and spontaneous pupil-size changes mainly modulate activity patterns (but not overall power or intertrial coherence) in the high-frequency beta range; this may reflect an effect of pupil size on oculomotor activity and/or visual processing. 
In addition, spontaneous (but not induced) pupil size tends to correlate positively with intertrial coherence in the alpha band; this may reflect a non-causal relationship, mediated by arousal. Taken together, our findings suggest that pupil size has qualitatively different effects on visual processing from stimulus intensity and covert visual attention. This shows that pupil size as manipulated through ipRGC activation strongly affects visual processing, and provides concrete starting points for further study of this important yet understudied earliest stage of visual processing. |
Siyi Li; Xuemei Zeng; Zhujun Shao; Qing Yu Neural representations in visual and parietal cortex differentiate between imagined, perceived, and illusory experiences Journal Article In: Journal of Neuroscience, vol. 43, no. 38, pp. 6508–6524, 2023. @article{Li2023i, Humans constantly receive massive amounts of information, both perceived from the external environment and imagined from the internal world. To function properly, the brain needs to correctly identify the origin of information being processed. Recent work has suggested common neural substrates for perception and imagery. However, it has remained unclear how the brain differentiates between external and internal experiences with shared neural codes. Here we tested this question in human participants (male and female) by systematically investigating the neural processes underlying the generation and maintenance of visual information from voluntary imagery, veridical perception, and illusion. The inclusion of illusion allowed us to differentiate between objective and subjective internality: while illusion has an objectively internal origin and can be viewed as involuntary imagery, it is also subjectively perceived as having an external origin like perception. Combining fMRI, eye-tracking, multivariate decoding and encoding approaches, we observed superior orientation representations in parietal cortex during imagery compared to perception, and conversely in early visual cortex. This imagery dominance gradually developed along a posterior-to-anterior cortical hierarchy from early visual to parietal cortex, emerged in the early epoch of imagery and sustained into the delay epoch, and persisted across varied imagined contents. 
Moreover, representational strength of illusion was more comparable to imagery in early visual cortex, but more comparable to perception in parietal cortex, suggesting content-specific representations in parietal cortex differentiate between subjectively internal and external experiences, as opposed to early visual cortex. These findings together support a domain-general engagement of parietal cortex in internally-generated experience. |
Tianyuan Li; Pok-Man Siu State relationship orientation and helping behaviors: The influence of hunger and trait relationship orientations Journal Article In: Current Psychology, vol. 42, no. 31, pp. 27317–27330, 2023. @article{Li2023n, Exchange orientation (EO) and communal orientation (CO) are two fundamental relationship orientations (ROs). We argue that state RO (i.e., the relative activation of the two ROs at a specific moment) varies across situations and should be differentiated from trait ROs. In two studies, we examined how state RO affected subsequent helping behaviors and how it was influenced by a situational factor (i.e., hunger). We also examined whether trait ROs moderated the above links. An eye-tracking paradigm (Study 1) and a scenario-based paradigm (Study 2) were adopted to assess state RO. The two studies consistently found that relatively more activation of state EO over state CO reduced helping tendency toward strangers (Study 1) and acquaintances (Study 2). High trait CO amplified the effect in Study 1. Moreover, hunger heightened the relative activation of state EO over state CO in both studies, but the effect was only significant for participants with high trait EO in Study 1. The results highlight the importance of studying the momentary variation of ROs and open new research directions. |
Xinjing Li; Qingqing Qu Verbal working memory capacity modulates semantic and phonological prediction in spoken comprehension Journal Article In: Psychonomic Bulletin & Review, pp. 1–10, 2023. @article{Li2023j, Mounting evidence suggests that people may use multiple cues to predict different levels of representation (e.g., semantic, syntactic, and phonological) during language comprehension. One question that has been less investigated is the relationship between general cognitive processing and the efficiency of prediction at various linguistic levels, such as semantic and phonological levels. To address this research gap, the present study investigated how working memory capacity (WMC) modulates different kinds of prediction behavior (i.e., semantic prediction and phonological prediction) in the visual world. Chinese speakers listened to highly predictable sentences that contained a predictable target word, and viewed a visual display of objects. The visual display contained a target object corresponding to the predictable word, a semantic or a phonological competitor that was semantically or phonologically related to the predictable word, and an unrelated object. We conducted a Chinese version of the reading span task to measure verbal WMC and grouped participants into high- and low-span groups. Participants in both groups showed semantic and phonological prediction of comparable magnitude during language comprehension, with earlier semantic prediction in the high-span group and a similar time course of phonological prediction in both groups. These results suggest that verbal working memory modulates predictive processing in language comprehension. |
Yijing Li; Xiangling Zhuang; Guojie Ma Use of minimal working memory in visual comparison: An eye-tracking study Journal Article In: Perception, vol. 52, no. 11-12, pp. 759–773, 2023. @article{Li2023k, In this study, we used a novel application of the paradigm provided by Pomplun to examine the eye movement strategies of using minimal working memory in visual comparison. This paradigm includes two tasks: one is a free comparison and the other is a single sequential comparison. In the free comparison, participants can freely view two horizontally presented stimuli until they judge whether the two stimuli are the same or not. In the single sequential comparison, participants can only view the left-side stimuli one time; when their eyes cross the invisible boundary at the center of the screen, the left-side stimuli disappear and the right-side stimuli appear. Participants need to judge whether the right-side stimuli are the same as the disappeared left-side stimuli. Eye movement data showed significant differences between the single sequential comparison and free comparison tasks, suggesting the use of minimal working memory in free comparison. Moreover, when the number of items was more than three, an average of 2.87 items was processed in each view sequence. Participants also used the alternating left-right reference strategy, which yielded the shortest scan path with the use of minimal working memory. The typical eye movement strategy in visual comparison and its theoretical significance are discussed. |
Yongkai Li; Shuai Zhang; Gancheng Zhu; Zehao Huang; Rong Wang; Xiaoting Duan; Zhiguo Wang A CNN-based wearable system for driver drowsiness detection Journal Article In: Sensors, vol. 23, no. 7, 2023. @article{Li2023l, Drowsiness poses a serious challenge to road safety and various in-cabin sensing technologies have been experimented with to monitor driver alertness. Cameras offer a convenient means for contactless sensing, but they may violate user privacy and require complex algorithms to accommodate user (e.g., sunglasses) and environmental (e.g., lighting conditions) constraints. This paper presents a lightweight convolutional neural network that measures eye closure based on eye images captured by a wearable glass prototype, which features a hot mirror-based design that allows the camera to be installed on the glass temples. The experimental results showed that the wearable glass prototype, with the neural network at its core, was highly effective in detecting eye blinks. The blink rate derived from the glass output was highly consistent with an industrial gold standard EyeLink eye-tracker. As eye blink characteristics are sensitive measures of driver drowsiness, the glass prototype and the lightweight neural network presented in this paper would provide a computationally efficient yet viable solution for real-world applications. |
Guangsheng Liang; John E. Poquiz; Miranda Scolari Space- and feature-based attention operate both independently and interactively within latent components of perceptual decision making Journal Article In: Journal of Vision, vol. 23, no. 1, pp. 1–17, 2023. @article{Liang2023, Top-down visual attention filters undesired stimuli while selected information is afforded the lion's share of limited cognitive resources. Multiple selection mechanisms can be deployed simultaneously, but how unique influences of each combine to facilitate behavior remains unclear. Previously, we failed to observe an additive perceptual benefit when both space-based attention (SBA) and feature-based attention (FBA) were cued in a sparse display (Liang & Scolari, 2020): FBA was restricted to higher order decision-making processes when combined with a valid spatial cue, whereas SBA additionally facilitated target enhancement. Here, we introduced a series of design modifications across three experiments to elicit both attention mechanisms within signal enhancement while also investigating the impacts on decision making. First, we found that when highly reliable spatial and feature cues made unique contributions to search (experiment 1), or when each cue component was moderately reliable (experiments 2a and 2b), both mechanisms were deployed independently to resolve the target. However, the same manipulations produced interactive attention effects within other latent decision-making components that depended on the probability of the integrated cueing object. Time spent before evidence accumulation was reduced and responses were more conservative for the most likely pre-cue combination—even when it included an invalid component. These data indicate that selection mechanisms operate on sensory signals invariably in an independent manner, whereas a higher-order dependency occurs outside of signal enhancement. |
Junhao Liang; Severin Maher; Li Zhaoping Eye movement evidence for the V1 Saliency Hypothesis and the Central-peripheral Dichotomy theory in an anomalous visual search task Journal Article In: Vision Research, vol. 212, pp. 1–14, 2023. @article{Liang2023a, Typically, searching for a target among uniformly tilted non-targets is easier when this target is perpendicular, rather than parallel, to the non-targets. The V1 Saliency Hypothesis (V1SH) – that V1 creates a saliency map to guide attention exogenously – predicts exactly the opposite in a special case: each target or non-target is a pair of equally-sized disks, a homo-pair of two disks of the same color, black or white, or a hetero-pair of two disks of the opposite color; the inter-disk displacement defines its orientation. This prediction – parallel advantage – was supported by the finding that parallel targets require shorter reaction times (RTs) to report targets' locations. Furthermore, it is stronger for targets further from the center of search images, as predicted by the Central-peripheral Dichotomy (CPD) theory entailing that saliency effects are stronger in peripheral than in central vision. However, the parallel advantage could arise from a shorter time required to recognize – rather than to shift attention to – the parallel target. By gaze tracking, the present study confirms that the parallel advantage is solely due to the RTs for the gaze to reach the target. Furthermore, when the gaze is sufficiently far from the target during search, saccade to a parallel, rather than perpendicular, target is more likely, demonstrating the Central-peripheral Dichotomy more directly. Parallel advantage is stronger among observers encouraged to let their search be guided by spontaneous gaze shifts, which are presumably guided by bottom-up saliency rather than top-down factors. |
Hsin-I Liao; Haruna Fujihira; Shimpei Yamagishi; Yung-Hao Yang; Shigeto Furukawa Seeing an auditory object: Pupillary light response reflects covert attention to auditory space and object Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 2, pp. 276–290, 2023. @article{Liao2023a, Attention to the relevant object and space is the brain's strategy to effectively process the information of interest in complex environments with limited neural resources. Numerous studies have documented how attention is allocated in the visual domain, whereas the nature of attention in the auditory domain has been much less explored. Here, we show that the pupillary light response can serve as a physiological index of auditory attentional shift and can be used to probe the relationship between space-based and object-based attention as well. Experiments demonstrated that the pupillary response corresponds to the luminance condition where the attended auditory object (e.g., spoken sentence) was located, regardless of whether attention was directed by a spatial (left or right) or nonspatial (e.g., the gender of the talker) cue and regardless of whether the sound was presented via headphones or loudspeakers. These effects on the pupillary light response could not be accounted for as a consequence of small (although observable) biases in gaze position drifting. The overall results imply a unified audiovisual representation of spatial attention. Auditory object-based attention contains the space representation of the attended auditory object, even when the object is oriented without explicit spatial guidance. |
Ming-Ray Liao; Andy J. Kim; Brian A. Anderson Neural correlates of value-driven spatial orienting Journal Article In: Psychophysiology, vol. 60, no. 9, pp. 1–13, 2023. @article{Liao2023, Reward learning has been shown to habitually guide overt spatial attention to specific regions of a scene. However, the neural mechanisms that support this bias are unknown. In the present study, participants learned to orient themselves to a particular quadrant of a scene (a high-value quadrant) to maximize monetary gains. This learning was scene-specific, with the high-value quadrant varying across different scenes. During a subsequent test phase, participants were faster at identifying a target if it appeared in the high-value quadrant (valid), and initial saccades were more likely to be made to the high-value quadrant. fMRI analyses during the test phase revealed learning-dependent priority signals in the caudate tail, superior colliculus, frontal eye field, anterior cingulate cortex, and insula, paralleling findings concerning feature-based, value-driven attention. In addition, ventral regions typically associated with scene selection and spatial information processing, including the hippocampus, parahippocampal gyrus, and temporo-occipital cortex, were also implicated. Taken together, our findings offer new insights into the neural architecture subserving value-driven attention, both extending our understanding of nodes in the attention network previously implicated in feature-based, value-driven attention and identifying a ventral network of brain regions implicated in reward's influence on scene-dependent spatial orienting. |
Justin D. Lieber; Gerick M. Lee; Najib J. Majaj; J. Anthony Movshon Sensitivity to naturalistic texture relies primarily on high spatial frequencies Journal Article In: Journal of Vision, vol. 23, no. 2, pp. 1–25, 2023. @article{Lieber2023, Natural images contain information at multiple spatial scales. Though we understand how early visual mechanisms split multiscale images into distinct spatial frequency channels, we do not know how the outputs of these channels are processed further by mid-level visual mechanisms. We have recently developed a texture discrimination task that uses synthetic, multi-scale, “naturalistic” textures to isolate these mid-level mechanisms. Here, we use three experimental manipulations (image blur, image rescaling, and eccentric viewing) to show that perceptual sensitivity to naturalistic structure is strongly dependent on features at high object spatial frequencies (measured in cycles/image). As a result, sensitivity depends on a texture acuity limit, a property of the visual system that sets the highest retinal spatial frequency (measured in cycles/degree) at which observers can detect naturalistic features. Analysis of the texture images using a model observer analysis shows that naturalistic image features at high object spatial frequencies carry more task-relevant information than those at low object spatial frequencies. That is, the dependence of sensitivity on high object spatial frequencies is a property of the texture images, rather than a property of the visual system. Accordingly, we find human observers' ability to extract naturalistic information (their efficiency) is similar for all object spatial frequencies. We conclude that the mid-level mechanisms that underlie perceptual sensitivity effectively extract information from all image features below the texture acuity limit, regardless of their retinal and object spatial frequency. |
Agnieszka Lijewska The influence of semantic bias on triple non-identical cognates during reading: Evidence from trilinguals' eye movements Journal Article In: Second Language Research, vol. 39, no. 4, pp. 1235–1263, 2023. @article{Lijewska2023, The current study investigated how the processing of triple cognates (words sharing form and meaning across three languages) is modulated by the semantic bias of sentence context in a reading task. In the study, Polish–German–English trilinguals read English sentences while their eye movements were monitored. The sentences were either semantically biased (high-context) or neutral (low-context) towards target words. The targets were either Polish–German–English cognates whose cross-language form overlap was incomplete (e.g. DIAMENT–DIAMANT–DIAMOND) or English-only controls (e.g. KURCZAK–HÄHNCHEN–CHICKEN). The results revealed a significant effect of context in gaze durations and in total reading time. Importantly, no cognate facilitation effect was identified in any reading measure. The gaze duration data additionally revealed that English-only controls were read slower in low-context sentences than in high-context sentences but gaze durations for cognates were not affected by the sentence context. Thus, prior bilingual findings were only partially replicated in the current study with trilinguals. This suggests that bilingual models of language processing should be carefully adapted to trilinguals. The current data may also mean that non-identical cognates (even those shared across three languages) induce relatively small effects and large samples of participants and items may be needed to detect such effects across reading measures. |
Jaeseob Lim; Sang-Hun Lee Spatial correspondence in relative space regulates serial dependence Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–11, 2023. @article{Lim2023, Our perception is often attracted to what we have seen before, a phenomenon called ‘serial dependence.' Serial dependence can help maintain a stable perception of the world, given the statistical regularity in the environment. If serial dependence serves this presumed utility, it should be pronounced between consecutive elements that share the same identity when multiple elements spatially shift across successive views. However, such preferential serial dependence between identity-matching elements in dynamic situations has never been empirically tested. Here, we hypothesized that serial dependence between consecutive elements is modulated more effectively by the spatial correspondence in relative space than by that in absolute space because spatial correspondence in relative coordinates can warrant identity matching invariantly to changes in absolute coordinates. To test this hypothesis, we developed a task where two targets change positions in unison between successive views. We found that serial dependence was substantially modulated by the correspondence in relative coordinates, but not by that in absolute coordinates. Moreover, such selective modulation by the correspondence in relative space was also observed even for the serial dependence defined by previous non-target elements. Our findings are consistent with the view that serial dependence subserves object-based perceptual stabilization over time in dynamic situations. |
Juan Linde-Domingo; Bernhard Spitzer Geometry of visuospatial working memory information in miniature gaze patterns Journal Article In: Nature Human Behaviour, pp. 1–16, 2023. @article{LindeDomingo2023, Stimulus-dependent eye movements have been recognized as a potential confound in decoding visual working memory information from neural signals. Here we combined eye-tracking with representational geometry analyses to uncover the information in miniature gaze patterns while participants (n = 41) were cued to maintain visual object orientations. Although participants were discouraged from breaking fixation by means of real-time feedback, small gaze shifts (<1°) robustly encoded the to-be-maintained stimulus orientation, with evidence for encoding two sequentially presented orientations at the same time. The orientation encoding on stimulus presentation was object-specific, but it changed to a more object-independent format during cued maintenance, particularly when attention had been temporarily withdrawn from the memorandum. Finally, categorical reporting biases increased after unattended storage, with indications of biased gaze geometries already emerging during the maintenance periods before behavioural reporting. These findings disclose a wealth of information in gaze patterns during visuospatial working memory and indicate systematic changes in representational format when memory contents have been unattended. |
John P. Liska; Declan P. Rowley; Trevor T. K. Nguyen; Jens-Oliver Muthmann; Daniel A. Butts; Jacob L. Yates; Alexander C. Huk Running modulates primate and rodent visual cortex differently Journal Article In: eLife, vol. 12, no. 415, pp. 1–30, 2023. @article{Liska2023, When mice run, activity in their primary visual cortex (V1) is strongly modulated. This observation has altered conception of a brain region assumed to be a passive image processor. Extensive work has followed to dissect the circuits and functions of running-correlated modulation. However, it remains unclear whether visual processing in primates might similarly change during locomotion. We measured V1 activity in marmosets while they viewed stimuli on a treadmill. In contrast to mouse V1, marmoset V1 was slightly but reliably suppressed during running. Population-level analyses revealed trial-to-trial fluctuations of shared gain across V1 in both species, but these gain modulations were smaller and more often negatively correlated with running in marmosets. Thus, population-scale gain fluctuations of V1 reflect a common feature of mammalian visual cortical function, but important quantitative differences yield distinct consequences for the relation between vision and action in primates versus rodents. |
Chia-Yu Liu; Chao-Jung Wu Effects of working memory and relevant knowledge on reading texts and infographics Journal Article In: Reading and Writing, vol. 36, no. 162, pp. 2319–2343, 2023. @article{Liu2023i, Infographics are a new type of reading material comprising textual and visual information that has been used worldwide. Nonetheless, there has been limited research investigating people's infographic-reading performance and the characteristics of superior readers. This study adopted Chinese texts and infographics as materials and employed eye-tracking technology to assess how working memory and relevant knowledge affected 137 college students' reading comprehension, as indicated by reading accuracy (ACC), and reading efficiency, which in turn was indicated by reading time (RT) and total fixation duration (TFD). For texts, verbal working memory (VWM) exhibited no effects on individuals' reading performance; visuospatial working memory (VSWM) exerted positive effects on both ACC and TFD, and participants with higher knowledge demonstrated better ACC. For infographics, higher-VWM participants showed greater ACC, and higher-VSWM participants displayed a longer RT and TFD, though the effect of knowledge was limited. Moreover, a significant interaction effect of VWM and relevant knowledge on the TFD of infographics was observed, indicating that individuals' prior knowledge or experience might structure schemas in an infographic and then act with VWM to accelerate reading speed. This study improves our understanding of how working memory and relevant knowledge impact the processing of materials with different synthesized levels, and its implications for instruction and research are discussed. |
Chi-Hung Liu; Chun-Wei Chang; June Hung; John J. H. Lin; Pi-Shan Sung; Li-Ang Lee; Cheng-Ting Hsiao; Yi-Ping Chao; Elaine Shinwei Huang; Shu-Ling Wang Brain computed tomography reading of stroke patients by resident doctors from different medical specialities: An eye-tracking study Journal Article In: Journal of Clinical Neuroscience, vol. 117, no. 88, pp. 173–180, 2023. @article{Liu2023a, Background: Using the eye-tracking technique, our work aimed to examine whether difference in clinical background may affect the training outcome of resident doctors' interpretation skills and reading behaviour related to brain computed tomography (CT). Methods: Twelve resident doctors in the neurology, radiology, and emergency departments were recruited. Each participant had to read CT images of the brain for two cases. We evaluated each participant's accuracy of lesion identification. We also used the eye-tracking technique to assess reading behaviour. We recorded dwell times, fixation counts, run counts, and first-run dwell times of target lesions to evaluate visual attention. Transition entropy was applied to assess the temporal relations and spatial dynamics of systematic image reading. Results: The eye-tracking results showed that the image reading sequence examined by transition entropy was comparable among resident doctors from different medical specialties (p = 0.82). However, the dwell time of the target lesions was shorter for the resident doctors from the neurology department (4828.63 ms |
Dongyu Liu; Haibo Yang The improvement of attentional bias in individuals with problematic smartphone use through cognitive reappraisal: An eye-tracking study Journal Article In: Current Psychology, pp. 1–11, 2023. @article{Liu2023b, Attentional bias toward smartphone-related stimuli can intensify Problematic Smartphone Use (PSU) behaviors. The main objective of this study was to investigate the impact of Cognitive Reappraisal (CR) on the attentional bias of individuals with PSU. Twenty-five individuals with PSU (PSUG) and 25 Control Group (CG) participants were recruited and screened using the Smartphone Addiction Scale. The dot-probe paradigm was used in conjunction with eye-tracking technology to examine the CR effect on attentional bias toward smartphone icon stimuli. Under non-reappraisal conditions, the first fixation duration on smartphone icon stimuli was significantly longer than that on neutral stimuli in the PSUG but not the CG. No other expected eye-tracking measures were significant. Additionally, the craving for smartphone icon stimuli of the PSUG was significantly higher than that of the CG under the non-reappraisal condition but not under the reappraisal condition. The findings indicated that CR improves the first fixation duration of attentional bias toward smartphone icon stimuli in the PSUG. This effect may be attributed to CR's ability to reduce cravings for smartphone stimuli and enhance the inhibition capacity. The results of this study could guide interventions for treating PSU and provide theoretical support for such treatment. |
Na Liu; Di Wu; Yifan Wang; Pan Zhang; Yinling Zhang Transcranial random noise stimulation boosts early motion perception learning rather than the later performance plateau Journal Article In: Journal of Cognitive Neuroscience, vol. 35, no. 6, pp. 1021–1031, 2023. @article{Liu2023c, The effect of transcranial random noise stimulation (tRNS) on visual perceptual learning has only been investigated during early training sessions, and the influence of tRNS on later performance is unclear. We engaged participants first in 8 days of training to reach a plateau (Stage 1) and then in continued training for 3 days (Stage 2). In the first group, tRNS was applied to visual areas of the brain while participants were trained on a coherent motion direction identification task over a period of 11 days (Stage 1 + Stage 2). In the second group, participants completed an 8-day training period without any stimulation to reach a plateau (Stage 1); after that, they continued training for 3 days, during which tRNS was administered (Stage 2). In the third group, participants completed the same training as the second group, but during Stage 2, tRNS was replaced by sham stimulation. Coherence thresholds were measured three times: before training, after Stage 1, and after Stage 2. Compared with sham stimulation, tRNS did not improve coherence thresholds during the plateau period. The comparison of learning curves between the first and third groups showed that tRNS decreased thresholds in the early training stage, but it failed to improve plateau thresholds. For the second and third groups, tRNS did not further enhance plateau thresholds after the continued 3-day training period. In conclusion, tRNS facilitated visual perceptual learning in the early stage, but its effect disappeared as the training continued. |
Qing Liu; Xueyao Yang; Zekai Chen; Wenjuan Zhang Using synchronized eye movements to assess attentional engagement Journal Article In: Psychological Research, vol. 87, no. 7, pp. 2039–2047, 2023. @article{Liu2023d, The gradual emergence of online education in China in recent years requires new means of real-time monitoring and timely feedback to students. This research examines the effectiveness of synchronized eye movement assessment of attentional engagement through two experiments. In the first experiment, 24 university students watched the same video in high and low attentional engagement states (manipulated via a serial subtraction task) to compare the Inter-Subject Correlation (ISC) of participants' eye movements across conditions. The results showed that the ISC of eye movements was significantly higher for participants in a high attentional engagement state than for participants in a low attentional engagement state. In the second experiment, 26 university students watched video materials accompanied by eye movement modeling examples. The results showed that the ISC of eye movements was significantly lower for participants in the group with eye movement modeling examples than for those without; however, overall test scores were significantly higher in the former group than in the latter. The first experiment showed that the eye movement trajectories of participants with high attentional engagement were more consistent than those of participants with low attentional engagement. Therefore, the ISC of participants' eye movements could be used as an objective indicator to assess and predict students' attentional conditions during online education. The second experiment showed that the eye movement modeling examples interfered with participants' attention distribution to some extent; nevertheless, they positively affected teaching effectiveness. Overall, the studies showed that the Inter-Subject Correlation is a reliable measure of attentional engagement in domestic online education. |
Tianyuan Liu; Bao Li; Chi Zhang; Panpan Chen; Weichen Zhao; Bin Yan Real-time classification of motor imagery using Dynamic Window-Level Granger Causality analysis of fMRI data Journal Article In: Brain Sciences, vol. 13, no. 10, pp. 1–15, 2023. @article{Liu2023e, This article presents a method for extracting neural signal features to identify the imagination of left- and right-hand grasping movements. A functional magnetic resonance imaging (fMRI) experiment is employed to identify four brain regions with significant activations during motor imagery (MI), and the effective connections between these regions of interest (ROIs) were calculated using Dynamic Window-level Granger Causality (DWGC). Then, a real-time fMRI (rt-fMRI) classification system for left- and right-hand MI is developed using the Open-NFT platform. We conducted data acquisition and processing on three subjects, all of whom were recruited from a local college. As a result, the maximum accuracy of a Support Vector Machine (SVM) classifier on real-time three-class classification (rest, left hand, and right hand) with effective connections is 69.3%, which is 3% higher on average than that of traditional multivoxel pattern classification analysis. Moreover, it significantly improves classification accuracy during the initial stage of MI tasks while reducing the latency effects in real-time decoding. The study suggests that the effective connections obtained through the DWGC method serve as valuable features for real-time decoding of MI using fMRI. Moreover, they exhibit higher sensitivity to changes in brain states. This research offers theoretical support and technical guidance for extracting neural signal features in the context of fMRI-based studies. |
Xiaoyi Liu; David Melcher The effect of familiarity on behavioral oscillations in face perception Journal Article In: Scientific Reports, vol. 13, no. 1, pp. 1–15, 2023. @article{Liu2023f, Studies on behavioral oscillations demonstrate that visual sensitivity fluctuates over time and visual processing varies periodically, mirroring neural oscillations at the same frequencies. Do these behavioral oscillations reflect fixed and relatively automatic sensory sampling, or top-down processes such as attention or predictive coding? To disentangle these theories, the current study used a dual-target rapid serial visual presentation paradigm, where participants indicated the gender of a face target embedded in streams of distractors presented at 30 Hz. On critical trials, two identical targets were presented with varied stimulus onset asynchrony from 200 to 833 ms. The target was either familiar or unfamiliar faces, divided into different blocks. We found a 4.6 Hz phase-coherent fluctuation in gender discrimination performance across both trial types, consistent with previous reports. In addition, however, we found an effect at the alpha frequency, with behavioral oscillations in the familiar blocks characterized by a faster high-alpha peak than for the unfamiliar face blocks. These results are consistent with the combination of both a relatively stable modulation in the theta band and faster modulation of the alpha oscillations. Therefore, the overall pattern of perceptual sampling in visual perception may depend, at least in part, on task demands. |
Xin He Liu; Lu Gan; Zhi Ting Zhang; Pan Ke Yu; Ji Dai Probing the processing of facial expressions in monkeys via time perception and eye tracking Journal Article In: Zoological Research, vol. 44, no. 5, pp. 882–893, 2023. @article{Liu2023g, Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and sophisticated behavioral tasks, namely the temporal discrimination task (TDT) and face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to Negative facial expressions in monkeys while showing longer reaction time to Negative facial expressions in humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, while human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys only showed bias toward emotional expressions upon observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than those of humans, thus shedding new light on inter-species communication through facial expressions between NHPs and humans. |
Yaohui Liu; Peida Zhan; Yanbin Fu; Qipeng Chen; Kaiwen Man; Yikun Luo In: Intelligence, vol. 100, pp. 1–14, 2023. @article{Liu2023h, Previous studies have found that participants use two cognitive strategies—constructive matching and response elimination—in responding to items in the Raven's Advanced Progressive Matrices (APM). This study proposed a multi-strategy psychometric model that builds on item responses and also incorporates eye-tracking measures, including but not limited to the proportional time on matrix area (PTM), the rate of toggling (ROT), and the rate of latency to first toggle (RLT). By jointly analyzing item responses and eye-tracking measures, this model can measure each participant's intelligence and identify the cognitive strategy used by each participant for each item in the APM. Several main findings were revealed from an eye-tracking-based APM study using the proposed model: (1) The effects of PTM and RLT on the constructive matching strategy selection probability were positive and higher for the former than the latter, while the effect of ROT was negligible. (2) The average intelligence of participants who used the constructive matching strategy was higher than that of participants who used the response elimination strategy, and participants with higher intelligence were more likely to use the constructive matching strategy. (3) High-intelligence participants increased their use of the constructive matching strategy as item difficulty increased, whereas low-intelligence participants decreased their use as item difficulty increased. (4) Participants took significantly less time using the constructive matching strategy than the response elimination strategy. Overall, the proposed model follows the theory-driven modeling logic and provides a new way of studying cognitive strategy in the APM by presenting quantitative results. |
Joshua B. Moskowitz; Sarah A. Berger; Jolande Fooken; Monica S. Castelhano; Jason P. Gallivan; J. Randall Flanagan The influence of movement-related costs when searching to act and acting to search Journal Article In: Journal of Neurophysiology, vol. 129, no. 1, pp. 115–130, 2023. @article{Moskowitz2023, Real world search behaviour often involves limb movements, either during search or following search. Here we investigated whether movement-related costs influence search behaviour in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behaviour. |
Joshua B. Moskowitz; Jolande Fooken; Monica S. Castelhano; Jason P. Gallivan; J. Randall Flanagan Visual search for reach targets in actionable space is influenced by movement costs imposed by obstacles Journal Article In: Journal of Vision, vol. 23, no. 6, pp. 1–17, 2023. @article{Moskowitz2023a, Real world search tasks often involve action on a target object once it has been located. However, few studies have examined whether movement-related costs associated with acting on located objects influence visual search. Here, using a task in which participants reached to a target object after locating it, we examined whether people take into account obstacles that increase movement-related costs for some regions of the reachable search space but not others. In each trial, a set of 36 objects (4 targets and 32 distractors) were displayed on a vertical screen and participants moved a cursor to a target after locating it. Participants had to fixate on an object to determine whether it was a target or distractor. A rectangular obstacle, of varying length, location, and orientation, was briefly displayed at the start of the trial. Participants controlled the cursor by moving the handle of a robotic manipulandum in a horizontal plane. The handle applied forces to simulate contact between the cursor and the unseen obstacle. We found that search, measured using eye movements, was biased to regions of the search space that could be reached without moving around the obstacle. This result suggests that when deciding where to search, people can incorporate the physical structure of the environment so as to reduce the movement-related cost of subsequently acting on the located target. |
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder In: Reading and Writing, pp. 1–27, 2023. @article{Moss2023, The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously) and sequential (texts were presented first, then questions were shown without text access). Their eye movements were recorded during testing. Working memory capacity (WMC) and centrality of textual information were measured. Participants in the sequential condition had longer first-pass reading times compared to participants in the concurrent condition, while participants in the concurrent condition had longer total processing times per word. In addition, participants with higher WMC had longer total reading times per word. Finally, participants in the sequential condition with higher WMC had longer processing times in central regions. Even among skilled college readers, participants with lower WMC had difficulty adjusting their reading behaviors to meet the task demands such as distinguishing central and peripheral ideas. However, participants with higher WMC increased attention to important text areas. One potential explanation is that participants with higher WMC are better able to construct a coherent mental model of the text, and attending to central text areas is an essential component of mental model formation. Therefore, these results help clarify the relationship between the purpose for reading and mental model development. |
Simar Moussaoui; Christina F. Pereira; Matthias Niemeier Working memory in action: Transsaccadic working memory deficits in the left visual field and after transcallosal remapping Journal Article In: Cortex, vol. 159, pp. 26–38, 2023. @article{Moussaoui2023, Every waking second, we make three saccadic eye movements that move our retinal images. Thus, to attain a coherent image of the world we need to remember visuo-spatial information across saccades. But transsaccadic working memory (tWM) remains poorly understood. Crucially, there has been a debate whether there are any differences in tWM for the left vs. right visual field and depending on saccade direction. However, previous studies have probed tWM with minimal loads, whereas spatial differences might arise with higher loads. Here we employed a task that probed higher memory load for spatial information in the left and right visual field and with horizontal as well as vertical saccades. We captured several measures of precision and accuracy of performance that, when submitted to principal component analysis, produced two components. Component 1, mainly associated with precision, yielded greater error for the left than the right visual field. Component 2 was associated with performance accuracy and unexpectedly produced a disadvantage after rightward saccades. Both components showed that performance was worse when rightward or leftward saccades afforded a shift of memory representations between visual fields compared to remapping within the same field. Our study offers several novel findings. It is the first to show that tWM involves at least two components likely reflecting working memory capacity and strategic aspects of working memory, respectively. Reduced capacity for the left, rather than the right visual field is consistent with how the left and right visual fields are known to be represented in the two hemispheres. Remapping difficulties between visual fields are consistent with the limited information transfer across the corpus callosum. Finally, the impact of rightward saccades on working memory might be due to greater interference of the accompanying shifts of attention. Our results highlight the dynamic nature of transsaccadic working memory. |
Sebastián Moyano; Josué Rico-Picó; Ángela Conejero; Ángela Hoyo; María de los Ángeles Ballesteros-Duperón; M. Rosario Rueda Influence of the environment on the early development of attentional control Journal Article In: Infant Behavior and Development, vol. 71, pp. 1–17, 2023. @article{Moyano2023, The control of visual attention is key to learning and has a foundational role in the development of self-regulated behavior. Basic attention control skills emerge early in life and show a protracted development throughout childhood. Prior research suggests that attentional development is influenced by environmental factors in early and late childhood. However, much less is known about the impact of the early environment on emerging endogenous attention skills during infancy. In the current study we aimed to test the impact of parental socioeconomic status (SES) and home environment (chaos) on the emerging control of orienting in a sample of typically developing infants. A group of 142 (73 female) 6-month-old infants were longitudinally tested at 6, 9 (n = 122; 60 female) and 16–18 (n = 91; 50 female) months of age using the gap-overlap paradigm. Median saccade latency (mdSL) and disengagement failure (DF) were computed as dependent variables for both overlap and gap conditions. In addition, composite scores for a Disengagement Cost Index (DCI) and a Disengagement Failure Index (DFI) were computed from the mdSL and DF of each condition, respectively. Families reported SES and chaos in the first and last follow-up sessions. Using linear mixed models with maximum likelihood (ML) estimation, we found a longitudinal decrease in mdSL in the gap but not in the overlap condition, while DF decreased with age independently of the experimental condition. 
Concerning early environmental factors, an SES index, parental occupation, and chaos at 6 months showed negative correlations with DFI at 16–18 months, although in the former case the correlation was only marginally significant. Hierarchical regression models using ML estimation showed that both SES and chaos at 6 months significantly predicted a lower DFI at 16–18 months. Results show a longitudinal progression of endogenous orienting between infancy and toddlerhood. With age, increased endogenous control of orienting is displayed in contexts where visual disengagement is facilitated, whereas visual orienting involving attention disengagement under visual competition does not change with age. Moreover, these attentional mechanisms of endogenous control seem to be modulated by the individual's early experiences with the environment. |
Kelsey J. Mulder; Louis Williams; Matthew Lickiss; Alison Black; Andrew Charlton-Perez; Rachel McCloy; Eugene McSorley Understanding representations of uncertainty, an eye-tracking study-Part 1: The effect of anchoring Journal Article In: Geoscience Communication, vol. 6, no. 3, pp. 97–110, 2023. @article{Mulder2023, Geoscience communicators must think carefully about how uncertainty is represented and how users may interpret these representations. Doing so will help communicate risk more effectively, which can elicit appropriate responses. Communication of uncertainty is not just a geosciences problem; it came to the forefront during the COVID-19 pandemic, and the lessons learned from communication during the pandemic can be adopted across the geosciences as well. To test interpretations of environmental forecasts with uncertainty, a decision task survey was administered to 65 participants who saw different hypothetical forecast representations common to presentations of environmental data and forecasts: deterministic, spaghetti plot with and without a median line, fan plot with and without a median line, and box plot with and without a median line. While participants completed the survey, their eye movements were monitored with eye-tracking software. Participants' eye movements were anchored to the median line, not focusing on possible extreme values to the same extent as when no median line was present. Additionally, participants largely correctly interpreted extreme values from the spaghetti and fan plots, but misinterpreted extreme values from the box plot, perhaps because participants spent little time fixating on the key. These results suggest that anchoring lines, such as median lines, should be used only when users need to be guided to particular values and when extreme values are less important for data interpretation. 
Additionally, fan or spaghetti plots should be considered instead of box plots to reduce misinterpretation of extreme values. The role of expertise and changes in eye movements across the graph area and key are explored in more detail in the companion paper to this study (Williams et al., 2023; hereafter Part 2). |
Miranda J. Munoz; Rishabh Arora; Yessenia M. Rivera; Quentin H. Drane; Gian D. Pal; Leo Verhagen Metman; Sepehr B. Sani; Joshua M. Rosenow; Lisa C. Goelz; Daniel M. Corcos; Fabian J. David In: Frontiers in Human Neuroscience, vol. 17, pp. 1–12, 2023. @article{Munoz2023, Background: Antiparkinson medication and subthalamic nucleus deep brain stimulation (STN-DBS), two common treatments of Parkinson's disease (PD), effectively improve skeletomotor movements. However, evidence suggests that these treatments may have differential effects on eye and limb movements, although both movement types are controlled through the parallel basal ganglia loops. Objective: Using a task that requires both eye and upper limb movements, we aimed to determine the effects of medication and STN-DBS on eye and upper limb movement performance. Methods: Participants performed a visually-guided reaching task. We collected eye and upper limb movement data from participants with PD who were tested both OFF and ON medication (n = 34) or both OFF and ON bilateral STN-DBS while OFF medication (n = 11). We also collected data from older adult healthy controls (n = 14). Results: We found that medication increased saccade latency, while having no effect on reach reaction time (RT). Medication significantly decreased saccade peak velocity, while increasing reach peak velocity. We also found that bilateral STN-DBS significantly decreased saccade latency while having no effect on reach RT, and increased saccade and reach peak velocity. Finally, we found that there was a positive relationship between saccade latency and reach RT, which was unaffected by either treatment. Conclusion: These findings show that medication worsens saccade performance and benefits reaching performance, while STN-DBS benefits both saccade and reaching performance. 
We explore what the differential beneficial and detrimental effects on eye and limb movements suggest about the potential physiological changes occurring due to treatment. |
Tomoya Nakamura; Ikuya Murakami The moment of awareness influences the content of awareness in orientation repulsion Journal Article In: Consciousness and Cognition, vol. 116, pp. 1–12, 2023. @article{Nakamura2023, Although perceptual contents are shaped by neurally evolving processes of dynamic contextual modulation, it remains unclear how the content of awareness is determined. Here we quantified the visual illusion of orientation repulsion, wherein a target appears tilted away from the orientation of its surround, and examined whether its magnitude changed when awareness of the target was quickened by a preceding flanker. Independently of spatial cueing, repulsion was reduced when the flanker preceded the target by 100 ms compared with when they appeared simultaneously. We confirmed that the preceding flanker quickened awareness of a nearby target, relative to distant ones, by 40 ms. Furthermore, a preceding flanker located more than 7 degrees away from the target still evoked this reduction of repulsion. These findings imply that the content of awareness is determined by the temporal interaction of two distinct processes: one controls the moment of awareness, and the other represents the perceptual content. |