EyeLink Cognitive Publications
All EyeLink cognitive and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognitive or perception articles, please email us!
2018 |
Iiro P. Jääskeläinen; Yuri I. Alexandrov; Enrico Glerean; Janne Kauttonen; Mikko Sams; Elisa Ryyppö; Mareike Bacha-Trams; Emilia Broman; Minna Kauppila A drama movie activates brains of holistic and analytical thinkers differentially Journal Article In: Social Cognitive and Affective Neuroscience, vol. 13, no. 12, pp. 1293–1304, 2018. @article{Jaeaeskelaeinen2018, People socialized in different cultures differ in their thinking styles. Eastern-culture people view objects more holistically by taking context into account, whereas Western-culture people view objects more analytically by focusing on them at the expense of context. Here we studied whether participants, who have different thinking styles but live within the same culture, exhibit differential brain activity when viewing a drama movie. A total of 26 Finnish participants, who were divided into holistic and analytical thinkers based on self-report questionnaire scores, watched a shortened drama movie during functional magnetic resonance imaging. We compared intersubject correlation (ISC) of brain hemodynamic activity of holistic vs analytical participants across the movie viewings. Holistic thinkers showed significant ISC in more extensive cortical areas than analytical thinkers, suggesting that they perceived the movie in a more similar fashion. Significantly higher ISC was observed in holistic thinkers in occipital, prefrontal and temporal cortices. In analytical thinkers, significant ISC was observed in right-hemisphere fusiform gyrus, temporoparietal junction and frontal cortex. Since these results were obtained in participants with similar cultural background, they are less prone to confounds by other possible cultural differences. Overall, our results show how brain activity in holistic vs analytical participants differs when viewing the same drama movie. |
Todd Jackson; Lin Su; Yang Wang Effects of higher versus lower threat contexts on pain‐related attention biases: An eye‐tracking study Journal Article In: European Journal of Pain, vol. 22, no. 6, pp. 1113–1123, 2018. @article{Jackson2018a, Background: Threat is hypothesized to affect the degree to which pain captures attention but little is known about its impact on dynamic courses of attention towards pain. In this eye-tracking study, we evaluated pain-related visual attention biases during image pair presentations in comparatively lower versus higher threat conditions. Methods: Gaze biases of healthy adults (47 women, 35 men) were assessed during image presentation phases standardized across (1) a modified visual dot probe task featuring painful-neutral (pain) and neutral-neutral contrast (neutral) image pair blocks (lower threat context) and (2) an impending pain task wherein the same image pair blocks, respectively, cued potentially painful post-offset somatosensory stimuli (higher threat context) and their absence. |
Todd Jackson; Lin Su; Yang Wang Effects of higher versus lower threat contexts on pain-related visual attention biases: An eye-tracking study of chronic pain Journal Article In: Journal of Pain, vol. 19, no. 6, pp. 649–659, 2018. @article{Jackson2018, In this research, we examined effects of higher versus lower threat contexts on attention biases in more and less pain-fearful chronic pain subgroups via eye-tracking methodology. Within a mixed chronic pain sample (69 women, 29 men), biases in orienting and maintenance of visual attention were assessed during the standardized image pair presentation phase (2,000 ms) of a modified visual dot probe task featuring painful-neutral (P-N) image pairs (lower threat context) and a parallel task in which these P-N pairs cued potential pain (higher threat context). Across both tasks, participants more often oriented toward, gazed longer at, and made more unique fixations upon pain images during P-N pair presentations. Although trait-based fear of pain was not related to any gaze bias index in either task, between task analyses indicated the sample reported more state fear, directed their initial gaze less often, and displayed longer overall gaze durations toward pain images in the higher threat context in which P-N trials signaled potential pain. Results supported the threat interpretation model premise that persons with chronic pain have difficulty disengaging from moderately threatening visual painful cues. |
Alexandra Trani; Paul Verhaeghen Foggy windows: Pupillary responses during task preparation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 10, pp. 2235–2248, 2018. @article{Trani2018, We investigated pupil dilation in 96 subjects during task preparation and during a post-trial interval in a visual search task and an auditory working memory task. Completely informative difficulty cues (easy, medium, or hard) were presented right before task preparation to examine whether pupil dilation indicated advance mobilisation of attentional resources; functional magnetic resonance imaging (fMRI) studies have argued for the existence of such task preparation, and the literature shows that pupil dilation tracks attentional effort during task performance. We found, however, little evidence for such task preparation. In the working memory task, pupil size was identical across cues, and although pupil dilation in the visual search task tracked the cue, pupil dilation predicted subsequent performance in neither task. Pupil dilation patterns in the post-trial interval were more consistent with an effect of emotional reactivity. Our findings suggest that the mobilisation of attentional resources in the service of the task does not occur during the preparatory interval, but is delayed until the task itself is initiated. |
Eoin Travers; Chris D. Frith; Nicholas Shea Learning rapidly about the relevance of visual cues requires conscious awareness Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 8, pp. 1698–1713, 2018. @article{Travers2018, Humans have been shown to be capable of performing many cognitive tasks using information of which they are not consciously aware. This raises questions about what role consciousness actually plays in cognition. Here, we explored whether participants can learn cue-target contingencies in an attentional learning task when the cues were presented below the level of conscious awareness and how this differs from learning about conscious cues. Participants' manual (Experiment 1) and saccadic (Experiment 2) response speeds were influenced by both conscious and unconscious cues. However, participants were only able to adapt to reversals of the cue-target contingencies (Experiment 1) or changes in the reliability of the cues (Experiment 2) when consciously aware of the cues. Therefore, although visual cues can be processed unconsciously, learning about cues over a few trials requires conscious awareness of them. Finally, we discuss implications for cognitive theories of consciousness. |
Tawny Tsang; Natsuki Atagi; Scott P. Johnson Selective attention to the mouth is associated with expressive language skills in monolingual and bilingual infants Journal Article In: Journal of Experimental Child Psychology, vol. 169, pp. 93–109, 2018. @article{Tsang2018, Infants increasingly attend to the mouths of others during the latter half of the first postnatal year, and individual differences in selective attention to talking mouths during infancy predict verbal skills during toddlerhood. There is some evidence suggesting that trajectories in mouth-looking vary by early language environment, in particular monolingual or bilingual language exposure, which may have differential consequences in developing sensitivity to the communicative and social affordances of the face. Here, we evaluated whether 6- to 12-month-olds' mouth-looking is related to skills associated with concurrent social communicative development—including early language functioning and emotion discriminability. We found that attention to the mouth of a talking face increased with age but that mouth-looking was more strongly associated with concurrent expressive language skills than chronological age for both monolingual and bilingual infants. Mouth-looking was not related to emotion discrimination. These data suggest that selective attention to a talking mouth may be one important mechanism by which infants learn language regardless of home language environment. |
Tawny Tsang; Marissa Ogren; Yujia Peng; Bryan Nguyen; Kerri L. Johnson; Scott P. Johnson Infant perception of sex differences in biological motion displays Journal Article In: Journal of Experimental Child Psychology, vol. 173, pp. 338–350, 2018. @article{Tsang2018a, We examined mechanisms underlying infants' ability to categorize human biological motion stimuli from sex-typed walk motions, focusing on how visual attention to dynamic information in point-light displays (PLDs) contributes to infants' social category formation. We tested for categorization of PLDs produced by women and men by habituating infants to a series of female or male walk motions and then recording posthabituation preferences for new PLDs from the familiar or novel category (Experiment 1). We also tested for intrinsic preferences for female or male walk motions (Experiment 2). We found that infant boys were better able to categorize PLDs than were girls and that male PLDs were preferred overall. Neither of these effects was found to change with development across the observed age range (∼4–18 months). We conclude that infants' categorization of walk motions in PLDs is constrained by intrinsic preferences for higher motion speeds and higher spans of motion and, relatedly, by differences in walk motions produced by men and women. |
Philip Tseng; Mu-Chen Wang; Yu Hui Lo; Chi-Hung Juan Anodal and cathodal tDCS over the right frontal eye fields impacts spatial probability processing differently in pro- and anti-saccades Journal Article In: Frontiers in Neuroscience, vol. 12, pp. 421, 2018. @article{Tseng2018, Learning regularities that exist in the environment can help the visual system achieve optimal efficiency while reducing computational burden. Using a pro- and anti-saccade task, studies have shown that probabilistic information regarding spatial locations can be a strong modulator of frontal eye fields (FEF) activities and consequently alter saccadic behavior. One recent study has also shown that FEF activities can be modulated by transcranial direct current stimulation, where anodal tDCS facilitated prosaccades but cathodal tDCS prolonged antisaccades. These studies together suggest that location probability and tDCS can both alter FEF activities and oculomotor performance, yet how these two modulators interact with each other remains unclear. In this study, we applied anodal or cathodal tDCS over right FEF, and participants performed an interleaved pro- and anti-saccade task. Location probability was manipulated in prosaccade trials but not antisaccade trials. We observed that anodal tDCS over rFEF facilitated prosaccades toward low-probability locations but not to high-probability locations, whereas cathodal tDCS facilitated antisaccades away from the high-probability location (i.e., same location as the low-probability locations in prosaccades). These observed effects were specific to rFEF as tDCS over the SEF in a separate control experiment did not yield similar patterns. These effects were also more pronounced in low-performers who had slower saccade reaction time. 
Together, we conclude that (1) the overlapping spatial endpoint between prosaccades (i.e., toward the low-probability location) and antisaccades (i.e., away from the high-probability location) possibly suggests an endpoint-selective mechanism within right FEF, (2) anodal tDCS and location probability cannot be combined to produce a bigger facilitative effect, and (3) anodal rFEF tDCS works best on low-performers who had slower saccade reaction times. These observations are consistent with the homeostasis account of tDCS effect and FEF functioning. |
Inna Tsirlin; Linda Colpa; Herbert C. Goltz; Agnes M. F. Wong Visual search deficits in amblyopia Journal Article In: Journal of Vision, vol. 18, no. 4, pp. 1–16, 2018. @article{Tsirlin2018, Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia. |
Massimo Turatto; Francesca Bonetti; David Pascucci Filtering visual onsets via habituation: A context-specific long-term memory of irrelevant stimuli Journal Article In: Psychonomic Bulletin & Review, vol. 25, no. 3, pp. 1028–1034, 2018. @article{Turatto2018a, The fact that we are often immediately attracted by sudden visual onsets provides a clear advantage for our survival. However, how can we resist being continuously distracted by irrelevant repetitive onsets? Since the seminal work of Sokolov (1963), habituation of the orienting of attention has long been proposed as a possible filtering mechanism. Here, in two experiments, we provide novel evidence showing that (a) habituation of capture of focused visual attention relies on a stored representation of the distractor onsets in relation to their context, and (b) once formed, such a representation endures unchanged for weeks without any further exposure to the distractors. In agreement with the proposal of Wagner (1979) concerning the associative nature of habituation, the results of Experiment 1 suggest that habituation of attentional capture is context specific. Furthermore, the results of Experiment 2 show that to filter visual distractors our cognitive system uses long-lasting memories of the irrelevant information. Although distractor filtering can be implemented via top-down inhibitory control, neural and cognitive mechanisms underlying habituation provide a straightforward explanation for the reduced distraction obtained with training, thus working like an automatic filter that prevents irrelevant recurring stimuli from gaining access to higher stages of analysis. |
Massimo Turatto; Francesca Bonetti; David Pascucci; Leonardo Chelazzi Desensitizing the attention system to distraction while idling: A new latent learning phenomenon in the visual attention domain Journal Article In: Journal of Experimental Psychology: General, vol. 147, no. 12, pp. 1827–1850, 2018. @article{Turatto2018, For the good and the bad, the world around us is full of distraction. In particular, onset stimuli that appear abruptly in the scene grab attention, thus disrupting the ongoing task. Different cognitive mechanisms for distractor filtering have been proposed, but prevalent accounts share the idea that filtering is accomplished to shield target processing from interference. Here we provide novel evidence that challenges this view, as passive exposure to a repeating visual onset is sufficient to trigger learning-dependent mechanisms to filter the unwanted stimulation. In other words, our study shows that during passive exposure the cognitive system is capable of learning about the characteristics of the salient yet irrelevant stimulation and of reducing the responsiveness of the attention system to it, thus significantly decreasing the impact of the distractor upon start of an active task. However, although passive viewing efficiently attenuates the spatial capture of attention, a short-lived performance cost is found when the distractor is initially encountered within the context of the active task. This cost, which dissipates in a few trials, likely reflects the need to familiarize with the distractor, already seen during passive viewing, in the new context of the active task. Although top-down inhibitory signals can be applied to distractors for the successful completion of goal-directed behavior, our results emphasize the role of more automatic habituation mechanisms for distraction exclusion based on a neural model of the history of the irrelevant stimulation. |
Athina Tzovara; Christoph W. Korn; Dominik R. Bach Human Pavlovian fear conditioning conforms to probabilistic learning Journal Article In: PLoS computational biology, vol. 14, no. 8, pp. e1006243, 2018. @article{Tzovara2018, Learning to predict threat from environmental cues is a fundamental skill in changing environments. This aversive learning process is exemplified by Pavlovian threat conditioning. Despite a plethora of studies on the neural mechanisms supporting the formation of associations between neutral and aversive events, our computational understanding of this process is fragmented. Importantly, different computational models give rise to different and partly opposing predictions for the trial-by-trial dynamics of learning, for example expressed in the activity of the autonomic nervous system (ANS). Here, we investigate human ANS responses to conditioned stimuli during Pavlovian fear conditioning. To obtain precise, trial-by-trial, single-subject estimates of ANS responses, we build on a statistical framework for psychophysiological modelling. We then consider previously proposed non-probabilistic models, a simple probabilistic model, and non-learning models, as well as different observation functions to link learning models with ANS activity. Across three experiments, and both for skin conductance (SCR) and pupil size responses (PSR), a probabilistic learning model best explains ANS responses. Notably, SCR and PSR reflect different quantities of the same model: SCR track a mixture of expected outcome and uncertainty, while PSR track expected outcome alone. In summary, by combining psychophysiological modelling with computational learning theory, we provide systematic evidence that the formation and maintenance of Pavlovian threat predictions in humans may rely on probabilistic inference and includes estimation of uncertainty. This could inform theories of neural implementation of aversive learning. |
Hiroshi Ueda; Naotoshi Abekawa; Hiroaki Gomi In: PLoS ONE, vol. 13, no. 8, pp. e0201610, 2018. @article{Ueda2018, When the inside texture of a moving object moves, the perceived motion of the object is often distorted toward the direction of the texture's motion (motion-induced position shift), and such perceptual distortion accumulates while the object is watched, causing what is known as the curveball illusion. In a recent study, however, the accumulation of the position error was not observed in saccadic eye movements. Here, we examined whether the position of the illusory object is represented independently in the perceptual and saccadic systems. In the experiments, the stimulus of the curveball illusion was adopted to examine the temporal change in the position representation for saccadic eye movements and for perception by varying the elapsed time from the input of visual information to saccade onset and perceptual judgment, respectively. The results showed that the temporal accumulation of the motion-induced position shift is observed not only in perception but also in saccadic eye movements. In the saccade tasks, the landing positions of saccades gradually shifted to the illusory perceived position as the elapsed time from the target offset to the saccade “go” signal increased. Furthermore, in the perception task, shortening the time between the target offset and the perceptual judgment reduced the size of the illusion effect. Therefore, these results argue against the idea of dissociation between saccadic and perceptual localization of a moving object suggested in the previous study, in which saccades were measured in a rushed way while perceptual responses were measured without time constraint. Instead, the similar temporal trends of these effects imply a common or similar target representation for perception and eye movements which dynamically changes over the course of evidence accumulation. |
Metin Uengoer; John M. Pearce; Harald Lachnit; Stephan Koenig Context modulation of learned attention deployment Journal Article In: Learning and Behavior, vol. 46, no. 1, pp. 23–37, 2018. @article{Uengoer2018, In three experiments, we investigated the contextual control of attention in human discrimination learning. In each experiment, participants initially received discrimination training in which the cues from Dimension A were relevant in Context 1 but irrelevant in Context 2, whereas the cues from Dimension B were irrelevant in Context 1 but relevant in Context 2. In Experiment 1, the same cues from each dimension were used in Contexts 1 and 2, whereas in Experiments 2 and 3, the cues from each dimension were changed across contexts. In each experiment, participants were subsequently shifted to a transfer discrimination involving novel cues from either dimension, to assess the contextual control of attention. In Experiment 1, measures of eye gaze during the transfer discrimination revealed that Dimension A received more attention than Dimension B in Context 1, whereas the reverse occurred in Context 2. Corresponding results indicating the contextual control of attention were found in Experiments 2 and 3, in which we used the speed of learning (associability) as an indirect marker of learned attentional changes. Implications of our results for current theories of learning and attention are discussed. |
Matteo Valsecchi; Jan Koenderink; Andrea Doorn; Karl R. Gegenfurtner Prediction shapes peripheral appearance Journal Article In: Journal of Vision, vol. 18, no. 13, pp. 1–12, 2018. @article{Valsecchi2018, Peripheral perception is limited in terms of visual acuity, contrast sensitivity, and positional uncertainty. In the present study we used an image-manipulation algorithm (the Eidolon Factory) based on a formal description of the visual field as a tool to investigate how peripheral stimuli appear in the presence of such limitations. Observers were asked to match central and peripheral stimuli, both configurations of superimposed geometric shapes and patches of natural images, in terms of the parameters controlling the amplitude of the perturbation (reach) and the cross-scale similarity of the perturbation (coherence). We found that observers systematically tended to report the peripheral stimuli as having shorter reach and higher coherence. This means that their matches both were less distorted and had sharper edges relative to the actual stimulus. Overall, the results indicate that the way we see objects in our peripheral visual field is complemented by our assumptions about the way the same objects would appear if they were viewed foveally. |
Quan Wang; Lauren DiNicola; Perrine Heymann; Michelle Hampson; Katarzyna Chawarska Impaired value learning for faces in preschoolers with Autism Spectrum Disorder Journal Article In: Journal of the American Academy of Child and Adolescent Psychiatry, vol. 57, no. 1, pp. 33–40, 2018. @article{Wang2018l, Objective One of the common findings in autism spectrum disorder (ASD) is limited selective attention toward social objects, such as faces. Evidence from both human and nonhuman primate studies suggests that selection of objects for processing is guided by the appraisal of object values. We hypothesized that impairments in selective attention in ASD may reflect a disruption of a system supporting learning about object values in the social domain. Method We examined value learning in social (faces) and nonsocial (fractals) domains in preschoolers with ASD (n = 25) and typically developing (TD) controls (n = 28), using a novel value learning task implemented on a gaze-contingent eye-tracking platform consisting of value learning and a selective attention choice test. Results Children with ASD performed more poorly than TD controls on the social value learning task, but both groups performed similarly on the nonsocial task. Within-group comparisons indicated that value learning in TD children was enhanced on the social compared to the nonsocial task, but no such enhancement was seen in children with ASD. Performance in the social and nonsocial conditions was correlated in the ASD but not in the TD group. Conclusion The study provides support for a domain-specific impairment in value learning for faces in ASD, and suggests that, in ASD, value learning in social and nonsocial domains may rely on a shared mechanism. These findings have implications both for models of selective social attention deficits in autism and for identification of novel treatment targets. |
Shuo Wang Face size biases emotion judgment through eye movement Journal Article In: Scientific Reports, vol. 8, pp. 317, 2018. @article{Wang2018i, Faces are the most commonly used stimuli to study emotions. Researchers often manipulate the emotion contents and facial features to study emotion judgment, but rarely manipulate low-level stimulus features such as face sizes. Here, I investigated whether a mere difference in face size would cause differences in emotion judgment. Subjects discriminated emotions in fear-happy morphed faces. When subjects viewed larger faces, they had an increased judgment of fear and showed a higher specificity in emotion judgment, compared to when they viewed smaller faces. Concurrent high-resolution eye tracking further provided mechanistic insights: subjects had more fixations onto the eyes when they viewed larger faces whereas they had a wider dispersion of fixations when they viewed smaller faces. The difference in eye movement was present across fixations in serial order but independent of morph level, ambiguity level, or behavioral judgment. Together, this study not only suggested a link between emotion judgment and eye movement, but also showed importance of equalizing stimulus sizes when comparing emotion judgments. |
Shuo Wang; Adam N. Mamelak; Ralph Adolphs; Ueli Rutishauser Encoding of target detection during visual search by single neurons in the human brain Journal Article In: Current Biology, vol. 28, pp. 2058–2069, 2018. @article{Wang2018f, Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ~200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. |
Xi Wang; Sebastian Koch; Kenneth Holmqvist; Marc Alexa Tracking the gaze on objects in 3D: How do people really look at the bunny? Journal Article In: ACM Transactions on Graphics, vol. 37, no. 6, pp. 1–18, 2018. @article{Wang2018m, We provide the first large dataset of human fixations on physical 3D objects presented in varying viewing conditions and made of different materials. Our experimental setup is carefully designed to allow for accurate calibration and measurement. We estimate a mapping from the pair of pupil positions to 3D coordinates in space and register the presented shape with the eye tracking setup. By modeling the fixated positions on 3D shapes as a probability distribution, we analyze the similarities among different conditions. The resulting data indicate that salient features depend on the viewing direction. Stable features across different viewing directions seem to be connected to semantically meaningful parts. We also show that it is possible to estimate the gaze density maps from view-dependent data. The dataset provides the necessary ground truth data for computational models of human perception in 3D. |
Xiangyan Wang; Leilei Zhong; Jiwei Si; Weixing Yang; Yalin Yang Cognitive switching affects arithmetic strategy selection: Evidence from eye-gaze and behavioral measures Journal Article In: Anales de Psicologia, vol. 34, no. 3, pp. 571–579, 2018. @article{Wang2018k, Although many studies of cognitive switching have been conducted, little is known about whether and how cognitive switching affects individuals' use of arithmetic strategies. We used estimation and numerical comparison tasks within the operand recognition paradigm and the choice/no-choice paradigm to explore the effects of cognitive switching on the process of arithmetic strategy selection. Results showed that individuals' performance in the baseline task was superior to that in the switching task. Presentation mode and cognitive switching clearly influenced eye-gaze patterns during strategy selection, with longer fixation durations in the number presentation mode than in the clock presentation mode. Furthermore, the number of fixations was greater in the switching task than it was in the baseline task. These results indicate that the effects of cognitive switching on arithmetic strategy selection are clearly constrained by the manner in which numbers are presented. |
Michael L. Waskom; Janeen Asfour; Roozbeh Kiani Perceptual insensitivity to higher-order statistical moments of coherent random dot motion Journal Article In: Journal of Vision, vol. 18, no. 6, pp. 1–10, 2018. @article{Waskom2018, When the visual system analyzes distributed patterns of sensory inputs, what features of those distributions does it use? It has been previously demonstrated that higher-order statistical moments of luminance distributions influence perception of static surfaces and textures. Here, we tested whether the brain also represents higher-order moments of dynamic stimuli. We constructed random dot kinematograms, where dots moved according to probability distributions that selectively differed in terms of their mean, variance, skewness, or kurtosis. When viewing these stimuli, human observers were sensitive to the mean direction of coherent motion and to the variance of dot displacement angles, but they were insensitive to skewness and kurtosis. Observer behavior accorded with a model of directional motion energy, suggesting that information about higher-order moments is discarded early in the visual processing hierarchy. These results demonstrate that use of higher-order moments is not a general property of visual perception. |
Michael L. Waskom; Roozbeh Kiani Decision making through integration of sensory evidence at prolonged timescales Journal Article In: Current Biology, vol. 28, pp. 3850–3856, 2018. @article{Waskom2018a, When multiple pieces of information bear on a decision, the best approach is to combine the evidence provided by each one. Evidence integration models formalize the computations underlying this process [1–3], explain human perceptual discrimination behavior [4–9], and correspond to neuronal responses elicited by discrimination tasks [10–14]. These findings suggest that evidence integration is key to understanding the neural basis of decision making [15–18]. But while evidence integration has most often been studied with simple tasks that limit deliberation to relatively brief periods, many natural decisions unfold over much longer durations. Neural network models imply acute limitations on the timescale of evidence integration [19–23], and it is currently unknown whether existing computational insights can generalize beyond rapid judgments. Here, we introduce a new psychophysical task and report model-based analyses of human behavior that demonstrate evidence integration at long timescales. Our task requires probabilistic inference using brief samples of visual evidence that are separated in time by long and unpredictable gaps. We show through several quantitative assays how decision making can approximate a normative integration process that extends over tens of seconds without accruing significant memory leak or noise. These results support the generalization of evidence integration models to a broader class of behaviors while posing new challenges for models of how these computations are implemented in biological networks. |
Hanna Weichselbaum; Ulrich Ansorge Bottom-up attention capture with distractor and target singletons defined in the same (color) dimension is not a matter of feature uncertainty Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 6, pp. 1350–1361, 2018. @article{Weichselbaum2018a, In visual search, attention capture by an irrelevant color-singleton distractor in another feature dimension than the target is dependent on whether or not the distractor changes its feature: Capture is present if the irrelevant color distractor can take on different features across trials, but absent if the distractor takes on only one feature throughout all trials. This influence could be due to down-weighting of the entire color map. Here we tested whether a similar effect could also be brought about by down-weighting of specific color channels within the same maps. We investigated whether a similar dependence of capture on color certainty might hold true if the distractor were defined in the same (color) dimension as the target. At odds with this possibility, in the first and third blocks—in which feature uncertainty was absent—an irrelevant distractor of a certain color captured attention. In addition, in a second block, varying the distractor color created feature uncertainty, but this did not increase capture. Repeating the exact same procedure with the same participants after one week confirmed the stability of the results. The present study showed that a color distractor presented in the same (color) dimension as the target captures attention independent of feature uncertainty. Thus, the down-weighting of single irrelevant color channels within the same feature map used for target search is not a matter of feature uncertainty. |
Hanna Weichselbaum; Christoph Huber-Huber; Ulrich Ansorge Attention capture is temporally stable: Evidence from mixed-model correlations Journal Article In: Cognition, vol. 180, pp. 206–224, 2018. @article{Weichselbaum2018, Studies on domain-specific expertise in visual attention, on its cognitive enhancement, or its pathology require individually reliable measurement of visual attention. Yet, the reliability of the most widely used reaction time (RT) differences measuring visual attention is in doubt or unknown. Therefore, we used novel methods of analyses based on linear mixed models (LMMs) and tested the temporal stability, as one index of reliability, of three attentional RT effects in the popular additional-singleton research protocol: (1) bottom-up, (2) top-down, and (3) memory-driven (intertrial priming) influences on attention capture effects. Participants searched for a target having one specific color in most (Exp. 1) or all (Exp. 2) trials. Together with the target, in half (Exp. 1) or two thirds (Exp. 2) of the trials, a distractor was presented that stood out by the target's (Exp. 1) or a target-similar (Exp. 2) color, therefore matching a top-down search set, or by a different color, capturing attention in a bottom-up way. Also, matching distractors were primed or unprimed by the target color of the preceding trial. We analyzed all three attention capture effects in manual and target fixation RTs at two different times, separated by one (Exp. 1 and 2) or four weeks (only in Exp. 1). Random slope correlations of LMMs and standard correlation coefficients computed on individual participants' effect scores showed that RT capture effects were in general temporally stable for both time intervals and dependent variables. These results demonstrate the test-retest reliability necessary for looking at individual differences of attentional RT effects. |
Hannah Weinberg-Wolf; Nicholas A. Fagan; George M. Anderson; Marios Tringides; Olga Dal Monte; Steve W. C. Chang The effects of 5-hydroxytryptophan on attention and central serotonin neurochemistry in the rhesus macaque Journal Article In: Neuropsychopharmacology, vol. 43, no. 7, pp. 1589–1594, 2018. @article{WeinbergWolf2018, Psychiatric disorders, particularly depression and anxiety, are often associated with impaired serotonergic function. However, serotonergic interventions yield inconsistent effects on behavioral impairments. To better understand serotonin's role in these pathologies, we investigated the role of serotonin in a behavior frequently impaired in depression and anxiety, attention. In this study, we used a quantitative, repeated, within-subject design to test how L-5-hydroxytryptophan (5-HTP), the immediate serotonin precursor, modulates central serotoninergic function and attention in macaques. We observed that intramuscular 5-HTP administration increased cisternal cerebrospinal fluid (CSF) 5-HTP and serotonin. In addition, individuals' baseline looking duration, during saline sessions, predicted the direction and magnitude in which 5-HTP modulated attention. We found that 5-HTP decreased looking duration in animals with high baseline attention, but increased looking duration in low baseline attention animals. Furthermore, individual differences in 5-HTP's effects were also reflected in how engaged individuals were in the task and how they allocated attention to salient facial features - the eyes and mouth - of stimulus animals. However, 5-HTP constricted pupil size in all animals, suggesting that the bi-directional effects of 5-HTP cannot be explained by serotonin-mediated changes in autonomic arousal. Critically, high and low baseline attention animals exhibited different baseline CSF concentrations of 5-HTP and serotonin, an index of extracellular functionally active serotonin. 
Thus, our results suggest that baseline central serotonergic functioning may underlie and predict variation in serotonin's effects on cognitive operation. Our findings may help inform serotonin's role in psychopathology and help clinicians predict how serotonergic interventions will influence pathologies. |
Wen Wen; Yin Hou; Sheng Li Memory guidance in distractor suppression is governed by the availability of cognitive control Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 5, pp. 1157–1168, 2018. @article{Wen2018, Information stored in the memory systems can affect visual search. Previous studies have shown that holding the to-be-ignored features of distractors in working memory (WM) could accelerate target selection. However, such facilitation effect was only observed when the cued to-be-ignored features remained unchanged within an experimental block (i.e., the fixed cue condition). No search benefit was obtained if the to-be-ignored features varied from trial to trial (i.e., the varied cue condition). In the present study, we conducted three behavioral experiments to investigate whether the WM and long-term memory (LTM) representations of the to-be-ignored features could facilitate visual search in the fixed cue (Experiment 1) and varied cue (Experiments 2 and 3) conditions. Given the importance of the processing time of cognitive control in distractor suppression, we divided visual search trials into five quintiles based on their reaction times (RTs) and examined the temporal characteristics of the suppression effect. Results showed that both the WM and LTM representations of the to-be-ignored features could facilitate distractor suppression in the fixed cue condition, and the facilitation effects were evident across the quintiles in the RT distribution. However, in the varied cue condition, the RT benefits of the WM-matched distractors occurred only in the trials with the longest RTs, whereas no advantage of the LTM-matched distractors was observed. These results suggest that the effective WM-guided distractor suppression depends on the availability of cognitive control and the LTM-guided suppression occurs only if sufficient WM resource is accessible by LTM reactivation. |
Rick Vliet; Zeb D. Jonker; Suzanne C. Louwen; Marco Heuvelman; Linda Vreede; Gerard M. Ribbers; Chris I. De Zeeuw; Opher Donchin; Ruud W. Selles; Josef N. Geest; Maarten A. Frens Cerebellar transcranial direct current stimulation interacts with BDNF Val66Met in motor learning Journal Article In: Brain Stimulation, vol. 11, no. 4, pp. 759–771, 2018. @article{Vliet2018, Background: Cerebellar transcranial direct current stimulation has been reported to enhance motor associative learning and motor adaptation, holding promise for clinical application in patients with movement disorders. However, behavioral benefits from cerebellar tDCS have been inconsistent. Objective: Identifying determinants of treatment success is necessary. BDNF Val66Met is a candidate determinant, because the polymorphism is associated with motor skill learning and BDNF is thought to mediate tDCS effects. Methods: We undertook two cerebellar tDCS studies in subjects genotyped for BDNF Val66Met. Subjects performed an eyeblink conditioning task and received sham, anodal or cathodal tDCS (N = 117, between-subjects design) or a vestibulo-ocular reflex adaptation task and received sham and anodal tDCS (N = 51 subjects, within-subjects design). Performance was quantified as a learning parameter from 0 to 100%. We investigated (1) the distribution of the learning parameter with mixture modeling presented as the mean (M), standard deviation (S) and proportion (P) of the groups, and (2) the role of BDNF Val66Met and cerebellar tDCS using linear regression presented as the regression coefficients (B) and odds ratios (OR) with equally-tailed intervals (ETIs). Results: For the eyeblink conditioning task, we found distinct groups of learners (M = 67.2%; S = 14.7%; P = 61.6%) and non-learners (M = 14.2%; S = 8.0%; P = 38.4%). Carriers of the BDNF Val66Met polymorphism were more likely to be learners (OR = 2.7 [1.2, 6.2]). 
Within the group of learners, anodal tDCS supported eyeblink conditioning in BDNF Val66Met non-carriers (B = 11.9%, 95% ETI = [0.8, 23.0]%), but not in carriers (B = 1.0%, 95% ETI = [-10.2, 12.1]%). For the vestibulo-ocular reflex adaptation task, we found no effect of BDNF Val66Met (B = -2.0%, 95% ETI = [-8.7, 4.7]%) or anodal tDCS in either carriers (B = 3.4%, 95% ETI = [-3.2, 9.5]%) or non-carriers (B = 0.6%, 95% ETI = [-3.4, 4.8]%). Finally, we performed additional saccade and visuomotor adaptation experiments (N = 72) to investigate the general role of BDNF Val66Met in cerebellum-dependent learning and found no difference between carriers and non-carriers for both saccade (B = 1.0%, 95% ETI = [-8.6, 10.6]%) and visuomotor adaptation (B = 2.7%, 95% ETI = [-2.5, 7.9]%). Conclusions: The specific role for BDNF Val66Met in eyeblink conditioning, but not vestibulo-ocular reflex adaptation, saccade adaptation or visuomotor adaptation could be related to dominance of the role of simple spike suppression of cerebellar Purkinje cells with a high baseline firing frequency in eyeblink conditioning. Susceptibility of non-carriers to anodal tDCS in eyeblink conditioning might be explained by a relatively larger effect of tDCS-induced subthreshold depolarization in this group, which might increase the spontaneous firing frequency up to the level of that of the carriers. |
Daniel Marten Es; Jan Theeuwes; Tomas Knapen Spatial sampling in human visual cortex is modulated by both spatial and feature-based attention Journal Article In: eLife, vol. 7, pp. 1–28, 2018. @article{Es2018, Spatial attention changes the sampling of visual space. Behavioral studies suggest that feature-based attention modulates this resampling to optimize the attended feature's sampling. We investigate this hypothesis by estimating spatial sampling in visual cortex while independently varying both feature-based and spatial attention. Our results show that spatial and feature-based attention interacted: resampling of visual space depended on both the attended location and feature (color vs. temporal frequency). This interaction occurred similarly throughout visual cortex, regardless of an area's overall feature preference. However, the interaction did depend on spatial sampling properties of voxels that prefer the attended feature. These findings are parsimoniously explained by variations in the precision of an attentional gain field. Our results demonstrate that the deployment of spatial attention is tailored to the spatial sampling properties of units that are sensitive to the attended feature. |
Elle Heusden; Martin Rolfs; Patrick Cavanagh; Hinze Hogendoorn Motion extrapolation for eye movements predicts perceived motion-induced position shifts Journal Article In: Journal of Neuroscience, vol. 38, no. 38, pp. 8243–8250, 2018. @article{Heusden2018, Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between "vision for action" and "vision for perception" for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception. |
Nathalie Van Humbeeck; Radha Nila Meghanathan; Johan Wagemans; Cees Leeuwen; Andrey R. Nikolaev Presaccadic EEG activity predicts visual saliency in free-viewing contour integration Journal Article In: Psychophysiology, vol. 55, no. 12, pp. e13267, 2018. @article{VanHumbeeck2018, While viewing a scene, the eyes are attracted to salient stimuli. We set out to identify the brain signals controlling this process. In a contour integration task, in which participants searched for a collinear contour in a field of randomly oriented Gabor elements, a previously established model was applied to calculate a visual saliency value for each fixation location. We studied brain activity related to the modeled saliency values, using coregistered eye tracking and EEG. To disentangle EEG signals reflecting salience in free viewing from overlapping EEG responses to sequential eye movements, we adopted generalized additive mixed modeling (GAMM) to single epochs of saccade‐related EEG. We found that, when saliency at the next fixation location was high, amplitude of the presaccadic EEG activity was low. Since presaccadic activity reflects covert attention to the saccade target, our results indicate that larger attentional effort is needed for selecting less salient saccade targets than more salient ones. This effect was prominent in contour‐present conditions (half of the trials), but ambiguous in the contour‐absent condition. Presaccadic EEG activity may thus be indicative of bottom‐up factors in saccade guidance. The results underscore the utility of GAMM for EEG—eye movement coregistration research. |
Koen Lith; Dick Johan Veltman; Moran Daniel Cohn; Louise Else Pape; Marieke Eleonora Akker-Nijdam; Amanda Wilhelmina Geertruida Loon; Pierre Bet; Guido Alexander Wingen; Wim Brink; Theo Doreleijers; Arne Popma Effects of methylphenidate during fear learning in antisocial adolescents: A randomized controlled fMRI trial Journal Article In: Journal of the American Academy of Child and Adolescent Psychiatry, vol. 57, no. 12, pp. 934–943, 2018. @article{Lith2018, Objective: Although the neural underpinnings of antisocial behavior have been studied extensively, research on pharmacologic interventions targeting specific neural mechanisms remains sparse. Hypoactivity of the amygdala and ventromedial prefrontal cortex (vmPFC) has been reported in antisocial adolescents, which could account for deficits in fear learning (amygdala) and impairments in decision making (vmPFC), respectively. Limited clinical research suggests positive effects of methylphenidate, a dopamine agonist, on antisocial behavior in adolescents. Dopamine is a key neurotransmitter involved in amygdala and vmPFC functioning. The objective of this study was to investigate whether methylphenidate targets dysfunctions in these brain areas in adolescents with antisocial behavior. Method: A group of 42 clinically referred male adolescents (14–17 years old) with a disruptive behavior disorder performed a fear learning/reversal paradigm in a randomized double-blinded placebo-controlled pharmacologic functional magnetic resonance imaging study. Participants with disruptive behavior disorder were randomized to receive a single dose of methylphenidate 0.3 to 0.4 mg/kg (n = 22) or placebo (n = 20) and were compared with 21 matched healthy controls not receiving medication. 
Results: In a region-of-interest analysis of functional magnetic resonance imaging data during fear learning, the placebo group showed hyporeactivity of the amygdala compared with healthy controls, whereas amygdala reactivity was normalized in the methylphenidate group. There were no group differences in vmPFC reactivity during fear reversal learning. Whole-brain analyses showed no group differences. Conclusion: These findings suggest that methylphenidate is a promising pharmacologic intervention for youth antisocial behavior that could restore amygdala functioning. |
Anouk Mariette Loon; Katya Olmos-Solis; Johannes J. Fahrenfort; Christian N. L. Olivers Current and future goals are represented in opposite patterns in object-selective cortex Journal Article In: eLife, vol. 7, pp. 1–25, 2018. @article{Loon2018, Adaptive behavior requires the separation of current from future goals in working memory. We used fMRI of object-selective cortex to determine the representational (dis)similarities of memory representations serving current and prospective perceptual tasks. Participants remembered an object drawn from three possible categories as the target for one of two consecutive visual search tasks. A cue indicated whether the target object should be looked for first (currently relevant), second (prospectively relevant), or if it could be forgotten (irrelevant). Prior to the first search, representations of current, prospective and irrelevant objects were similar, with strongest decoding for current representations compared to prospective (Experiment 1) and irrelevant (Experiment 2). Remarkably, during the first search, prospective representations could also be decoded, but revealed anti-correlated voxel patterns compared to currently relevant representations of the same category. We propose that the brain separates current from prospective memories within the same neuronal ensembles through opposite representational patterns. |
Joanne C. Van Slooten; Sara Jahfari; Tomas Knapen; Jan Theeuwes How pupil responses track value-based decision-making during and after reinforcement learning Journal Article In: PLoS Computational Biology, vol. 14, no. 11, pp. e1006632, 2018. @article{VanSlooten2018, Cognition can reveal itself in the pupil, as latent cognitive processes map onto specific pupil responses. For instance, the pupil dilates when we make decisions and these pupil size fluctuations reflect decision-making computations during and after a choice. Surprisingly little is known, however, about how pupil responses relate to decisions driven by the learned value of stimuli. This understanding is important, as most real-life decisions are guided by the outcomes of earlier choices. The goal of this study was to investigate which cognitive processes the pupil reflects during value-based decision-making. We used a reinforcement learning task to study pupil responses during value-based decisions and subsequent decision evaluations, employing computational modeling to quantitatively describe the underlying cognitive processes. We found that the pupil closely tracks reinforcement learning processes independently across participants and across trials. Prior to choice, the pupil dilated as a function of trial-by-trial fluctuations in value beliefs about the to-be-chosen option and predicted an individual's tendency to exploit high value options. After feedback a biphasic pupil response was observed, the amplitude of which correlated with participants' learning rates. Furthermore, across trials, early feedback-related dilation scaled with value uncertainty, whereas later constriction scaled with signed reward prediction errors. These findings show that pupil size fluctuations can provide detailed information about the computations underlying value-based decisions and the subsequent updating of value beliefs. 
As these processes are affected in a host of psychiatric disorders, our results indicate that pupillometry can be used as an accessible tool to non-invasively study the processes underlying ongoing reinforcement learning in the clinic. |
André Vandierendonck; Maaike Loncke; Robert J. Hartsuiker; Timothy Desmet The role of executive control in resolving grammatical number conflict in sentence comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 3, pp. 759–778, 2018. @article{Vandierendonck2018, In sentences with a complex subject noun phrase, like “The key to the cabinets is lost”, the grammatical number of the head noun (key) may be the same or different from that of the modifier noun phrase (cabinets). When the number is the same, comprehension is usually easier than when it is different. Grammatical number computation may occur while processing the modifier noun (integration phase) or while processing the verb (checking phase). We investigated at which phase number conflict and plausibility of the modifier noun as subject for the verb affect processing, and we imposed a gaze-contingent tone discrimination task in either phase to test whether number computation involves executive control. At both phases, gaze durations were longer when a concurrent tone task was present. Additionally, at the integration phase, gaze durations were longer under number conflict, and this effect was enhanced by the presence of a tone task, whereas no effects of plausibility of the modifier were observed. The finding that the effect of number match was larger under load shows that computation of the grammatical number of the complex noun phrase requires executive control in the integration phase, but not in the checking phase. |
Caroline Vass; Dan Rigby; Kelly Tate; Andrew J. Stewart; Katherine Payne An exploratory application of eye-tracking methods in a discrete choice experiment Journal Article In: Medical Decision Making, vol. 38, no. 6, pp. 658–672, 2018. @article{Vass2018, Background. Discrete choice experiments (DCEs) are increasingly used to elicit preferences for benefit-risk tradeoffs. The primary aim of this study was to explore how eye-tracking methods can be used to understand DCE respondents' decision-making strategies. A secondary aim was to explore if the presentation and communication of risk affected respondents' choices. Method. Two versions of a DCE were designed to understand the preferences of female members of the public for breast screening that varied in how risk attributes were presented. Risk was communicated as either 1) percentages or 2) icon arrays and percentages. Eye-tracking equipment recorded eye movements 1000 times a second. A debriefing survey collected sociodemographics and self-reported attribute nonattendance (ANA) data. A heteroskedastic conditional logit model analyzed DCE data. Eye-tracking data on pupil size, direction of motion, and total visual attention (dwell time) to predefined areas of interest were analyzed using ordinary least squares regressions. Results. Forty women completed the DCE with eye-tracking. There was no statistically significant difference in attention (fixations) to attributes between the risk communication formats. Respondents completing either version of the DCE with the alternatives presented in columns made more horizontal (left-right) saccades than vertical (up-down). Eye-tracking data confirmed self-reported ANA to the risk attributes with a 40% reduction in mean dwell time to the "probability of detecting a cancer" (P = 0.001) and a 25% reduction to the "risk of unnecessary follow-up" (P = 0.008). Conclusion. 
This study is one of the first to show how eye-tracking can be used to understand responses to a health care DCE and highlighted the potential impact of risk communication on respondents' decision-making strategies. The results suggested self-reported ANA to cost attributes may not be reliable. |
Maryam Vaziri-Pashkam; JohnMark Taylor; Yaoda Xu Spatial frequency tolerant visual object representations in the human ventral and dorsal visual processing pathways Journal Article In: Journal of Cognitive Neuroscience, vol. 31, no. 1, pp. 49–63, 2018. @article{VaziriPashkam2018, Primate ventral and dorsal visual pathways both contain visual object representations. Dorsal regions receive more input from magnocellular system while ventral regions receive inputs from both magnocellular and parvocellular systems. Due to potential differences in the spatial sensitivities of magnocellular and parvocellular systems, object representations in ventral and dorsal regions may differ in how they represent visual input from different spatial scales. To test this prediction, we asked observers to view blocks of images from six object categories, shown in full spectrum, high spatial frequency (SF), or low SF. We found robust object category decoding in all SF conditions as well as SF decoding in nearly all the early visual, ventral, and dorsal regions examined. Cross-SF decoding further revealed that object category representations in all regions exhibited substantial tolerance across the SF components. No difference between ventral and dorsal regions was found in their preference for the different SF components. Further comparisons revealed that, whereas differences in the SF component separated object category representations in early visual areas, such a separation was much smaller in downstream ventral and dorsal regions. In those regions, variations among the object categories played a more significant role in shaping the visual representational structures. Our findings show that ventral and dorsal regions are similar in how they represent visual input from different spatial scales and argue against a dissociation of these regions based on differential sensitivity to different SFs. |
Laurent Vercueil; Anne Guérin-Dugué; Emmanuelle Kristensen; Anna Tcherkassof; Bertrand Rivet; Raphaëlle N. Roy Temporal dynamics of natural static emotional facial expressions decoding: A study using event- and eye fixation-related potentials Journal Article In: Frontiers in Psychology, vol. 9, pp. 1190, 2018. @article{Vercueil2018, This study aims at examining the precise temporal dynamics of the emotional facial decoding as it unfolds in the brain, according to the emotions displayed. To characterize this processing as it occurs in ecological settings, we focused on unconstrained visual explorations of natural emotional faces (i.e., free eye movements). The General Linear Model (GLM; Smith and Kutas, 2015a,b; Kristensen et al., 2017a) enables such a depiction. It allows deconvolving adjacent overlapping responses of the eye fixation-related potentials (EFRPs) elicited by the subsequent fixations and the event-related potentials (ERPs) elicited at the stimuli onset. Nineteen participants were displayed with spontaneous static facial expressions of emotions (Neutral, Disgust, Surprise, and Happiness) from the DynEmo database (Tcherkassof et al., 2013). Behavioral results on participants' eye movements show that the usual diagnostic features in emotional decoding (eyes for negative facial displays and mouth for positive ones) are consistent with the literature. The impact of emotional category on both the ERPs and the EFRPs elicited by the free exploration of the emotional faces is observed upon the temporal dynamics of the emotional facial expression processing. Regarding the ERP at stimulus onset, there is a significant emotion-dependent modulation of the P2–P3 complex and LPP components' amplitude at the left frontal site for the ERPs computed by averaging. Yet, the GLM reveals the impact of subsequent fixations on the ERPs time-locked on stimulus onset. Results are also in line with the valence hypothesis. 
The observed differences between the two estimation methods (Average vs. GLM) suggest the predominance of the right hemisphere at the stimulus onset and the implication of the left hemisphere in the processing of the information encoded by subsequent fixations. Concerning the first EFRP, the Lambda response and the P2 component are modulated by the emotion of surprise compared to the neutral emotion, suggesting an impact of high-level factors, in parieto-occipital sites. Moreover, no difference is observed on the second and subsequent EFRP. Taken together, the results stress the significant gain obtained in analyzing the EFRPs using the GLM method and pave the way toward efficient ecological emotional dynamic stimuli analyses. |
Ashika Verghese; Jason B. Mattingley; Phoebe E. Palmer; Paul E. Dux From eyes to hands: Transfer of learning in the Simon task across motor effectors Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 1, pp. 193–210, 2018. @article{Verghese2018, Inhibition of irrelevant and conflicting information and responses is crucial for goal-directed behaviour and adaptive functioning. In the Simon task, for example, responses are slowed if their mappings are spatially incongruent with stimuli that must be discriminated on a nonspatial dimension. Previous work has shown that practice with incongruent spatial mappings can reduce or even reverse the Simon effect. We asked whether such practice transfers between the manual and oculomotor systems and if so to what extent this occurs across a range of behavioural tasks. In two experiments, one cohort of participants underwent anti-saccade training, during which they repeatedly inhibited the reflexive impulse to look toward a briefly presented target. Additionally, two active-control training groups were included, in which participants either trained on Pro-saccade or Fixation training regimens. In Experiment 1, we probed whether the Simon effect and another inhibitory paradigm, the Stroop task, showed differential effects after training. In Experiment 2, we included a larger battery of inhibitory tasks (Simon, Stroop, flanker and stop-signal) and noninhibitory control measures (multitasking and visual search) to assess the limits of transfer. All three training regimens led to behavioural improvements in the trained-upon task, but only the anti-saccade training group displayed benefits that transferred to the manual response modality. This transfer of training benefit replicated across the two experiments but was restricted to the Simon effect. 
Evidence for transfer of inhibition training across motor systems offers important insights into the nature of stimulus-response representations and their malleability. |
Peter Veto; Immo Schütz; Wolfgang Einhäuser Continuous flash suppression: Manual action affects eye movements but not the reported percept Journal Article In: Journal of Vision, vol. 18, no. 3, pp. 1–10, 2018. @article{Veto2018, Diverse paradigms, including ambiguous stimuli and mental imagery, have suggested a shared representation between motor and perceptual domains. We examined the effects of manual action on ambiguous perception in a continuous flash suppression (CFS) experiment. Specifically, we asked participants to try to perceive a suppressed grating while rotating a manipulandum. In one condition, the grating's motion was fully controlled by the manipulandum movement; in another condition the coupling was weak; and in a third condition, no movement was executed. We found no effect of the movement condition on the subjectively reported visibility of the grating, which is in contrast to previous studies that allowed for more top-down influence. However, we did observe an effect on eye movements: the gain of the optokinetic nystagmus induced by the grating was modulated by its coupling to the manual movement. Our results (a) indicate that action-to-perception transfer can occur on different levels of perceptual organization, (b) demonstrate that CFS involves the shared representations between action and perception differently than paradigms used in earlier studies, and (c) highlight the importance of objective measures beyond subjective report when studying how action affects perception and awareness. |
Kasper Vinken; Hans P. Op de Beeck; Rufin Vogels Face repetition probability does not affect repetition suppression in macaque inferotemporal cortex Journal Article In: Journal of Neuroscience, vol. 38, no. 34, pp. 7492–7504, 2018. @article{Vinken2018, Repetition suppression, which refers to reduced neural activity for repeated stimuli, is typically explained by bottom-up or local adaptation mechanisms. However, recent theories have emphasized the role of top-down processes, suggesting that this response reduction reflects the fulfillment of perceptual expectations. To support this, an influential human functional magnetic resonance imaging (fMRI) study showed that the magnitude of suppression is modulated by the probability of a repetition. No such repetition probability effect was found in macaque inferior temporal (IT) cortex for spiking activity, despite the presence of repetition suppression. Contrary to the human fMRI studies that showed an effect of repetition probability, the macaque single unit study employed a large variety of unfamiliar stimuli and the monkeys were not required to attend the stimuli. Here, as in the human fMRI studies, we employed faces as stimuli and made the monkeys attend to the stimulus content. We simultaneously recorded spiking activity and local field potentials (LFPs) in the middle lateral face patch (ML) of one monkey (male), and a face-responsive region of another (female). While we observed significant repetition suppression of spiking activity and high gamma band LFPs in both animals, there were no effects of repetition probability, even when repetitions were task-relevant and repetition probability affected behavioral decisions. In conclusion, despite the use of face stimuli and a stimulus-related task, no neural signature of repetition probability was present for faces in a face responsive patch of macaque IT. This further challenges a general perceptual expectation account of repetition suppression. |
Jordana S. Wynn; Rosanna K. Olsen; Malcolm A. Binns; Bradley R. Buchsbaum; Jennifer D. Ryan Fixation reinstatement supports visuospatial memory in older adults Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 44, no. 7, pp. 1119–1127, 2018. @article{Wynn2018, Research using eye movement monitoring suggests that recapitulating the pattern of eye movements made during stimulus encoding at subsequent retrieval supports memory by reinstating the spatial layout of the encoded stimulus. In the present study, the authors investigated whether recapitulation of encoding fixations during a poststudy, stimulus-free delay period—an effect that has been previously linked to memory maintenance in younger adults— can support mnemonic performance in older adults. Older adults showed greater delay-period fixation reinstatement than younger adults, and this reinstatement supported age-equivalent performance on a subsequent visuospatial-memory-based change detection task, whereas in younger adults, the performance-enhancing effects of fixation reinstatement increased with task difficulty. Taken together, these results suggest that fixation reinstatement might reflect a compensatory response to increased cognitive load. The present findings provide novel evidence of compensatory fixation reinstatement in older adults and demonstrate the utility of eye movement monitoring for aging and memory research. Public Significance Statement Eye movements can be used to boost memory. Here, we show that when asked to remember the locations of objects within a scene, older adults will spontaneously rehearse the locations by looking with their eyes at the spaces that had been previously occupied by those objects. This gaze pattern supports subsequent memory performance. This study enhances our understanding of the role eye movements play in memory and establishes eye-movement monitoring as a useful method in aging research. |
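A simple way to quantify the fixation reinstatement described in this abstract is the proportion of delay-period fixations that land near a location fixated at encoding. This is a minimal sketch, not the authors' analysis; the coordinates and the 2-degree radius are assumptions for the example.

```python
def reinstatement_score(encoding_fix, delay_fix, radius=2.0):
    # Proportion of delay-period fixations falling within `radius`
    # (degrees of visual angle) of at least one encoding fixation.
    def near(f, g):
        return ((f[0] - g[0]) ** 2 + (f[1] - g[1]) ** 2) ** 0.5 <= radius
    hits = sum(1 for d in delay_fix if any(near(d, e) for e in encoding_fix))
    return hits / len(delay_fix)

encoding = [(1.0, 1.0), (8.0, 2.0), (4.0, 7.0)]     # locations fixated at study
delay    = [(1.2, 0.8), (8.5, 2.1), (4.1, 6.6), (12.0, 12.0)]
score = reinstatement_score(encoding, delay)         # 3 of 4 delay fixations hit
```

A score of this kind could then be entered per trial as a predictor of subsequent change-detection accuracy, analogous to the reinstatement effects the study reports.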
Xin-Yu Xie; Cong Yu Double training downshifts the threshold vs. noise contrast (TvC) functions with perceptual learning and transfer Journal Article In: Vision Research, vol. 152, pp. 3–9, 2018. @article{Xie2018a, Location specific perceptual learning can transfer to a new location if the new location is trained with a secondary task that by itself does not impact the performance of the primary learning task (double training). Learning may also transfer to other locations when double training is performed at the same location. Here we investigated the mechanisms underlying double-training enabled learning and transfer with an external noise paradigm. Specifically, we measured the Vernier thresholds at various external noise contrasts before and after double training. Double training mainly vertically downshifts the TvC functions at the training and transfer locations, which may be interpreted as improved sampling efficiency in a linear amplifier model or a combination of internal noise reduction and external noise exclusion in a perceptual template model at both locations. The change of the TvC functions appears to be a high-level process that can be remapped from a training location to a new location after double training. |
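The "vertical downshift" interpretation in this abstract can be illustrated with the linear amplifier model, in which the contrast threshold at a fixed performance level grows with external noise once it exceeds the observer's equivalent internal noise, and sampling efficiency scales the whole function. This is a textbook-style sketch, not the authors' fitting code; the parameter names and values are assumed.

```python
import math

def lam_threshold(n_ext, efficiency, n_eq):
    # Linear amplifier model: contrast threshold at a fixed d' level.
    # n_ext: external noise contrast; n_eq: equivalent internal noise;
    # efficiency: sampling efficiency (higher = lower thresholds).
    return math.sqrt(n_ext ** 2 + n_eq ** 2) / efficiency

noise_levels = [0.0, 0.02, 0.04, 0.08, 0.16, 0.33]
pre  = [lam_threshold(n, efficiency=0.5, n_eq=0.05) for n in noise_levels]
post = [lam_threshold(n, efficiency=0.8, n_eq=0.05) for n in noise_levels]

# A pure efficiency gain divides the whole TvC function by a constant
# factor, i.e., a uniform vertical downshift in log-log coordinates.
ratios = [b / a for a, b in zip(pre, post)]
```

Because efficiency divides every threshold, the post/pre ratio is constant across all external noise levels, which is precisely what a vertical downshift of the TvC function looks like; a pure internal-noise reduction would instead shift only the low-noise limb.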
Yang Xie; Chechang Nie; Tianming Yang Covert shift of attention modulates the value encoding in the orbitofrontal cortex Journal Article In: eLife, vol. 7, pp. 1–21, 2018. @article{Xie2018, During value-based decision making, we often evaluate the value of each option sequentially by shifting our attention, even when the options are presented simultaneously. The orbitofrontal cortex (OFC) has been suggested to encode value during value-based decision making. Yet it is not known how its activity is modulated by attention shifts. We investigated this question by employing a passive viewing task that allowed us to disentangle effects of attention, value, choice and eye movement. We found that the attention modulated OFC activity through a winner-take-all mechanism. When we attracted the monkeys' attention covertly, the OFC neuronal activity reflected the reward value of the newly attended cue. The shift of attention could be explained by a normalization model. Our results strongly argue for the hypothesis that the OFC neuronal activity represents the value of the attended item. They provide important insights toward understanding the OFC's role in value-based decision making. |
Qiang Xing; Cuiliang Rong; Zheyi Lu; Yanfeng Yao; Zhonglu Zhang; Xue Zhao The effect of the embodied guidance in the insight problem solving: An eye movement study Journal Article In: Frontiers in Psychology, vol. 9, pp. 2257, 2018. @article{Xing2018, Insight is an important cognitive process in creative thinking. The present research applied an embodied cognitive perspective to explore the effect of embodied guidance on insight problem solving and its underlying mechanisms in two experiments. Experiment 1 used the matchstick arithmetic problem to explore the role of embodied gesture guidance in problem solving. The results showed that embodied gestures facilitated the participants' performance. Experiment 2 investigated how embodied attention guidance affects insight problem solving. The results showed that participants performed better in the prototypical guidance condition. Experiment 2a adopted Duncker's radiation problem to explore how embodied behavior and prototypical guidance influence problem solving using attention-tracing techniques. Experiment 2b aimed to further examine whether implicit attention transfer was the real cause of participants' better performance in the prototypical guidance condition in Experiment 2a. The results demonstrated that overt physical motion was unnecessary for individuals to experience the benefits of embodied guidance in problem solving, which supports the reciprocal-relation hypothesis of saccades and attention. In addition, a questionnaire completed after the experiments showed that participants did not realize the relation between guidance and insight problem solving. Taken together, the current study provides further evidence that embodied gesture and embodied attention both facilitate insight problem solving and that this facilitation is implicit. |
Mingliang Xu; Fuhai Chen; Lu Li; Chen Shen; Pei Lv; Bing Zhou; Rongrong Ji Bio-inspired deep attribute learning towards facial aesthetic prediction Journal Article In: IEEE Transactions on Affective Computing, vol. 3045, no. c, pp. 1–12, 2018. @article{Xu2018, Computational prediction of facial aesthetics has attracted ever-increasing research focus. The key challenge lies in extracting discriminative and perception-aware features to characterize facial beautifulness. To this end, existing schemes simply adopt a direct feature mapping, which relies on handcrafted low-level features that cannot reflect human-level aesthetic perception. In this paper, we present a systematic framework for designing biology-inspired, discriminative representations for facial aesthetic prediction. First, we design a group of biological experiments that use an eye tracker to identify spatial regions of interest during the facial aesthetic judgments of subjects, which forms a Bio-inspired Facial Aesthetic Ontology (Bio-FAO) that is made publicly available. Second, we adopt a cutting-edge convolutional neural network to train a set of bio-inspired attribute features, termed Bio-AttriBank, which forms a mid-level interpretable representation corresponding to the aforementioned Bio-FAO. For a given image, facial aesthetic prediction is then formulated as a classification problem over the Bio-AttriBank descriptor responses, which bridges the affective gap and provides explainable evidence on why and how a face is beautiful or not. We have carried out extensive experiments on both the JAFFE and FaceWarehouse datasets. Superior performance gains in the experiments demonstrate the merits of the proposed scheme. |
Motonori Yamaguchi; Ashvanti Valji; Felicity D. A. Wolohan Top-down contributions to attention shifting and disengagement: A template model of visual attention Journal Article In: Journal of Experimental Psychology: General, vol. 147, no. 6, pp. 859–887, 2018. @article{Yamaguchi2018, Two separate systems are involved in the control of spatial attention: one that is driven by a goal, and the other that is driven by stimuli. While the goal- and stimulus-driven systems follow different general principles, they also interact with each other. However, the mechanism by which the goal-driven system influences the stimulus-driven system is still debated. The present study examined top-down contributions to two components of attention orienting, shifting and disengagement, with an experimental paradigm in which participants held a visual item in short-term memory (STM) and performed a prosaccade task with a manipulation of the gap between fixation offset and target onset. Four experiments showed that the STM content accelerated shifting and impaired disengagement, but the influence on disengagement depended on the utility of STM in guiding attention toward the target. Thus, the use of STM was strategic. Computational models of visual attention were fitted to the experimental data, which suggested that the top-down contributions to shifting were more prominent than those to disengagement. The results indicate that the current modeling framework is particularly useful for examining the contributions of theoretical constructs to the control of visual attention, but they also suggest its limitations. |
Yin Yan; Li Zhaoping; Wu Li Bottom-up saliency and top-down learning in the primary visual cortex of monkeys Journal Article In: Proceedings of the National Academy of Sciences, vol. 115, no. 41, pp. 10499–10504, 2018. @article{Yan2018a, Early sensory cortex is better known for representing sensory inputs but less for the effect of its responses on behavior. Here we explore the behavioral correlates of neuronal responses in primary visual cortex (V1) in a task to detect a uniquely oriented bar (the orientation singleton) in a background of uniformly oriented bars. This singleton is salient or inconspicuous when the orientation contrast between the singleton and background bars is sufficiently large or small, respectively. Using implanted microelectrodes, we measured V1 activities while monkeys were trained to quickly saccade to the singleton. A neuron's responses to the singleton within its receptive field had an early and a late component, both increased with the orientation contrast. The early component started from the outset of neuronal responses; it remained unchanged before and after training on the singleton detection. The late component started ∼40 ms after the early one; it emerged and evolved with practicing the detection task. Training increased the behavioral accuracy and speed of singleton detection and increased the amount of information in the late response component about a singleton's presence or absence. Furthermore, for a given singleton, faster detection performance was associated with higher V1 responses; training increased this behavioral-neural correlate in the early V1 responses but decreased it in the late V1 responses. Therefore, V1's early responses are directly linked with behavior and represent the bottom-up saliency signals. Learning strengthens this link, likely serving as the basis for making the detection task more reflexive and less top-down driven. |
Tao Yao; Stefan Treue; B. Suresh Krishna Saccade-synchronized rapid attention shifts in macaque visual cortical area MT Journal Article In: Nature Communications, vol. 9, pp. 958, 2018. @article{Yao2018, While making saccadic eye movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades. |
Rachel Yep; Stephen Soncin; Donald C. Brien; Brian C. Coe; Alina Marin; Douglas P. Munoz In: Brain and Cognition, vol. 124, pp. 1–13, 2018. @article{Yep2018, Despite distinct diagnostic criteria, attention-deficit hyperactivity disorder (ADHD) and bipolar disorder (BD) share cognitive and emotion processing deficits that complicate diagnoses. The goal of this study was to use an emotional saccade task to characterize executive functioning and emotion processing in adult ADHD and BD. Participants (21 control, 20 ADHD, 20 BD) performed an interleaved pro/antisaccade task (look toward vs. look away from a visual target, respectively) in which the sex of emotional face stimuli acted as the cue to perform either the pro- or antisaccade. Both patient groups made more direction (erroneous prosaccades on antisaccade trials) and anticipatory (saccades made before cue processing) errors than controls. Controls exhibited lower microsaccade rates preceding correct anti- vs. prosaccade initiation, but this task-related modulation was absent in both patient groups. Regarding emotion processing, the ADHD group performed worse than controls on neutral face trials, while the BD group performed worse than controls on trials presenting faces of all valence. These findings support the role of fronto-striatal circuitry in mediating response inhibition deficits in both ADHD and BD, and suggest that such deficits are exacerbated in BD during emotion processing, presumably via dysregulated limbic system circuitry involving the anterior cingulate and orbitofrontal cortex. |
Peng-Yeng Yin; Rong-Fuh Day; Yu-Chi Wang Tabu search-based classification for eye-movement behavioral decisions Journal Article In: Neural Computing and Applications, vol. 29, no. 5, pp. 1433–1443, 2018. @article{Yin2018, Adaptive human–computer interfaces (HCIs) are fundamental to designing adaptive websites and adaptive decision support systems. Integrating these intelligent systems with modern eye trackers provides more effective ways to exploit eye fixation data and offers improved services to users. We develop an exemplar-based classifier using the tabu search algorithm to predict which decision strategy may underlie an empirical search behavior. Our algorithm reduces the size of decision concept representations to find the best exemplars for each concept. Experimental results show that our classifier is highly accurate in classifying the sequence of empirical eye fixations, demonstrating the promise of integrating adaptive HCIs with modern eye trackers. |
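The exemplar-reduction idea in this abstract can be sketched as a tabu search over small exemplar subsets scored by a nearest-exemplar classifier. This is a loose illustration of the general technique, not the authors' algorithm; the 1-D "fixation statistic" data, the subset size, and the tabu-list length are all invented for the example.

```python
import random

def nn_accuracy(exemplars, data):
    # Classify each point by the label of its nearest selected exemplar.
    correct = 0
    for x, label in data:
        nearest = min(exemplars, key=lambda e: abs(e[0] - x))
        correct += nearest[1] == label
    return correct / len(data)

def tabu_select(data, k, iters=200, tabu_len=5, seed=0):
    # Tabu search over exemplar subsets: swap one exemplar per move and
    # forbid recently removed exemplars for a few iterations.
    rng = random.Random(seed)
    current = rng.sample(data, k)
    best, best_acc = current[:], nn_accuracy(current, data)
    tabu = []
    for _ in range(iters):
        out_i = rng.randrange(k)
        candidates = [p for p in data if p not in current and p not in tabu]
        if not candidates:
            continue
        new = current[:]
        removed = new[out_i]
        new[out_i] = rng.choice(candidates)
        acc = nn_accuracy(new, data)
        # Accept non-worsening moves; aspiration: any move beating the best.
        if acc >= nn_accuracy(current, data) or acc > best_acc:
            current = new
            tabu.append(removed)
            tabu = tabu[-tabu_len:]
        if acc > best_acc:
            best, best_acc = new[:], acc
    return best, best_acc

# Toy "decision strategy" data: a 1-D fixation statistic with two classes.
rng = random.Random(1)
data = [(rng.gauss(0.0, 1.0), "A") for _ in range(30)] + \
       [(rng.gauss(4.0, 1.0), "B") for _ in range(30)]
exemplars, acc = tabu_select(data, k=4)
```

The tabu list is what distinguishes this from plain hill climbing: recently discarded exemplars cannot immediately re-enter, which forces the search out of shallow local optima.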
Aspen H. Yoo; Zuzanna Klyszejko; Clayton E. Curtis; Wei Ji Ma Strategic allocation of working memory resource Journal Article In: Scientific Reports, vol. 8, pp. 16162, 2018. @article{Yoo2018b, Visual working memory (VWM), the brief retention of past visual information, supports a range of cognitive functions. One of the defining, and widely studied, characteristics of VWM is how resource-limited it is, raising questions about how this resource is shared or split across memoranda. Since objects are rarely equally important in the real world, we ask how people split this resource in settings where objects have different levels of importance. In a psychophysical experiment, participants remembered the location of four targets with different probabilities of being tested after a delay. We then measured their memory accuracy for one of the targets. We found that participants allocated more resource to memoranda with higher priority, but underallocated resource to high- and overallocated to low-priority targets relative to the true probability of being tested. These results are well explained by a computational model in which resource is allocated to minimize expected estimation error. We replicated this finding in a second experiment in which participants bet on their memory fidelity after making the location estimate. The results of this experiment show that people have access to and utilize the quality of their memory when making decisions. Furthermore, people again allocate resource in a way that minimizes memory errors, even in a context in which an alternative strategy was incentivized. Our study not only shows that people are allocating resource according to behavioral relevance, but suggests that they are doing so with the aim of maximizing memory accuracy. |
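The under/over-allocation pattern in this abstract falls out of a simple version of the error-minimizing model. If per-item error scales as J^(-1/2) in allocated resource J, then minimizing the expected error sum p_i * J_i^(-1/2) under a fixed budget (via Lagrange multipliers) gives J_i proportional to p_i^(2/3) — a compressive mapping. This sketch assumes that specific error law; the priority values are illustrative.

```python
def optimal_allocation(priorities):
    # Minimize sum_i p_i * J_i**(-1/2) subject to sum_i J_i = 1.
    # Setting d/dJ_i [p_i * J_i**(-1/2)] equal across items yields
    # J_i proportional to p_i**(2/3).
    weights = [p ** (2.0 / 3.0) for p in priorities]
    total = sum(weights)
    return [w / total for w in weights]

priorities = [0.6, 0.3, 0.1]          # probability each item is tested
shares = optimal_allocation(priorities)
```

Because the exponent 2/3 compresses the priorities, the high-priority item receives less than its test probability and the low-priority item more, qualitatively matching the behavior the study reports.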
Sang-Ah Yoo; John K. Tsotsos; Mazyar Fallah The attentional suppressive surround: Eccentricity, location-based and feature-based effects and interactions Journal Article In: Frontiers in Neuroscience, vol. 12, pp. 710, 2018. @article{Yoo2018a, The Selective Tuning model of visual attention (Tsotsos, 1990) has proposed that the focus of attention is surrounded by an inhibitory zone, eliciting a center-surround attentional distribution. This attentional suppressive surround inhibits irrelevant information which is located close to attended information in physical space (e.g., Cutzu and Tsotsos, 2003; Hopf et al., 2010) or in feature space (e.g., Bartsch et al., 2017; Störmer and Alvarez, 2014; Tombu and Tsotsos, 2008). In Experiment 1, we investigate the interaction between location-based and feature-based surround suppression and hypothesize that the attentional surround suppression would be maximized when spatially adjacent stimuli are also represented closely within a feature map. Our results demonstrate that perceptual discrimination is worst when two similar orientations are presented in proximity to each other, suggesting the interplay of the two surround suppression mechanisms. The Selective Tuning model also predicts that the size of the attentional suppressive surround is determined by the receptive field size of the neuron which optimally processes the attended information. The receptive field size of the processing neurons is tightly associated with stimulus size and eccentricity. Therefore, Experiment 2 tested the hypothesis that the size of the attentional suppressive surround would become larger as stimulus size and eccentricity increase, corresponding to an increase in the neuron's receptive field size. We show that stimulus eccentricity but not stimulus size modulates the size of the attentional suppressive surround. These results are consistent for both low- and high-level features (e.g., orientation and human faces). 
Overall, the present study supports the existence of the attentional suppressive surround and reveals new properties of this selection mechanism. |
Seng Bum Michael Yoo; Brianna J. Sleezer; Benjamin Y. Hayden Robust encoding of spatial information in orbitofrontal cortex and striatum Journal Article In: Journal of Cognitive Neuroscience, vol. 30, no. 6, pp. 898–913, 2018. @article{Yoo2018, Knowing whether core reward regions carry information about the positions of relevant objects is crucial for adjudicating between choice models. One limitation of previous studies, including our own, is that spatial positions can be consistently differentially associated with rewards, and thus position can be confounded with attention, motor plans, or target identity. We circumvented these problems by using a task in which value (and thus choices) was determined solely by a frequently changing rule, which was randomized relative to spatial position on each trial. We presented offers asynchronously, which allowed us to control for reward expectation, spatial attention, and motor plans in our analyses. We find robust encoding of the spatial position of both offers and choices in two core reward regions, orbitofrontal Area 13 and ventral striatum, as well as in dorsal striatum of macaques. The trial-by-trial correlation in noise in encoding of position was associated with variation in choice, an effect known as choice probability correlation, suggesting that the spatial encoding is associated with choice and is not incidental to it. Spatial information and reward information are not carried by separate sets of neurons, although the two forms of information are temporally dissociable. These results highlight the ubiquity of multiplexed information in association cortex and argue against the idea that these ostensible reward regions serve as part of a pure value domain. |
Jorrig Vogels; Vera Demberg; Jutta Kray The index of cognitive activity as a measure of cognitive processing load in dual task settings Journal Article In: Frontiers in Psychology, vol. 9, pp. 2276, 2018. @article{Vogels2018, Increases in pupil size have long been used as an indicator of cognitive load. Recently, the Index of Cognitive Activity (ICA), a novel pupillometric measure, has received increased attention. The ICA measures the frequency of rapid pupil dilations, and is an interesting complementary measure to overall pupil size because it disentangles the pupil response to cognitive activity from effects of light input. As such, it has been evaluated as a useful measure of processing load in dual task settings coordinating language comprehension and driving. However, the cognitive underpinnings of pupillometry, and any differences between rapid small dilations as measured by the ICA and overall effects on pupil size are still poorly understood. Earlier work has observed that the ICA and overall pupil size may not always behave in the same way, reporting an increase in overall pupil size but a decrease in ICA in a dual task setting. To further investigate this, we systematically tested two new dual-task combinations, combining both language comprehension and simulated driving with a memory task. Our findings confirm that more difficult linguistic processing is reflected in a larger ICA. More importantly, however, the dual task settings did not result in an increase in the ICA as compared to the single task, and, consistent with earlier findings, showed a significant decrease with a more difficult secondary task. This contrasts with our findings for pupil size, which showed an increase with greater secondary task difficulty in both tasks. Our results are compatible with the idea that although both pupillometry measures are indicators of cognitive load, they reflect different cognitive and neuronal processes in dual task situations. |
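The ICA itself is a proprietary wavelet-based measure, but the underlying idea — counting rapid dilation events rather than tracking overall size — can be approximated very crudely from the first derivative of the pupil trace. The detector below, its threshold, and the synthetic trace are all assumptions for illustration, not the ICA algorithm.

```python
def rapid_dilation_rate(pupil, sample_rate, threshold):
    # Count onsets of rapid dilation: samples where the first difference
    # first crosses the threshold (rising edges only), then convert the
    # count to events per second.
    diffs = [b - a for a, b in zip(pupil, pupil[1:])]
    events = sum(
        1 for prev, cur in zip([0.0] + diffs, diffs)
        if cur > threshold and prev <= threshold
    )
    return events * sample_rate / len(pupil)

# Synthetic trace (100 Hz): slow drift plus three abrupt dilation bursts.
trace = [3.0 + 0.001 * t for t in range(300)]
for onset in (50, 150, 250):
    for i in range(onset, 300):
        trace[i] += 0.2            # step dilation at each burst onset
rate = rapid_dilation_rate(trace, sample_rate=100, threshold=0.05)
```

The slow linear drift (analogous to a luminance-driven size change) never crosses the derivative threshold, so only the three abrupt bursts are counted — the same dissociation from overall pupil size that motivates the ICA.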
Dimitris Voudouris; Jeroen B. J. Smeets; Katja Fiehler; Eli Brenner Gaze when reaching to grasp a glass Journal Article In: Journal of Vision, vol. 18, no. 8, pp. 1–12, 2018. @article{Voudouris2018, People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip—thus when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' gaze was biased to some extent toward the position of the next action, but gaze was not influenced consistently across participants. Gaze was also not influenced consistently across the experiments for individual participants—even for those who participated in both experiments. We conclude that gaze is not simply determined by the identity of the digit or by details of the contact points, such as their visibility, but that gaze is just as sensitive to other factors, such as where one will manipulate the object after grasping. |
Weibing Wan; Lingfeng Yuan; Qunfei Zhao; Tao Fang Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification Journal Article In: Journal of Electronic Imaging, vol. 27, no. 1, pp. 1–12, 2018. @article{Wan2018, Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' change in posture and viewpoint. To validate the model with a human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, it indicates the plausibility of utilizing eye-tracking data to identify targets. |
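The 2D-Viterbi step in this abstract extends the classic Viterbi recursion to two image axes. The 1-D version below shows the core recursion on a toy labeling problem; the states, probabilities, and quantized "feature codes" are invented for the example, not taken from the paper.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # Standard Viterbi decoding in log space; the paper's 2D variant
    # applies the same dynamic-programming recursion along both axes.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prob, prev = max(
                (V[-1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states
            )
            row[s], ptr[s] = prob, prev
        V.append(row)
        back.append(ptr)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Toy example: label image patches as salient vs. background from
# quantized feature codes (0 = flat, 1 = textured).
states = ("background", "salient")
start = {"background": 0.7, "salient": 0.3}
trans = {"background": {"background": 0.8, "salient": 0.2},
         "salient": {"background": 0.3, "salient": 0.7}}
emit = {"background": {0: 0.9, 1: 0.1},
        "salient": {0: 0.2, 1: 0.8}}
labels = viterbi([0, 0, 1, 1, 1, 0], states, start, trans, emit)
```

The transition probabilities encode spatial smoothness — salient patches tend to neighbor salient patches — which is what lets the hidden-state decoding produce coherent salient regions rather than isolated pixels.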
Benchi Wang; Jan Theeuwes How to inhibit a distractor location? Statistical learning versus active, top-down suppression Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 4, pp. 860–870, 2018. @article{Wang2018b, Recently, Wang and Theeuwes (Journal of Experimental Psychology: Human Perception and Performance, 44(1), 13–17, 2018a) demonstrated the role of lingering selection biases in an additional singleton search task in which the distractor singleton appeared much more often in one location than in all other locations. For this location, there was less capture and selection efficiency was reduced. It was argued that statistical learning induces plasticity within the spatial priority map such that particular locations that are highly likely to contain a distractor are suppressed relative to all other locations. The current study replicated these findings regarding statistical learning (Experiment 1) and investigated whether similar effects can be obtained by cueing the distractor location in a top-down way on a trial-by-trial basis. The results show that top-down cueing of the distractor location with long (1,500 ms; Experiment 2) and short stimulus-onset asynchronies (SOAs) (600 ms; Experiment 3) does not result in suppression: Neither the amount of capture nor the efficiency of selection was affected by the cue. If anything, we found an attentional benefit (instead of suppression) for the short SOA. We argue that through statistical learning, weights within the attentional priority map are changed such that one location containing a salient distractor is suppressed relative to all other locations. Our cueing experiments show that this effect cannot be accomplished by active, top-down suppression. Consequences for recent theories of distractor suppression are discussed. |
Benchi Wang; Jan Theeuwes Statistical regularities modulate attentional capture independent of search strategy Journal Article In: Attention, Perception, and Psychophysics, vol. 80, pp. 1763–1774, 2018. @article{Wang2018c, An earlier study using the additional singleton task showed that statistical regularities regarding the distractor location can cause an attentional bias that affects the amount of attentional capture by distractors and the efficiency of selection of targets. The distractor singleton was systematically present more often in one location than in all other locations. The present study investigated whether this bias also occurs when observers adopt a feature search mode, i.e., when they search for a specific feature (circle) among elements with different shapes, while ignoring a colored distractor singleton. It is assumed that in feature search, observers can ignore distractors in a top-down way, and as such one expects that statistical regularities about the distractor location should not play a role. Contrary to this prediction, we found that even in feature search, both attentional capture by the distractors and the efficiency of selecting the target were impacted by these statistical regularities. Moreover, statistical regularities regarding the feature value of the distractor (its color) had no effect on the amount of capture or the efficiency of selection. We claim that statistical regularities cause passive lingering biases of attention such that on the priority map, the location containing a high probability distractor competes less for attention than locations that are less likely to contain distractors. |
Benchi Wang; Chuyao Yan; Raymond M. Klein; Zhiguo Wang Inhibition of return revisited: Localized inhibition on top of a pervasive bias Journal Article In: Psychonomic Bulletin & Review, vol. 25, no. 5, pp. 1861–1867, 2018. @article{Wang2018a, An inhibitory after-effect of attention, frequently referred to as inhibition of return (IOR), operates at a previously attended location to discourage perseverative orienting. Using the classic cueing task, previous work has shown that IOR is not restricted to a previously attended location, but rather spreads to adjacent visual space in a graded manner. The present study expands on this earlier work by exploring a wider visual region and a broader range of cue-target onset asynchronies (CTOAs) to characterize the temporal dynamics of the IOR gradient. The results reveal that the magnitude of IOR generated by cueing decreases exponentially as the CTOA increases. The width of the IOR gradient first increases and then decreases, with a temporal profile that is well captured by an alpha function. Importantly, the present study reveals that in addition to its rapidly decaying local properties, cue-induced IOR can include a pervasive inhibitory bias, which remains relatively stable across IOR's lifetime. |
Chin-An Wang; Talia Baird; Donald C. Brien; Jeff Huang; Douglas P. Munoz; Jonathan D. Coutinho Arousal effects on pupil size, heart rate, and skin conductance in an emotional face task Journal Article In: Frontiers in Neurology, vol. 9, pp. 1029, 2018. @article{Wang2018e, Arousal level changes constantly and it has a profound influence on performance during everyday activities. Fluctuations in arousal are regulated by the autonomic nervous system, which is mainly controlled by the balanced activity of the parasympathetic and sympathetic systems, commonly indexed by heart rate (HR) and galvanic skin response (GSR), respectively. Although a growing number of studies have used pupil size to indicate the level of arousal, research that directly examines the relationship between pupil size and HR or GSR is limited. The goal of this study was to understand how pupil size is modulated by autonomic arousal. Human participants fixated various emotional face stimuli, whose low-level visual properties were carefully controlled, while their pupil size, HR, GSR, and eye position were recorded simultaneously. We hypothesized that a positive correlation between pupil size and HR or GSR would be observed both before and after face presentation. Trial-by-trial positive correlations between pupil diameter and HR and GSR were found before face presentation, with larger pupil diameter observed on trials with higher HR or GSR. However, task-evoked pupil responses after face presentation only correlated with HR. Overall, these results demonstrated a trial-by-trial relationship between pupil size and HR or GSR, suggesting that pupil size can be used as an index for arousal level involuntarily regulated by the autonomic nervous system. |
Chin-An Wang; Jeff Huang; Rachel Yep; Douglas P. Munoz Comparing pupil light response modulation between saccade planning and working memory Journal Article In: Journal of Cognition, vol. 1, no. 1, pp. 1–14, 2018. @article{Wang2018g, The signature of spatial attention effects has been demonstrated through saccade planning and working memory. Although saccade planning and working memory have been commonly linked to attention, the comparison of effects resulting from saccade planning and working memory is less explored. It has recently been shown that spatial attention interacts with local luminance at the attended location. When bright and dark patch stimuli are presented simultaneously in the periphery, thereby producing no change in global luminance, pupil size is nonetheless smaller when the locus of attention overlaps with the bright, compared to the dark patch stimulus (referred to as the local luminance modulation). Here, we used the local luminance modulation to directly compare the effects of saccade planning and spatial working memory. Participants were required to make a saccade towards a visual target location (visual-delay) or a remembered target location (memory-delay) after a variable delay, and the bright and dark patch stimuli were presented during the delay period between target onset and go signal. Greater pupil constriction was observed when the bright patch, compared to the dark patch, was spatially aligned with the target location in both tasks. However, the effects were diminished when there was no contingency implemented between the patch and target locations, particularly in the memory-delay task. Together, our results suggest the involvement of similar, but not identical, attentional mechanisms through saccade planning and working memory, and highlight a promising potential of local pupil luminance responses for probing visuospatial processing. |
Jiahui Wang; Pavlo Antonenko; Mehmet Celepkolu; Yerika Jimenez; Ethan Fieldman; Ashley Fieldman Exploring relationships between eye tracking and traditional usability testing data Journal Article In: International Journal of Human-Computer Interaction, pp. 1–12, 2018. @article{Wang2018d, This study explored the relationships between eye tracking and traditional usability testing data in the context of analyzing the usability of Algebra Nation™, an online system for learning mathematics used by hundreds of thousands of students. Thirty-five undergraduate students (20 females) completed seven usability tasks in the Algebra Nation™ online learning environment. The participants were asked to log in, select an instructor for the instructional video, post a question on the collaborative wall, search for an explanation of a mathematics concept on the wall, find information relating to Karma Points (an incentive for engagement and learning), and watch two instructional videos of varied content difficulty. Participants' eye movements (fixations and saccades) were simultaneously recorded by an eye tracker. Usability testing software was used to capture all participants' interactions with the system, task completion time, and task difficulty ratings. Upon finishing the usability tasks, participants completed the System Usability Scale. Important relationships were identified between the eye movement metrics and traditional usability testing metrics such as task difficulty rating and completion time. Eye tracking data were investigated quantitatively using aggregated fixation maps, and qualitative examination was performed on video replay of participants' fixation behavior. Augmenting the traditional usability testing methods, eye movement analysis provided additional insights regarding revisions to the interface elements associated with these usability tasks. |
Lihui Wang; Sheng Li; Xiaolin Zhou; Jan Theeuwes Stimuli that signal the availability of reward break into attentional focus Journal Article In: Vision Research, vol. 144, pp. 20–28, 2018. @article{Wang2018j, Mounting evidence has shown that a task-irrelevant, previously reward-associated stimulus can capture attention even when attending to this stimulus impairs the processing of the current target. Here we investigate whether a stimulus that merely signals the availability of reward could capture attention and interfere with target processing when it is located outside of attentional focus. In three experiments, a target was always presented at the bottom of the lower visual field to attract focal attention. A distractor signalling high or low reward availability for the current trial was presented around the target with a variable distance between them. This distractor was task-irrelevant; getting distracted by it could potentially result in an omission of reward. For the high-reward condition, the distractor located adjacent to the target more severely interfered with target processing than the distractor at a relatively distant location; for the low-reward condition, distractors at different locations had the same impact upon target processing. Relative to the low-reward distractor, the high-reward distractor impaired target processing, but only at the location adjacent to the target. When the target location was uncertain such that attention was unable to be directed to the target in advance, the high-reward distractor interfered with target processing at both the adjacent and distant locations. Overall, these results suggest that a task-irrelevant stimulus can break into the focus of attention by simply signalling the availability of reward even when getting distracted by this stimulus is counterproductive to obtaining reward. |
Bao Zhang; Shuhui Liu; Mattia Doro; Giovanni Galfano Attentional guidance from multiple working memory representations: Evidence from eye movements Journal Article In: Scientific Reports, vol. 8, pp. 13876, 2018. @article{Zhang2018d, Recent studies have shown that the representation of an item in visual working memory (VWM) can bias the deployment of attention to stimuli in the visual scene possessing the same features. When multiple item representations are simultaneously held in VWM, whether these representations, especially those held in a non-prioritized or accessory status, are able to bias attention, is still controversial. In the present study we adopted an eye tracking technique to shed light on this issue. In particular, we implemented a manipulation aimed at prioritizing one of the VWM representations to an active status, and tested whether attention could be guided by both the prioritized and the accessory representations when they reappeared as distractors in a visual search task. Notably, in Experiment 1, an analysis of first fixation proportion (FFP) revealed that both the prioritized and the accessory representations were able to capture attention, suggesting a significant attentional guidance effect. However, such an effect was not present in manual response times (RT). Most critically, in Experiment 2, we used a more robust experimental design controlling for different factors that might have played a role in shaping these findings. The results showed evidence for attentional guidance from the accessory representation in both manual RTs and FFPs. Interestingly, FFPs showed a stronger attentional bias for the prioritized representation than for the accessory representation across experiments. The overall findings suggest that multiple VWM representations, even the accessory representation, can simultaneously interact with visual attention. |
Jun-Yun Zhang; Cong Yu Vernier learning with short- and long-staircase training and its transfer to a new location with double training Journal Article In: Journal of Vision, vol. 18, no. 13, pp. 1–8, 2018. @article{Zhang2018b, We previously demonstrated that perceptual learning of Vernier discrimination, when paired with orientation learning at the same retinal location, can transfer completely to untrained locations (Wang, Zhang, Klein, Levi, & Yu, 2014; Zhang, Wang, Klein, Levi, & Yu, 2011). However, Hung and Seitz (2014) reported that the transfer is possible only when Vernier is trained with short staircases, but not with very long staircases. Here we ran two experiments to examine Hung and Seitz's conclusions. The first experiment confirmed the transfer effects with short-staircase Vernier training in both our study and Hung and Seitz's. The second experiment revealed that long-staircase training only produced very fast learning at the beginning of the pretraining session, but with no further learning afterward. Moreover, the learning and transfer effects differed insignificantly with a small effect size, making it difficult to support Hung and Seitz's claim that learning with long-staircase training cannot transfer to an untrained retinal location. |
Mengmi Zhang; Jiashi Feng; Keng Teck Ma; Joo Hwee Lim; Qi Zhao; Gabriel Kreiman Finding any Waldo with zero-shot invariant and efficient visual search Journal Article In: Nature Communications, vol. 9, pp. 3730, 2018. @article{Zhang2018, Searching for a target object in a cluttered scene constitutes a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the appearance of the target, efficient to avoid exhaustive exploration of the image, and must generalize to locate novel target objects with zero-shot training. Previous work on visual search has focused on searching for perfect matches of a target after extensive category-specific training. Here, we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and which can generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes. |
Xilin Zhang; Nicole Mlynaryk; Sara Ahmed; Shruti Japee; Leslie G. Ungerleider The role of inferior frontal junction in controlling the spatially global effect of feature-based attention in human visual areas Journal Article In: PLoS Biology, vol. 16, no. 6, pp. e2005399, 2018. @article{Zhang2018f, Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was derived by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex. |
Yan Zhang; Yu Xiang; Ying Guo; Lili Zhang Beauty-related perceptual bias: Who captures the mind of the beholder? Journal Article In: Brain and Behavior, vol. 8, no. 5, pp. 1–7, 2018. @article{Zhang2018a, Introduction: To explore the beauty-related perceptual bias and answer the question: Who can capture the mind of the beholder? Many studies have explored the specificity of human faces through ERP or other ways, and the materials they used are general human faces and other objects. Therefore, we want to further explore the difference between attractive faces and beautiful objects such as flowers. Methods: We recorded the eye movements of 22 male observers and 23 female observers using a standard two-alternative forced choice. Results: (1) The attractive faces were looked at longer and more often in comparison with the beautiful flowers; (2) fixation counts were higher for female participants than for male participants; and (3) the participants watched the beautiful flowers first, followed by the attractive faces, but there was no significant difference in first fixation duration between the beautiful flowers and the attractive faces. Conclusions: The data in this study may suggest that people prefer attractive faces to beautiful flowers. |
Jing Zhao; Hang Yang; Xuchu Weng; Zhiguo Wang Emergent attentional bias toward visual word forms in the environment: Evidence from eye movements Journal Article In: Frontiers in Psychology, vol. 9, pp. 1378, 2018. @article{Zhao2018, Young children are frequently exposed to environmental prints (e.g., billboards and product labels) that contain visual word forms on a daily basis. As the visual word forms in environmental prints are frequently used to convey information critical to an individual's survival and wellbeing (e.g., "STOP" in the stop sign), it is conceivable that an attentional bias toward words in the environment may emerge as the reading ability of young children develops. Empirical findings relevant to this issue, however, are inconclusive so far. The present study examines this issue in children in the early stages of formal reading training (grades 1, 3, and 5) with the eye-tracking technique. Children viewed images with word and non-word visual information (environmental prints) and images with the same words in standard typeface on a plain background (standard prints). For children in grade 1, the latency of their first fixations on words in environmental prints was longer than those in standard prints. This latency cost, however, was markedly reduced in grades 3 and 5, suggesting that in older children an attentional bias toward words has emerged to help filter out the non-word visual information in environmental prints. Importantly, this attentional bias was found to correlate moderately with word reading ability. These findings show that an attentional bias toward visual word forms emerges shortly after the start of formal schooling and it is closely linked to the development of reading skills. |
Wenxi Zhou; Haoyu Chen; Jiongjiong Yang Discriminative learning of similar objects enhances memory for the objects and contexts Journal Article In: Learning and Memory, vol. 25, no. 12, pp. 601–610, 2018. @article{Zhou2018e, How to improve our episodic memory is an important issue in the field of memory. In the present study, we used a discriminative learning paradigm that was similar to a paradigm used in animal studies. In Experiment 1, a picture (e.g., a dog) was either paired with an identical picture, with a similar picture of the same concept (e.g., another dog), or with a picture of a different concept (e.g., a cat). Then, after intervals of 10 min, 1 d, and 1 wk, participants were asked to perform a 2-alternative forced-choice (2AFC) task to discriminate between a repeated and a similar picture, followed by the contextual judgment. In Experiment 2, eye movements were measured when participants encoded the pairs of pictures. The results showed that by discriminative learning, there was better memory performance in the 2AFC task for the "same" and "similar" conditions than for the "different" condition. In addition, there was better contextual memory performance for the "similar" condition than for the other two conditions. With regard to the eye movements, the participants were more likely to fixate on the lure objects and made more saccades between the target and lure objects in the "similar" (versus "different") condition. The number of saccades predicted how well the targets were remembered in both the 2AFC and contextual memory tasks. These results suggested that with discriminative learning of similar objects, detailed information could be better encoded by distinguishing the object from similar interferences, making the details and the contexts better remembered and retained over time. |
Ariel Zylberberg; Daniel M. Wolpert; Michael N. Shadlen Counterfactual reasoning underlies the learning of priors in decision making Journal Article In: Neuron, vol. 99, no. 5, pp. 1083–1097.e6, 2018. @article{Zylberberg2018, Accurate decisions require knowledge of prior probabilities (e.g., prevalence or base rate), but it is unclear how prior probabilities are learned in the absence of a teacher. We hypothesized that humans could learn base rates from experience making decisions, even without feedback. Participants made difficult decisions about the direction of dynamic random dot motion. Across blocks of 15–42 trials, the base rate favoring left or right varied. Participants were not informed of the base rate or choice accuracy, yet they gradually biased their choices and thereby increased accuracy and confidence in their decisions. They achieved this by updating knowledge of base rate after each decision, using a counterfactual representation of confidence that simulates a neutral prior. The strategy is consistent with Bayesian updating of belief and suggests that humans represent both true confidence, which incorporates the evolving belief of the prior, and counterfactual confidence, which discounts the prior. Zylberberg et al. show that human decision makers can learn environmental biases from sequences of difficult decisions, without feedback about accuracy, by calculating the belief that the decisions would have been correct in an unbiased environment—a form of counterfactual confidence. |
Regine Zopf; Marina Butko; Alexandra Woolgar; Mark A. Williams; Anina N. Rich Representing the location of manipulable objects in shape-selective occipitotemporal cortex: Beyond retinotopic reference frames? Journal Article In: Cortex, vol. 106, pp. 132–150, 2018. @article{Zopf2018, When interacting with objects, we have to represent their location relative to our bodies. To facilitate bodily reactions, location may be encoded in the brain not just with respect to the retina (retinotopic reference frame), but also in relation to the head, trunk or arm (collectively spatiotopic reference frames). While spatiotopic reference frames for location encoding can be found in brain areas for action planning, such as parietal areas, there is debate about the existence of spatiotopic reference frames in higher-level occipitotemporal visual areas. In an extensive multi-voxel pattern analysis (MVPA) fMRI study using faces, headless bodies and scenes stimuli, Golomb and Kanwisher (2012) did not find evidence for spatiotopic reference frames in shape-selective occipitotemporal cortex. This finding is important for theories of how stimulus location is encoded in the brain. It is possible, however, that their failure to find spatiotopic reference frames is related to their stimuli: we typically do not manipulate faces, headless bodies or scenes. It is plausible that we only represent body-centred location when viewing objects that are typically manipulated. Here, we tested for object location encoding in shape-selective occipitotemporal cortex using manipulable object stimuli (balls and cups) in a MVPA fMRI study. We employed Bayesian analyses to determine sample size and evaluate the sensitivity of our data to test the hypothesis that location can be encoded in a spatiotopic reference frame in shape-selective occipitotemporal cortex over the null hypothesis of no spatiotopic location encoding. 
We found strong evidence for retinotopic location encoding consistent with previous findings that retinotopic reference frames are common neural representations of object location. In contrast, when testing for spatiotopic encoding, we found evidence that object location information for small manipulable objects is not decodable in relation to the body in shape-selective occipitotemporal cortex. Post-hoc exploratory analyses suggested that spatiotopic aspects might modulate retinotopic location encoding. |
Rongjuan Zhu; Yangmei Luo; Xuqun You; Ziyu Wang Spatial bias induced by simple addition and subtraction: From eye movement evidence Journal Article In: Perception, vol. 47, no. 2, pp. 143–157, 2018. @article{Zhu2018, The associations between number and space have been intensively investigated. Recent studies indicated that this association could extend to more complex tasks, such as mental arithmetic. However, the mechanism of arithmetic-space associations in mental arithmetic is still a topic of debate. Thus, in the current study, we adopted eye-tracking technology to investigate whether the spatial bias induced by mental arithmetic was related to spatial attention shifts on the mental number line or to a semantic link between the operator and space. In Experiment 1, participants moved their eyes to the corresponding response area according to the cues after solving addition and subtraction problems. The results showed that the participants moved their eyes faster to the leftward space after solving subtraction problems and faster to the right after solving addition problems. However, there was no spatial bias observed when the second operand was zero in the same time window, which indicated that the emergence of spatial bias may be associated with spatial attention shifts on the mental number line. In Experiment 2, participants responded to the operator (operation plus and operation minus) with their eyes. The results showed that mere presentation of the operator did not cause spatial bias. Therefore, the arithmetic–space associations might be related to movement along the mental number line. |
Louis Williams; Eugene McSorley; Rachel McCloy The relationship between aesthetic and drawing preferences Journal Article In: Psychology of Aesthetics, Creativity, and the Arts, vol. 12, no. 3, pp. 259–271, 2018. @article{Williams2018, There are suggested to be similarities between what is aesthetically preferred and artistically produced; however, little research has been conducted that directly examines this relationship and its links to expertise. Here, we examined the artistic process of artists and nonartists using geometric shapes as stimuli, investigating aesthetic (how pleasing they find the shapes) and drawing preferences (which shape they would prefer to draw out of a choice of two). We examined the cognitive processes behind these preferences using eye-tracking methods both when viewing stimuli and when making drawing preferences. Drawing preference scores increased with increasing aesthetic ratings regardless of expertise. We find gaze behavior when free-viewing to reflect behavior when making a drawing preference, as both artists and nonartists fixated on aesthetically preferred stimuli first, for longer, and more often. Artists' gaze behavior when free-viewing was also influenced by what they would prefer to draw. This suggests that artists have a more fluid relationship than nonartists between images aesthetically preferred and those preferred for drawing. Overall, we demonstrate that there is a relationship between aesthetic preference and artistic preference for production, and this varies with expertise. |
Tommy J. Wilson; Michael J. Gray; Jan Willem Van Klinken; Melissa Kaczmarczyk; John J. Foxe Macronutrient composition of a morning meal and the maintenance of attention throughout the morning Journal Article In: Nutritional Neuroscience, vol. 21, no. 10, pp. 729–743, 2018. @article{Wilson2018, At present, the impact of macronutrient composition and nutrient intake on sustained attention in adults is unclear, although some prior work suggests that nutritive interventions that engender slow, steady glucose availability support sustained attention after consumption. A separate line of evidence suggests that nutrient consumption may alter electroencephalographic markers of neurophysiological activity, including neural oscillations in the alpha-band (8-14 Hz), which are known to be richly interconnected with the allocation of attention. It is here investigated whether morning ingestion of foodstuffs with differing macronutrient compositions might differentially impact the allocation of sustained attention throughout the day as indexed by both behavior and the deployment of attention-related alpha-band activity. METHODS: Twenty-four adult participants were recruited into a three-day study with a cross-over design that employed a previously validated sustained attention task (the Spatial CTET). On each experimental day, subjects consumed one of three breakfasts with differing carbohydrate availabilities (oatmeal, cornflakes, and water) and completed blocks of the Spatial CTET throughout the morning while behavioral performance, subjective metrics of hunger/fullness, and electroencephalographic (EEG) measurements of alpha oscillatory activity were recorded. RESULTS: Although behavior and electrophysiological metrics changed over the course of the day, no differences in their trajectories were observed as a function of breakfast condition. 
However, subjective metrics of hunger/fullness revealed that caloric interventions (oatmeal and cornflakes) reduced hunger across the experimental day with respect to the non-caloric, volume-matched control (water). Yet, no differences in hunger/fullness were observed between the oatmeal and cornflakes interventions. CONCLUSION: Observation of a relationship between macronutrient intervention and sustained attention (if one exists) will require further standardization of empirical investigations to aid in the synthesis and replicability of results. In addition, continued implementation of neurophysiological markers in this domain is encouraged, as they often produce nuanced insight into cognition even in the absence of overt behavioral changes. |
Luca Wollenberg; Heiner Deubel; Martin Szinte Visual attention is not deployed at the endpoint of averaging saccades Journal Article In: PLoS Biology, vol. 16, no. 6, pp. e2006548, 2018. @article{Wollenberg2018, The premotor theory of attention postulates that spatial attention arises from the activation of saccade areas and that the deployment of attention is the consequence of motor programming. Yet attentional and oculomotor processes have been shown to be dissociable at the neuronal level in covert attention tasks. To investigate a potential dissociation at the behavioral level, we instructed human participants to move their eyes (saccade) towards 1 of 2 nearby, competing saccade targets. The spatial distribution of visual attention was determined using oriented visual stimuli presented either at the target locations, between them, or at several other equidistant locations. Results demonstrate that accurate saccades towards one of the targets were associated with presaccadic enhancement of visual sensitivity at the respective saccade endpoint compared to the nonsaccaded target location. In contrast, averaging saccades, landing between the 2 targets, were not associated with attentional facilitation at the saccade endpoint. Rather, attention before averaging saccades was equally deployed at the 2 target locations. Taken together, our results reveal that visual attention is not obligatorily coupled to the endpoint of a subsequent saccade. Rather, our results suggest that the oculomotor program depends on the state of attentional selection before saccade onset and that saccade averaging arises from unresolved attentional selection. |
Chia-Chien Wu; Jeremy M. Wolfe Comparing eye movements during position tracking and identity tracking: No evidence for separate systems Journal Article In: Attention, Perception, and Psychophysics, vol. 80, no. 2, pp. 453–460, 2018. @article{Wu2018, There is an ongoing debate as to whether people track multiple moving objects in a serial fashion or with a parallel mechanism. One recent study compared eye movements when observers tracked identical objects (Multiple Object Tracking-MOT task) versus when they tracked the identities of different objects (Multiple Identity Tracking-MIT task). Distinct eye-movement patterns were found and attributed to two separate tracking systems. However, the same results could be caused by differences in the stimuli viewed during tracking. In the present study, object identities in the MIT task were invisible during tracking, so observers performed MOT and MIT tasks with identical stimuli. Observers were able to track either position or identity, depending on the task. There was no difference in eye movements between position tracking and identity tracking. This result suggests that, while observers can use different eye-movement strategies in MOT and MIT, it is not necessary. |
Sergej Wuethrich; Deborah E. Hannula; Fred W. Mast; Katharina Henke Subliminal encoding and flexible retrieval of objects in scenes Journal Article In: Hippocampus, vol. 28, no. 9, pp. 633–643, 2018. @article{Wuethrich2018, Our episodic memory stores what happened when and where in life. Episodic memory requires the rapid formation and flexible retrieval of where things are located in space. Consciousness of the encoding scene is considered crucial for episodic memory formation. Here, we question the necessity of consciousness and hypothesize that humans can form unconscious episodic memories. Participants were presented with subliminal scenes, that is, scenes invisible to the conscious mind. The scenes displayed objects at certain locations for participants to form unconscious object-in-space memories. Later, the same scenes were presented supraliminally, that is, visibly, for retrieval testing. Scenes were presented absent the objects and rotated by 90°–270° in perspective to assess the representational flexibility of unconsciously formed memories. During the test phase, participants performed a forced-choice task that required them to place an object in one of two highlighted scene locations and their eye movements were recorded. Evaluation of the eye tracking data revealed that participants remembered object locations unconsciously, irrespective of changes in viewing perspective. This effect of gaze was related to correct placements of objects in scenes, and an intuitive decision style was necessary for unconscious memories to influence intentional behavior to a significant degree. We conclude that conscious perception is not mandatory for spatial episodic memory formation. |
Andreas Wutz; Roman Loonis; Jefferson E. Roy; Jacob A. Donoghue; Earl K. Miller Different levels of category abstraction by different dynamics in different prefrontal areas Journal Article In: Neuron, vol. 97, no. 3, pp. 716–726.e8, 2018. @article{Wutz2018, Categories can be grouped by shared sensory attributes (i.e., cats) or a more abstract rule (i.e., animals). We explored the neural basis of abstraction by recording from multi-electrode arrays in prefrontal cortex (PFC) while monkeys performed a dot-pattern categorization task. Category abstraction was varied by the degree of exemplar distortion from the prototype pattern. Different dynamics in different PFC regions processed different levels of category abstraction. Bottom-up dynamics (stimulus-locked gamma power and spiking) in the ventral PFC processed more low-level abstractions, whereas top-down dynamics (beta power and beta spike-LFP coherence) in the dorsal PFC processed more high-level abstractions. Our results suggest a two-stage, rhythm-based model for abstracting categories. Wutz et al. show that different levels of category abstraction engage different oscillatory dynamics in different prefrontal cortex (PFC) areas. This suggests a functional specialization within PFC for low-level, stimulus-based categories (e.g., cats) and high-level, rule-based categories (e.g., animals). |
Tarkeshwar Singh; Christopher M. Perry; Stacy L. Fritz; Julius Fridriksson; Troy M. Herter Eye movements interfere with limb motor control in stroke survivors Journal Article In: Neurorehabilitation and Neural Repair, vol. 32, no. 8, pp. 724–734, 2018. @article{Singh2018, Background. Humans use voluntary eye movements to actively gather visual information during many activities of daily living, such as driving, walking, and preparing meals. Most stroke survivors have difficulties performing these functional motor tasks, and we recently demonstrated that stroke survivors who require many saccades (rapid eye movements) to plan reaching movements exhibit poor motor performance. However, the nature of this relationship remains unclear. Objective. Here we investigate if saccades interfere with speed and smoothness of reaching movements in stroke survivors, and if excessive saccades are associated with difficulties performing functional tasks. Methods. We used a robotic device and eye tracking to examine reaching and saccades in stroke survivors and age-matched controls who performed the Trail Making Test, a visuomotor task that uses organized patterns of saccades to plan reaching movements. We also used the Stroke Impact Scale to examine difficulties performing functional tasks. Results. Compared with controls, stroke survivors made many saccades during ongoing reaching movements, and most of these saccades closely preceded transient decreases in reaching speed. We also found that the number of saccades that stroke survivors made during ongoing reaching movements was strongly associated with slower reaching speed, decreased reaching smoothness, and greater difficulty performing functional tasks. Conclusions. Our findings indicate that poststroke interference between eye and limb movements may contribute to difficulties performing functional tasks. This suggests that interventions aimed at treating impaired organization of eye movements may improve functional recovery after stroke. |
Magali H. Sivakumaran; Andrew K. Mackenzie; Imogen R. Callan; James A. Ainge; Akira R. O'Connor The discrimination ratio derived from novel object recognition tasks as a measure of recognition memory sensitivity, not bias Journal Article In: Scientific Reports, vol. 8, pp. 11579, 2018. @article{Sivakumaran2018, Translational recognition memory research makes frequent use of the Novel Object Recognition (NOR) paradigm in which animals are simultaneously presented with one new and one old object. The preferential exploration of the new as compared to the old object produces a metric, the Discrimination Ratio (DR), assumed to represent recognition memory sensitivity. Human recognition memory studies typically assess performance using signal detection theory derived measures; sensitivity (d′) and bias (c). How DR relates to d′ and c and whether they measure the same underlying cognitive mechanism is, however, unknown. We investigated the correspondence between DR (eye-tracking-determined), d′ and c in a sample of 37 humans. We used dwell times during a visual paired comparison task (analogous to the NOR) to determine DR, and a separate single item recognition task to derive estimates of response sensitivity and bias. DR was found to be significantly positively correlated to sensitivity but not bias. Our findings confirm that DR corresponds to d′, the primary measure of recognition memory sensitivity in humans, and appears not to reflect bias. These findings are the first of their kind to suggest that animal researchers should be confident in interpreting the DR as an analogue of recognition memory sensitivity. |
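The Sivakumaran et al. entry above compares three quantities that have standard textbook definitions: the eye-tracking Discrimination Ratio (DR) and the signal detection theory measures d′ (sensitivity) and c (bias). The sketch below shows those standard formulas only, not the authors' analysis code; note in particular that DR conventions vary between labs, so the difference-over-sum form used here is an assumption.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit: inverse of the standard-normal CDF

def discrimination_ratio(novel_dwell, familiar_dwell):
    """Eye-tracking DR: relative preference for the novel item.
    Here (novel - familiar) / (novel + familiar), so 0 = chance;
    some labs instead use novel / (novel + familiar), where chance = 0.5."""
    return (novel_dwell - familiar_dwell) / (novel_dwell + familiar_dwell)

def sdt_measures(hit_rate, false_alarm_rate):
    """Signal detection theory: sensitivity d' and response bias c,
    computed from hit and false-alarm rates strictly inside (0, 1)."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    c = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, c
```

The pattern the abstract reports corresponds to DR correlating with d′ across participants while being uncorrelated with c.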
Ian W. Skinner; Markus Hübscher; G. Lorimer Moseley; Hopin Lee; Benedict M. Wand; Adrian C. Traeger; Sylvia M. Gustin; James H. McAuley The reliability of eyetracking to assess attentional bias to threatening words in healthy individuals Journal Article In: Behavior Research Methods, vol. 50, no. 5, pp. 1778–1792, 2018. @article{Skinner2018, Eyetracking is commonly used to investigate attentional bias. Although some studies have investigated the internal consistency of eyetracking, data are scarce on the test–retest reliability and agreement of eyetracking to investigate attentional bias. This study reports the test–retest reliability, measurement error, and internal consistency of 12 commonly used outcome measures thought to reflect the different components of attentional bias: overall attention, early attention, and late attention. Healthy participants completed a preferential-looking eyetracking task that involved the presentation of threatening (sensory words, general threat words, and affective words) and nonthreatening words. We used intraclass correlation coefficients (ICCs) to measure test–retest reliability (ICC > .70 indicates adequate reliability). The ICCs(2, 1) ranged from –.31 to .71. Reliability varied according to the outcome measure and threat word category. Sensory words had a lower mean ICC (.08) than either affective words (.32) or general threat words (.29). A longer exposure time was associated with higher test–retest reliability. All of the outcome measures, except second-run dwell time, demonstrated low measurement error (< 6%). Most of the outcome measures reported high internal consistency (α > .93). Recommendations are discussed for improving the reliability of eyetracking tasks in future research. |
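The ICC(2,1) used in the Skinner et al. entry above is the two-way random-effects, absolute-agreement, single-measures intraclass correlation of Shrout & Fleiss (1979). A minimal sketch of that standard computation follows; it illustrates the formula only and makes no claim about the authors' actual pipeline.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-measures intraclass correlation (Shrout & Fleiss, 1979).
    `data` is a list of rows, one per subject, each holding one
    score per session (or rater)."""
    n = len(data)      # subjects
    k = len(data[0])   # sessions / raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # sessions
    sse = (sum((x - grand) ** 2 for row in data for x in row)
           - msr * (n - 1) - msc * (k - 1))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

On the classic Shrout & Fleiss example matrix (6 targets, 4 judges), this returns their published ICC(2,1) of .29, below the .70 adequacy criterion the abstract applies.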
Lauren K. Slone; Scott P. Johnson When learning goes beyond statistics: Infants represent visual sequences in terms of chunks Journal Article In: Cognition, vol. 178, pp. 92–102, 2018. @article{Slone2018, Much research has documented infants' sensitivity to statistical regularities in auditory and visual inputs, however the manner in which infants process and represent statistically defined information remains unclear. Two types of models have been proposed to account for this sensitivity: statistical models, which posit that learners represent statistical relations between elements in the input; and chunking models, which posit that learners represent statistically-coherent units of information from the input. Here, we evaluated the fit of these two types of models to behavioral data that we obtained from 8-month-old infants across four visual sequence-learning experiments. Experiments examined infants' representations of two types of structures about which statistical and chunking models make contrasting predictions: illusory sequences (Experiment 1) and embedded sequences (Experiments 2–4). In all four experiments, infants discriminated between high probability sequences and low probability part-sequences, providing strong evidence of learning. Critically, infants also discriminated between high probability sequences and statistically-matched sequences (illusory sequences in Experiment 1, embedded sequences in Experiments 2–3), suggesting that infants learned coherent chunks of elements. Experiment 4 examined the temporal nature of chunking, and demonstrated that the fate of embedded chunks depends on amount of exposure. These studies contribute important new data on infants' visual statistical learning ability, and suggest that the representations that result from infants' visual statistical learning are best captured by chunking models. |
Christine N. Smith; Larry R. Squire Awareness of what is learned as a characteristic of hippocampus-dependent memory Journal Article In: Proceedings of the National Academy of Sciences, vol. 115, no. 47, pp. 11947–11952, 2018. @article{Smith2018b, We explored the relationship between memory performance and conscious knowledge (or awareness) of what has been learned in memory-impaired patients with hippocampal lesions or larger medial temporal lesions. Participants viewed familiar scenes or familiar scenes where a change had been introduced. Patients identified many fewer of the changes than controls. Across all of the scenes, controls preferentially directed their gaze toward the regions that had been changed whenever they had what we term robust knowledge about the change: They could identify that a change occurred, report what had changed, and indicate where the change occurred. Preferential looking did not occur when they were unaware of the change or had only partial knowledge about it. The patients, overall, did not direct their gaze toward the regions that had been changed, but on the few occasions when they had robust knowledge about the change they (like controls) did exhibit this effect. Patients did not exhibit this effect when they were unaware of the change or had partial knowledge. The findings support the idea that awareness of what has been learned is a key feature of hippocampus-dependent memory. |
Stephanie M. Smith; Ian Krajbich Attention and choice across domains Journal Article In: Journal of Experimental Psychology: General, vol. 147, no. 12, pp. 1810–1826, 2018. @article{Smith2018, When people are faced with a decision, they tend to choose the option that draws their attention. In recent years, correlations between attention and choice have been documented in a variety of domains. This leads to the question of whether there is a general, stable relationship between attention and choice. Here, we examined choice behavior in tasks with and without risk and social considerations, using food or monetary rewards, within a single experiment. This allowed us to test the consistency of the decision-making process across domains. In the aggregate, we identified remarkable consistency in the attention-choice link. At the individual level, subjects with strong attentional effects in one task were likely to have strong attentional effects in the others. The strength of these effects also correlated with individuals' degree of tunnel vision. Thus, the attention–choice relationship appears to be a stable individual trait that is linked to more general attentional constraints. |
Joshua Snell; Sebastiaan Mathôt; Jonathan Mirault; Jonathan Grainger Parallel graded attention in reading: A pupillometric study Journal Article In: Scientific Reports, vol. 8, pp. 3743, 2018. @article{Snell2018b, There are roughly two lines of theory to account for recent evidence that word processing is influenced by adjacent orthographic information. One line assumes that multiple words can be processed simultaneously through a parallel graded distribution of visuo-spatial attention. The other line assumes that attention is strictly directed to single words, but that letter detectors are connected to both foveal and parafoveal feature detectors, as such driving parafoveal-foveal integrative effects. Putting these two accounts to the test, we build on recent research showing that the pupil responds to the brightness of covertly attended (i.e., without looking) locations in the visual field. Experiment 1 showed that foveal target word processing was facilitated by related parafoveal flanking words when these were positioned to the left and right of the target, but not when these were positioned above and below the target. Perfectly in line with this asymmetry, in Experiment 2 we found that the pupil size was contingent with the brightness of the locations of horizontally but not vertically aligned flankers, indicating that attentional resources were allocated to those words involved in the parafoveal-on-foveal effect. We conclude that orthographic parafoveal-on-foveal effects are driven by parallel graded attention. |
Adam C. Snyder; Byron M. Yu; Matthew A. Smith Distinct population codes for attention in the absence and presence of visual stimulation Journal Article In: Nature Communications, vol. 9, pp. 4382, 2018. @article{Snyder2018a, Visual neurons respond more vigorously to an attended stimulus than an unattended one. How the brain prepares for response gain in anticipation of that stimulus is not well understood. One prominent proposal is that anticipation is characterized by gain-like modulations of spontaneous activity similar to gains in stimulus responses. Here we test an alternative idea: anticipation is characterized by a mixture of both increases and decreases of spontaneous firing rates. Such a strategy would be adaptive as it supports a simple linear scheme for disentangling internal, modulatory signals from external, sensory inputs. We recorded populations of V4 neurons in monkeys performing an attention task, and found that attention states are signaled by different mixtures of neurons across the population in the presence or absence of a stimulus. Our findings support a move from a stimulation-invariant account of anticipation towards a richer view of attentional modulation in a diverse neuronal population. |
Rodolfo Solís-Vivanco; Ole Jensen; Mathilde Bonnefond Top–down control of alpha phase adjustment in anticipation of temporally predictable visual stimuli Journal Article In: Journal of Cognitive Neuroscience, vol. 30, no. 8, pp. 1157–1169, 2018. @article{SolisVivanco2018, Alpha oscillations (8–14 Hz) are proposed to represent an active mechanism of functional inhibition of neuronal processing. Specifically, alpha oscillations are associated with pulses of inhibition repeating every ∼100 msec. Whether alpha phase, similar to alpha power, is under top–down control remains unclear. Moreover, the sources of such putative top–down phase control are unknown. We designed a cross-modal (visual/auditory) attention study in which we used magnetoencephalography to record the brain activity from 34 healthy participants. In each trial, a somatosensory cue indicated whether to attend to either the visual or auditory domain. The timing of the stimulus onset was predictable across trials. We found that, when visual information was attended, anticipatory alpha power was reduced in visual areas, whereas the phase adjusted just before the stimulus onset. Performance in each modality was predicted by the phase of the alpha oscillations previous to stimulus onset. Alpha oscillations in the left pFC appeared to lead the adjustment of alpha phase in visual areas. Finally, alpha phase modulated stimulus-induced gamma activity. Our results confirm that alpha phase can be top–down adjusted in anticipation of predictable stimuli and improve performance. Phase adjustment of the alpha rhythm might serve as a neurophysiological resource for optimizing visual processing when temporal predictions are possible and there is considerable competition between target and distracting stimuli. |
Chen Song; Geraint Rees Intra-hemispheric integration underlies perception of tilt illusion Journal Article In: NeuroImage, vol. 175, pp. 80–90, 2018. @article{Song2018, The integration of inputs across the entire visual field into a single conscious experience is fundamental to human visual perception. This integrated nature of visual experience is illustrated by contextual illusions such as the tilt illusion, in which the perceived orientation of a central grating appears tilted away from its physical orientation, due to the modulation by a surrounding grating with a different orientation. Here we investigated the relative contribution of local, intra-hemispheric and global, inter-hemispheric integration mechanisms to perception of the tilt illusion. We used Dynamic Causal Modelling of fMRI signals to estimate effective connectivity in human early visual cortices (V1, V2, V3) during bilateral presentation of a tilt illusion stimulus. Our analysis revealed that neural responses associated with the tilt illusion were modulated by intra- rather than inter-hemispheric connectivity. Crucially, across participants, intra-hemispheric connectivity in V1 correlated with the magnitude of the tilt illusion, while no such correlation was observed for V1 inter-hemispheric connectivity, or V2, V3 connectivity. Moreover, when the illusion stimulus was presented unilaterally rather than bilaterally, the illusion magnitude did not change. Together our findings suggest that perception of the tilt illusion reflects an intra-hemispheric integration mechanism. This is in contrast to the existing literature, which suggests inter-hemispheric modulation of neural activity as early as V1. This discrepancy with our findings may reflect the diversity and complexity of integration mechanisms involved in visual processing and visual perception. |
Teresa Sousa; Alexandre Sayal; João V. Duarte; Gabriel N. Costa; Ricardo Martins; Miguel Castelo-Branco Evidence for distinct levels of neural adaptation to both coherent and incoherently moving visual surfaces in visual area hMT+ Journal Article In: NeuroImage, vol. 179, pp. 540–547, 2018. @article{Sousa2018, Visual adaptation describes the processes by which the visual system alters its operating properties in response to changes in the environment. It is one of the mechanisms controlling visual perceptual bistability – when two perceptual solutions are available – by controlling the duration of each percept. Moving plaids are an example of such ambiguity. They can be perceived as two surfaces sliding incoherently over each other or as a single coherent surface. Here, we investigated, using fMRI, whether activity in the human motion complex (hMT+), a region tightly related to the perceptual integration of visual motion, is modulated by distinct forms of visual adaptation to coherent or incoherent perception of moving plaids. Our hypothesis is that exposure to global coherent or incoherent moving stimuli leads to different levels of measurable adaptation, reflected in hMT+ activity. We found that the strength of the measured visual adaptation effect depended on whether subjects integrated (coherent percept) or segregated (incoherent percept) surface motion signals. Visual motion adaptation was significant both for coherent motion and globally incoherent surface motion. Although not as strong as adaptation to the coherent percept, visual adaptation due to the incoherent percept also affects hMT+. This shows that adaptation can contribute to regulate percept duration during visual bistability, with distinct weights, depending on the type of percept. Our findings suggest a link between bistability and adaptation mechanisms, both due to coherent and incoherent motion percepts, but in an asymmetric manner. These asymmetric adaptation weights have strong implications for models of perceptual decision and may explain the asymmetry of perceptual interpretation periods. |
Calandra Speirs; Zorry Belchev; Amanda Fernandez; Stephanie Korol; Christopher Sears Are there age differences in attention to emotional images following a sad mood induction? Evidence from a free-viewing eye-tracking paradigm Journal Article In: Aging, Neuropsychology, and Cognition, vol. 25, no. 6, pp. 928–957, 2018. @article{Speirs2018, Two experiments examined age differences in the effect of a sad mood induction (MI) on attention to emotional images. Younger and older adults viewed sets of four images while their eye gaze was tracked throughout an 8-s presentation. Images were viewed before and after a sad MI to assess the effect of a sad mood on attention to positive and negative scenes. Younger and older adults exhibited positively biased attention after the sad MI, significantly increasing their attention to positive images, with no evidence of an age difference in either experiment. A test of participants' recognition memory for the images indicated that the sad MI reduced memory accuracy for sad images for younger and older adults. The results suggest that heightened attention to positive images following a sad MI reflects an affect regulation strategy related to mood repair. The implications for theories of the positivity effect are discussed. |
Sara Spotorno; Megan Evans; Margaret C. Jackson Remembering who was where: A happy expression advantage for face identity-location binding in working memory Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 44, no. 9, pp. 1365–1383, 2018. @article{Spotorno2018, It is well established that visual working memory (WM) for face identity is enhanced when faces display threatening versus nonthreatening expressions. During social interaction, it is also important to bind person identity with location information in WM to remember who was where, but we lack a clear understanding of how emotional expression influences this. Here, we conducted two touchscreen experiments to investigate how angry versus happy expressions displayed at encoding influenced the precision with which participants relocated a single neutral test face to its original position. Maintenance interval was manipulated (Experiment 2; 1 s, 3 s, 6 s) to assess durability of binding. In both experiments, relocation accuracy was enhanced when faces were happy versus angry, and this happy benefit endured from 1-s to 6-s maintenance interval. Eye movement measures during encoding showed no convincing effects of oculomotor behavior that could readily explain the happy benefit. However, accuracy in general was improved, and the happy benefit was strongest for the last, most recent face fixated at encoding. Improved, durable binding of who was where in the presence of a happy expression may reflect the importance of prosocial navigation. |
Tobias Staudigl; Marcin Leszczynski; Joshua Jacobs; Sameer A. Sheth; Charles E. Schroeder; Ole Jensen; Christian F. Doeller Hexadirectional modulation of high-frequency electrophysiological activity in the human anterior medial temporal lobe maps visual space Journal Article In: Current Biology, vol. 28, pp. 1–5, 2018. @article{Staudigl2018, Grid cells are one of the core building blocks of spatial navigation [1]. Single-cell recordings of grid cells in the rodent entorhinal cortex revealed hexagonal coding of the local environment during spatial navigation [1]. Grid-like activity has also been identified in human single-cell recordings during virtual navigation [2]. Human fMRI studies further provide evidence that grid-like signals are also accessible on a macroscopic level [3–7]. Studies in both nonhuman primates [8] and humans [9, 10] suggest that grid-like coding in the entorhinal cortex generalizes beyond spatial navigation during locomotion, providing evidence for grid-like mapping of visual space during visual exploration—akin to the grid cell positional code in rodents during spatial navigation. However, electrophysiological correlates of the grid code in humans remain unknown. Here, we provide evidence for grid-like, hexadirectional coding of visual space by human high-frequency activity, based on two independent datasets: non-invasive magnetoencephalography (MEG) in healthy subjects and entorhinal intracranial electroencephalography (EEG) recordings in an epileptic patient. Both datasets consistently show a hexadirectional modulation of broadband high-frequency activity (60–120 Hz). Our findings provide the first evidence for a grid-like MEG signal, indicating that the human entorhinal cortex codes visual space in a grid-like manner [8–10], and support the view that grid coding generalizes beyond environmental mapping during locomotion [4–6, 11]. Due to their millisecond accuracy, MEG recordings allow linking of grid-like activity to epochs during relevant behavior, thereby opening up the possibility for new MEG-based investigations of grid coding at high temporal resolution. |
Maria Steffens; C. Neumann; Anna-Maria Kasparbauer; B. Becker; Bernd Weber; Mitul A. Mehta; R. Hurlemann; Ulrich Ettinger Effects of ketamine on brain function during response inhibition Journal Article In: Psychopharmacology, vol. 235, no. 12, pp. 3559–3571, 2018. @article{Steffens2018, Introduction The uncompetitive N-methyl-D-aspartate receptor (NMDAR) antagonist ketamine has been proposed to model symptoms of psychosis. Inhibitory deficits in the schizophrenia spectrum have been reliably reported using the antisaccade task. Interestingly, although similar antisaccade deficits have been reported following ketamine in non-human primates, ketamine-induced deficits have not been observed in healthy human volunteers. Methods To investigate the effects of ketamine on brain function during an antisaccade task, we conducted a double-blind, placebo-controlled, within-subjects study on n = 15 healthy males. We measured the blood oxygen level dependent (BOLD) response and eye movements during a mixed antisaccade/prosaccade task while participants received a subanesthetic dose of intravenous ketamine (target plasma level 100 ng/ml) on one occasion and placebo on the other occasion. Results While ketamine significantly increased self-ratings of psychosis-like experiences, it did not induce antisaccade or prosaccade performance deficits. At the level of BOLD, we observed an interaction between treatment and task condition in somatosensory cortex, suggesting recruitment of additional neural resources in the antisaccade condition under NMDAR blockage. Discussion Given the robust evidence of antisaccade deficits in schizophrenia spectrum populations, the current findings suggest that ketamine may not mimic all features of psychosis at the dose used in this study. Our findings underline the importance of more detailed research to further understand and define the effects of NMDAR hypofunction on human brain function and behavior, with a view to applying ketamine administration as a model system of psychosis. Future studies with varying doses will be of importance in this context. |
Natalie A. Steinemann; Redmond G. O'Connell; Simon P. Kelly Decisions are expedited through multiple neural adjustments spanning the sensorimotor hierarchy Journal Article In: Nature Communications, vol. 9, pp. 3627, 2018. @article{Steinemann2018, When decisions are made under speed pressure, “urgency” signals elevate neural activity toward action-triggering thresholds independent of the sensory evidence, thus incurring a cost to choice accuracy. While urgency signals have been observed in brain circuits involved in preparing actions, their influence at other levels of the sensorimotor pathway remains unknown. We used a novel contrast-comparison paradigm to simultaneously trace the dynamics of sensory evidence encoding, evidence accumulation, motor preparation, and muscle activation in humans. Results indicate speed pressure impacts multiple sensorimotor levels but in crucially distinct ways. Evidence-independent urgency was applied to cortical action-preparation signals and downstream muscle activation, but not directly to upstream levels. Instead, differential sensory evidence encoding was enhanced in a way that partially countered the negative impact of motor-level urgency on accuracy, and these opposing sensory-boost and motor-urgency effects had knock-on effects on the buildup and pre-response amplitude of a motor-independent representation of cumulative evidence. |
Lisa J. Stephenson; S. Gareth Edwards; Emma E. Howard; Andrew P. Bayliss Eyes that bind us: Gaze leading induces an implicit sense of agency Journal Article In: Cognition, vol. 172, pp. 124–133, 2018. @article{Stephenson2018, Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention – causing another person to follow our eye movement – can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named ‘temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions. |
Emma E. M. Stewart; Alexander C. Schütz Attention modulates trans-saccadic integration Journal Article In: Vision Research, vol. 142, pp. 1–10, 2018. @article{Stewart2018c, With every saccade, humans must reconcile the low resolution peripheral information available before a saccade, with the high resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset, and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration. |
Emma E. M. Stewart; Alexander C. Schütz Optimal trans-saccadic integration relies on visual working memory Journal Article In: Vision Research, vol. 153, pp. 70–81, 2018. @article{Stewart2018b, Saccadic eye movements alter the visual processing of objects of interest by bringing them from the periphery, where there is only low-resolution vision, to the high-resolution fovea. Evidence suggests that people are able to achieve trans-saccadic integration in a near-optimal manner; however, the mechanisms underlying integration are still unclear. Visual working memory (VWM) is sustained across a saccade, and it has been suggested that this memory resource is used to store and compare the pre- and post-saccadic percepts. This study directly tested the hypothesis that VWM is necessary for optimal trans-saccadic integration, by introducing memory load during a saccade and testing subsequent integration performance on feature-similar and dissimilar stimuli. Results show that integration performance was impaired when there was an additional memory task. Additionally, performance on the memory task was affected by feature-specific integration stimuli. Our results suggest that VWM supports the integration of pre- and post-saccadic stimuli because integration performance is impaired under VWM load. |
Jacob L. Stubbs; Sherryse L. Corrow; Benjamin R. Kiang; William J. Panenka; Jason J. S. Barton The effects of enhanced attention and working memory on smooth pursuit eye movement Journal Article In: Experimental Brain Research, vol. 236, no. 2, pp. 485–495, 2018. @article{Stubbs2018, It has long been suggested that increasing attentional demands can alter smooth pursuit eye movements, but the precise nature of the changes generated is not clear. Our goal was to examine smooth pursuit with a task that enhanced attention to the target and that increased demands on working memory, without distracting from the target. 15 subjects tracked a target moving around a predictable circular trajectory at a constant tangential velocity. An n-back task with two levels of additional working memory load was integrated into the pursuit target to increase cognitive demands. In the single-task conditions, subjects either performed pursuit alone or the n-back task with a stationary target. In the dual-task conditions, pursuit and the n-back task were performed together. Performance of the n-back tasks was not impaired by simultaneous smooth pursuit. The n-back tasks had negligible effects on horizontal or vertical pursuit gain, but generated increased phase lag and reduced the variability of position error during pursuit. Increasing the difficulty of the n-back task further reduced the variability of position errors. We conclude that enhanced attention does not alter the velocity gain of smooth pursuit but rather improves its consistency. As long as attention remains focused on the target, increased attentional demands further reduce pursuit variability. Increases in phase lag may serve to improve attentional processing of the target. |
Marta Suárez-Pinilla; Anil K. Seth; Warrick Roseboom Serial dependence in the perception of visual variance Journal Article In: Journal of Vision, vol. 18, no. 7, pp. 1–24, 2018. @article{SuarezPinilla2018, The recent history of perceptual experience has been shown to influence subsequent perception. Classically, this dependence on perceptual history has been examined in sensory-adaptation paradigms, wherein prolonged exposure to a particular stimulus (e.g., a vertically oriented grating) produces changes in perception of subsequently presented stimuli (e.g., the tilt aftereffect). More recently, several studies have investigated the influence of shorter perceptual exposure with effects, referred to as serial dependence, being described for a variety of low- and high-level perceptual dimensions. In this study, we examined serial dependence in the processing of dispersion statistics, namely variance, a key descriptor of the environment and indicative of the precision and reliability of ensemble representations. We found two opposite serial dependences operating at different timescales, and likely originating at different processing levels: a positive, Bayesian-like bias was driven by the most recent exposures, dependent on feature-specific decision making and appearing only when high confidence was placed in that decision; and a longer-lasting negative bias, akin to an adaptation aftereffect, became manifest as the positive bias declined. Both effects were independent of spatial presentation location and the similarity of other close traits, such as mean direction of the visual variance stimulus. These findings suggest that visual variance processing occurs in high-level areas but is also subject to a combination of multilevel mechanisms balancing perceptual stability and sensitivity, as with many different perceptual dimensions. |