EyeLink Cognition Publications
All EyeLink cognition and perception research publications up until 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Visual Search, Scene Perception, Face Processing, etc. You can also search for individual author names. If we missed any EyeLink cognition or perception articles, please email us!
2015 |
Stefania Vito; Antimo Buonocore; Jean François Bonnefon; Sergio Della Sala Eye movements disrupt episodic future thinking Journal Article In: Memory, vol. 23, no. 6, pp. 796–805, 2015. @article{Vito2015, Remembering the past and imagining the future both rely on complex mental imagery. We considered the possibility that constructing a future scene might tap a component of mental imagery that is not as critical for remembering past scenes. Whereas visual imagery plays an important role in remembering the past, we predicted that spatial imagery plays a crucial role in imagining the future. For the purpose of teasing apart the different components underpinning scene construction in the two experiences of recalling episodic memories and shaping novel future events, we used a paradigm that might selectively affect one of these components (i.e., the spatial). Participants performed concurrent eye movements while remembering the past and imagining the future. These concurrent eye movements selectively interfere with spatial imagery, while sparing visual imagery. Eye movements prevented participants from imagining complex and detailed future scenes, but had no comparable effect on the recollection of past scenes. Similarities between remembering the past and imagining the future are coupled with some differences. The present findings uncover another fundamental divergence between the two processes. |
Sergio Delle Monache; Francesco Lacquaniti; Gianfranco Bosco Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions Journal Article In: Experimental Brain Research, vol. 233, no. 2, pp. 359–374, 2015. @article{DelleMonache2015, Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0g) or hypergravity (2g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of amounts of time they gazed at different visual targets and of overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. 
Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response. |
Loni Desanghere; Jonathan J. Marotta The influence of object shape and center of mass on grasp and gaze Journal Article In: Frontiers in Psychology, vol. 6, pp. 1537, 2015. @article{Desanghere2015, Recent experiments examining where participants look when grasping an object found that fixations favour the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects – where COM and horizontal midline overlap. Less research has been aimed at looking at how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the objects horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline of the objects based on the alteration of the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects when compared to its influence on grasp locations, with fixation locations more sensitive to these manipulations. 
Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction. |
Tinne Dewolf; Wim Van Dooren; Frouke Hermens; Lieven Verschaffel Do students attend to representational illustrations of non-standard mathematical word problems, and, if so, how helpful are they? Journal Article In: Instructional Science, vol. 43, no. 1, pp. 147–171, 2015. @article{Dewolf2015, During the last two decades various researchers confronted upper elementary and lower secondary school pupils with word problems that were problematic from a realistic modelling point of view (so-called P-items), and found that pupils in general did not use their everyday knowledge to solve such P-items. Several attempts were undertaken to encourage learners to use their everyday knowledge more when solving such problems, e.g., by presenting the P-items together with representational illustrations that represent the problematic situation described in the problem. These illustrations were expected to help learners to mentally imagine the situation and consequently solve the items more realistically. However, no effect of the illustrations was found. In this article we build further on the use of representational illustrations. We report two related experiments with higher education students that investigated whether and how illustrations that represent the problematic situation described in a P-item help to imagine the problem situation and thereby solve the problem more realistically. In Experiment 1 we measured students' eye movements when solving P-items that were accompanied by representational illustrations, to analyse whether the illustrations are processed at all. In Experiment 2 we manipulated the presentation of the illustrations so students could not avoid looking at them before the word problem appeared. We found that students scarcely looked at the representational illustrations (Experiment 1) and when they did, there was no effect of the illustrations on the realistic nature of their solutions (Experiment 2). Possible explanations for these findings are discussed. 
|
Matteo Visconti Di Oleggio Castello; M. Ida Gobbini Familiar face detection in 180ms Journal Article In: PLoS ONE, vol. 10, no. 8, pp. e0136548, 2015. @article{DiOleggioCastello2015, The visual system is tuned for rapid detection of faces, with the fastest choice saccade to a face at 100ms. Familiar faces have a more robust representation than do unfamiliar faces, and are detected faster in the absence of awareness and with reduced attentional resources. Faces of familiar and close friends become familiar over a protracted period involving learning the unique visual appearance, including a view-invariant representation, as well as person knowledge. We investigated the effect of personal familiarity on the earliest stages of face processing by using a saccadic-choice task to measure how fast familiar face detection can happen. Subjects made correct and reliable saccades to familiar faces when unfamiliar faces were distractors at 180ms-very rapid saccades that are 30 to 70ms earlier than the earliest evoked potential modulated by familiarity. By contrast, accuracy of saccades to unfamiliar faces with familiar faces as distractors did not exceed chance. Saccades to faces with object distractors were even faster (110 to 120ms) and equivalent for familiar and unfamiliar faces, indicating that familiarity does not affect ultra-rapid saccades. We propose that detectors of diagnostic facial features for familiar faces develop in visual cortices through learning and allow rapid detection that precedes explicit recognition of identity. |
Gregory J. DiGirolamo; David Smelson; Nathan Guevremont Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues Journal Article In: Addictive Behaviors, vol. 47, pp. 86–90, 2015. @article{DiGirolamo2015, Introduction: Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Methods: Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. Results: CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Conclusion: Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. 
Hence, it may be useful to identify high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. |
Mithun Diwakar; Deborah L. Harrington; Jun Maruta; Jamshid Ghajar; Fady El-Gabalawy; Laura Muzzatti; Maurizio Corbetta; Ming-Xiong X. Huang; Roland R. Lee Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury Journal Article In: NeuroImage: Clinical, vol. 8, pp. 210–223, 2015. @article{Diwakar2015, A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. 
Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontal areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries. |
Helen F. Dodd; Jennifer L. Hudson; Tracey A. Williams; Talia Morris; Rebecca S. Lazarus; Yulisha Byrow Anxiety and attentional bias in preschool-aged children: An eyetracking study Journal Article In: Journal of Abnormal Child Psychology, vol. 43, no. 6, pp. 1055–1065, 2015. @article{Dodd2015, Extensive research has examined attentional bias for threat in anxious adults and school-aged children but it is unclear when this anxiety-related bias is first established. This study uses eyetracking technology to assess attentional bias in a sample of 83 children aged 3 or 4 years. Of these, 37 (19 female) met criteria for an anxiety disorder and 46 (30 female) did not. Gaze was recorded during a free-viewing task with angry-neutral face pairs presented for 1250 ms. There was no indication of between-group differences in threat bias, with both anxious and non-anxious groups showing vigilance for angry faces as well as longer dwell times to angry over neutral faces. Importantly, however, the anxious participants spent significantly less time looking at the faces overall, when compared to the non-anxious group. The results suggest that both anxious and non-anxious preschool-aged children preferentially attend to threat but that anxious children may be more avoidant of faces than non-anxious children. |
Peter H. Donaldson; Caroline T. Gurvich; Joanne Fielding; Peter G. Enticott Exploring associations between gaze patterns and putative human mirror neuron system activity Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 523, 2015. @article{Donaldson2015, The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18–40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern. |
Dandan Bi; Buxin Han Age-related differences in attention and memory toward emotional stimuli Journal Article In: PsyCh Journal, vol. 4, no. 3, pp. 155–159, 2015. @article{Bi2015, From the perspectives of time perception and motivation, socioemotional selectivity theory (SST) postulates that in comparison with younger adults, older adults tend to prefer positive stimuli and avoid negative stimuli. Currently the cross-cultural consistency of this positivity effect (PE) is still not clear. While empirical evidence for Western populations is accumulating, the validation of the PE in Asians is still rare. The current study compared 28 younger and 24 older Chinese adults in the processing of emotional information. Eye-tracking and recognition data of participants in processing pictures with positive, negative, or neutral emotional information sampled from the International Affective Picture System were collected. The results showed less negative bias for emotional attention in older adults than in younger adults, whereas for emotional recognition, only younger adults showed a negative bias while older adults showed no bias between negative and positive emotional information. Overall, compared with younger adults, emotional processing was more positive in older adults. It was concluded that Chinese older adults show a PE. |
Narcisse P. Bichot; Matthew T. Heard; Ellen M. DeGennaro; Robert Desimone A source for feature-based attention in the prefrontal cortex Journal Article In: Neuron, vol. 88, no. 4, pp. 832–844, 2015. @article{Bichot2015, In cluttered scenes, we can use feature-based attention to quickly locate a target object. To understand how feature attention is used to find and select objects for action, we focused on the ventral prearcuate (VPA) region of prefrontal cortex. In a visual search task, VPA cells responded selectively to search cues, maintained their feature selectivity throughout the delay and subsequent saccades, and discriminated the search target in their receptive fields with a time course earlier than in FEF or IT cortex. Inactivation of VPA impaired the animals' ability to find targets, and simultaneous recordings in FEF revealed that the effects of feature attention were eliminated while leaving the effects of spatial attention in FEF intact. Altogether, the results suggest that VPA neurons compute the locations of objects with the features sought and send this information to FEF to guide eye movements to those relevant stimuli. |
Julia Boggia; Jelena Ristic Social event segmentation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 68, no. 4, pp. 731–744, 2015. @article{Boggia2015, Humans are experts in understanding social environments. What perceptual and cognitive processes enable such competent evaluation of social information? Here we show that environmental content is grouped into units of "social perception", which are formed automatically based on the attentional priority given to social information conveyed by eyes and faces. When asked to segment a clip showing a typical daily scenario, participants were remarkably consistent in identifying the boundaries of social events. Moreover, at those social event boundaries, participants' eye movements were reliably directed to actors' eyes and faces. Participants' indices of attention measured during the initial passive viewing, reflecting natural social behaviour, also showed a remarkable correspondence with overt social segmentation behaviour, reflecting the underlying perceptual organization. Together, these data show that dynamic information is automatically organized into meaningful social events on an ongoing basis, strongly suggesting that the natural comprehension of social content in daily life might fundamentally depend on this underlying grouping process. |
Ali Borji; Andreas Lennartz; Marc Pomplun What do eyes reveal about the mind? Algorithmic inference of search targets from fixations Journal Article In: Neurocomputing, vol. 149, pp. 788–799, 2015. @article{Borji2015, We address the question of inferring the search target from fixation behavior in visual search. Such inference is possible since during search, our attention and gaze are guided toward visual features similar to those in the search target. We strive to answer two fundamental questions: what are the most powerful algorithmic principles for this task, and how does their performance depend on the amount of available eye movement data and the complexity of the target objects? In the first two experiments, we choose a random-dot search paradigm to eliminate contextual influences on search. We present an algorithm that correctly infers the target pattern up to 50 times as often as a previously employed method and promises sufficient power and robustness for interface control. Moreover, the current data suggest a principal limitation of target inference that is crucial for interface design: if the target pattern exceeds a certain spatial complexity level, only a subpattern tends to guide the observers' eye movements, which drastically impairs target inference. In the third experiment, we show that it is possible to predict search targets in natural scenes using pattern classifiers and classic computer vision features significantly above chance. The availability of compelling inferential algorithms could initiate a new generation of smart, gaze-controlled interfaces and wearable visual technologies that deduce from their users' eye movements the visual information for which they are looking. In a broader perspective, our study shows directions for efficient intent decoding from eye movements. |
Sarah Chabal; Viorica Marian Speakers of different languages process the visual world differently Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 3, pp. 539–550, 2015. @article{Chabal2015, Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. |
Sarah Chabal; Scott R. Schroeder; Viorica Marian Audio-visual object search is changed by bilingual experience Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 8, pp. 2684–2693, 2015. @article{Chabal2015a, The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment. |
Jason L. Chan; Michael J. Koval; Thilo Womelsdorf; Stephen G. Lomber; Stefan Everling Dorsolateral prefrontal cortex deactivation in monkeys reduces preparatory beta and gamma power in the superior colliculus Journal Article In: Cerebral Cortex, vol. 25, no. 12, pp. 4704–4714, 2015. @article{Chan2015, Cognitive control requires the selection and maintenance of task-relevant stimulus-response associations, or rules. The dorsolateral prefrontal cortex (DLPFC) has been implicated by lesion, functional imaging, and neurophysiological studies to be involved in encoding rules, but the mechanisms by which it modulates other brain areas are poorly understood. Here, the functional relationship of the DLPFC with the superior colliculus (SC) was investigated by bilaterally deactivating the DLPFC while recording local field potentials (LFPs) in the SC in monkeys performing an interleaved pro- and antisaccade task. Event-related LFPs showed differences between pro- and antisaccades and responded prominently to stimulus presentation. LFP power after stimulus onset was higher for correct saccades than erroneous saccades. Deactivation of the DLPFC did not affect stimulus onset related LFP activity, but reduced high beta (20-30 Hz) and high gamma (60-150 Hz) power during the preparatory period for both pro- and antisaccades. Spike rate during the preparatory period was positively correlated with gamma power and this relationship was attenuated by DLPFC deactivation. These results suggest that top-down control of the SC by the DLPFC may be mediated by beta oscillations. |
Steve W. C. Chang; Nicholas A. Fagan; Koji Toda; Amanda V. Utevsky; John M. Pearson; Michael L. Platt Neural mechanisms of social decision-making in the primate amygdala Journal Article In: Proceedings of the National Academy of Sciences, vol. 112, no. 52, pp. 16012–16017, 2015. @article{Chang2015, Significance: Making social decisions requires evaluation of benefits and costs to self and others. Long associated with emotion and vigilance, neurons in primate amygdala also signal reward and punishment as well as information about the faces and eyes of others. Here we show that neurons in the basolateral amygdala signal the value of rewards for self and others when monkeys make social decisions. These value-mirroring neurons reflected monkeys' tendency to make prosocial decisions on a momentary as well as long-term basis. We also found that delivering the social peptide oxytocin into basolateral amygdala enhances both prosocial tendencies and attention to the recipients of prosocial decisions. Our findings endorse the amygdala as a critical neural nexus regulating social decisions. Social decisions require evaluation of costs and benefits to oneself and others. Long associated with emotion and vigilance, the amygdala has recently been implicated in both decision-making and social behavior. The amygdala signals reward and punishment, as well as facial expressions and the gaze of others. Amygdala damage impairs social interactions, and the social neuropeptide oxytocin (OT) influences human social decisions, in part, by altering amygdala function. Here we show in monkeys playing a modified dictator game, in which one individual can donate or withhold rewards from another, that basolateral amygdala (BLA) neurons signaled social preferences both across trials and across days. BLA neurons mirrored the value of rewards delivered to self and others when monkeys were free to choose but not when the computer made choices for them. 
We also found that focal infusion of OT unilaterally into BLA weakly but significantly increased both the frequency of prosocial decisions and attention to recipients for context-specific prosocial decisions, endorsing the hypothesis that OT regulates social behavior, in part, via amygdala neuromodulation. Our findings demonstrate both neurophysiological and neuroendocrinological connections between primate amygdala and social decisions. |
Philippe Chassy; Trym A. E. Lindell; Jessica A. Jones; Galina V. Paramei A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study Journal Article In: Perception, vol. 44, no. 8-9, pp. 1085–1097, 2015. @article{Chassy2015, Image aesthetic pleasure (AP) is conjectured to be related to image visual complexity (VC). The aim of the present study was to investigate whether (a) two image attributes, AP and VC, are reflected in eye-movement parameters; and (b) subjective measures of AP and VC are related. Participants (N=26) explored car front images (M=50) while their eye movements were recorded. Following image exposure (10 seconds), its VC and AP were rated. Fixation count was found to positively correlate with the subjective VC and its objective proxy, JPEG compression size, suggesting that this eye-movement parameter can be considered an objective behavioral measure of VC. AP, in comparison, positively correlated with average dwelling time. Subjective measures of AP and VC were related too, following an inverted U-shape function best-fit by a quadratic equation. In addition, AP was found to be modulated by car prestige. Our findings reveal a close relationship between subjective and objective measures of complexity and aesthetic appraisal, which is interpreted within a prototype-based theory framework. |
Magdalena Chechlacz; Glyn W. Humphreys; Stamatios N. Sotiropoulos; Christopher Kennard; Dario Cazzoli Structural organization of the corpus callosum predicts attentional shifts after continuous theta burst stimulation Journal Article In: Journal of Neuroscience, vol. 35, no. 46, pp. 15353–15368, 2015. @article{Chechlacz2015, Repetitive transcranial magnetic stimulation (rTMS) applied over the right posterior parietal cortex (PPC) in healthy participants has been shown to trigger a significant rightward shift in the spatial allocation of visual attention, temporarily mimicking spatial deficits observed in neglect. In contrast, rTMS applied over the left PPC triggers a weaker or null attentional shift. However, large interindividual differences in responses to rTMS have been reported. Studies measuring changes in brain activation suggest that the effects of rTMS may depend on both interhemispheric and intrahemispheric interactions between cortical loci controlling visual attention. Here, we investigated whether variability in the structural organization of human white matter pathways subserving visual attention, as assessed by diffusion magnetic resonance imaging and tractography, could explain interindividual differences in the effects of rTMS. Most participants showed a rightward shift in the allocation of spatial attention after rTMS over the right intraparietal sulcus (IPS), but the size of this effect varied largely across participants. Conversely, rTMS over the left IPS resulted in strikingly opposed individual responses, with some participants responding with rightward and some with leftward attentional shifts. We demonstrate that microstructural and macrostructural variability within the corpus callosum, consistent with differential effects on cross-hemispheric interactions, predicts both the extent and the direction of the response to rTMS. 
Together, our findings suggest that the corpus callosum may have a dual inhibitory and excitatory function in maintaining the interhemispheric dynamics that underlie the allocation of spatial attention. |
Cheng Chen; Xianghui Chen; Min Gao; Qiong Yang; Hongmei Yan Contextual influence on the tilt after-effect in foveal and para-foveal vision Journal Article In: Neuroscience Bulletin, vol. 31, no. 3, pp. 307–316, 2015. @article{Chen2015c, A sensory stimulus can only be properly interpreted in light of the stimuli that surround it in space and time. The tilt illusion (TI) and tilt after-effect (TAE) provide good evidence that the perception of a target depends strongly on both its spatial and temporal context. In previous studies, the TI and TAE have typically been investigated separately, so little is known about their co-effects on visual perception and information processing mechanisms. Here, we considered the influence of the spatial context and the temporal effect together and asked how center-surround context affects the TAE in foveal and para-foveal vision. Our results showed that different center-surround spatial patterns significantly affected the TAE for both foveal and para-foveal vision. In the fovea, the TAE was mainly produced by central adaptive gratings. Cross-oriented surroundings significantly inhibited the TAE, and iso-oriented surroundings slightly facilitated it; surround inhibition was much stronger than surround facilitation. In the para-fovea, the TAE was mainly decided by the surrounding patches. Likewise, a cross-oriented central patch inhibited the TAE, and an iso-oriented one facilitated it, but there was no significant difference between inhibition and facilitation. Our findings demonstrated, at the perceptual level, that our visual system adopts different mechanisms to process consistent or inconsistent central-surround orientation information and that the unequal magnitude of surround inhibition and facilitation is vitally important for the visual system to improve the detectability or discriminability of novel or incongruent stimuli. |
Lijing Chen; Yufang Yang Emphasizing the only character: EMPHASIS, attention and contrast Journal Article In: Cognition, vol. 136, pp. 222–227, 2015. @article{Chen2015b, In conversations, pragmatic information such as emphasis is important for identifying the speaker's/writer's intention. The present research examines the cognitive processes involved in emphasis processing. Participants read short discourses that introduced one or two character(s), with the character being emphasized or non-emphasized in subsequent texts. Eye movements showed that: (1) early processing of the emphasized word was facilitated, which may have been due to increased attention allocation, whereas (2) late integration of the emphasized character was inhibited when the discourse involved only this character. These results indicate that it is necessary to include other characters as contrastive characters to facilitate the integration of an emphasized character, and support the existence of a relationship between Emphasis and Contrast computation. Taken together, our findings indicate that both attention allocation and contrast computation are involved in emphasis processing, and support the incremental nature of sentence processing and the importance of contrast in discourse comprehension. |
Nigel T. M. Chen; Patrick J. F. Clarke; Tamara L. Watson; Colin MacLeod; Adam J. Guastella Attentional bias modification facilitates attentional control mechanisms: Evidence from eye tracking Journal Article In: Biological Psychology, vol. 104, pp. 139–146, 2015. @article{Chen2015d, Social anxiety is thought to be maintained by biased attentional processing towards threatening information. Research has further shown that the experimental attenuation of this bias, through the implementation of attentional bias modification (ABM), may serve to reduce social anxiety vulnerability. However, the mechanisms underlying ABM remain unclear. The present study examined whether inhibitory attentional control was associated with ABM. A non-clinical sample of participants was randomly assigned to receive either ABM or a placebo task. To assess pre-post changes in attentional control, participants were additionally administered an emotional antisaccade task. ABM participants exhibited a subsequent shift in attentional bias away from threat as expected. ABM participants further showed a subsequent decrease in antisaccade cost, indicating a general facilitation of inhibitory attentional control. Mediational analysis revealed that the shift in attentional bias following ABM was independent of the change in attentional control. The findings suggest that the mechanisms of ABM are multifaceted. |
Sheng-Chang Chen; Mi-Shan Hsiao; Hsiao-Ching She In: Computers in Human Behavior, vol. 53, pp. 169–180, 2015. @article{Chen2015e, This study examined the effectiveness of the different spatial abilities of high school students who constructed their understanding of the atomic orbital concepts and mental models after learning with multimedia learning materials presented in static and dynamic modes of 3D representation. A total of 60 high school students participated in this study and were randomly assigned into static and dynamic 3D representation groups. The dependent variables included a pre-test and post-test on atomic orbital concepts, an atomic orbital mental model construction test, and students' eye-movement behaviors. Results showed that students who learned with dynamic 3D representation allocated a significantly greater amount of attention, exhibited better performance on the mental model test, and constructed more sophisticated 3D hybridizations of the orbital mental model than the students in the static 3D group. The logistic regression result indicated that the dynamic 3D representation group students' number of saccades and number of re-readings were positive predictors, while the number of fixations was the negative predictor, for developing the students' 3D mental models of an atomic orbital. High-spatial-ability students outperformed the low-spatial-ability students on the atomic orbital conceptual test and mental model construction, while both types of students allocated similar amounts of attention to the 3D representations. Our results demonstrated that low-spatial-ability students' eye movement behaviors positively correlate with their performance on the atomic orbital concept test and the mental model construction. |
Xinxin Chen; Hongyan Yu; Fang Yu What is the optimal number of response alternatives for rating scales? From an information processing perspective Journal Article In: Journal of Marketing Analytics, vol. 3, no. 2, pp. 69–78, 2015. @article{Chen2015f, Rating scales are measuring instruments that are widely used in social science research. However, many different rating scale formats are used in the literature, differing specifically in the number of response alternatives offered. Previous studies on the optimal number of response alternatives have focused exclusively on the participants' final response results, rather than on the participants' information processing. We used an eye-tracking study to explore this issue from an information processing perspective. We analyzed the information processing in six scales with different response alternatives. We compared the reaction times, net acquiescence response styles, extreme response styles and proportional changes in the response alternatives of the six scales. Our results suggest that the optimal number of response alternatives is five. |
Joseph D. Chisholm; Alan Kingstone Action video game players' visual search advantage extends to biologically relevant stimuli Journal Article In: Acta Psychologica, vol. 159, pp. 93–99, 2015. @article{Chisholm2015, Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. |
Joseph D. Chisholm; Alan Kingstone Action video games and improved attentional control: Disentangling selection-and response-based processes Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 5, pp. 1430–1436, 2015. @article{Chisholm2015a, Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus–response processes that impact human performance. |
Wonil Choi; John M. Henderson Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing Journal Article In: Neuropsychologia, vol. 75, pp. 109–118, 2015. @article{Choi2015, Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. |
Sabine Born; Eckart Zimmermann; Patrick Cavanagh The spatial profile of mask-induced compression for perception and action Journal Article In: Vision Research, vol. 110, pp. 128–141, 2015. @article{Born2015, Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as saccadic compression of space. We have recently demonstrated that similar mislocalizations of flashed stimuli can be observed in the absence of saccades: brief probes were attracted towards a visual reference when followed by a mask. To examine the spatial profile of this new phenomenon of mask-induced compression, here we used a pair of references that draw the probe into the gap between them. Strong compression was found when we masked the probe and presented it following a reference pair, whereas little or no compression occurred for the probe without the reference pair or without the mask. When the two references were arranged vertically, horizontal mislocalizations prevailed. That is, probes presented to the left or right of the vertically arranged references were "drawn in" to be seen aligned with the references. In contrast, when we arranged the two references horizontally, we found vertical compression for stimuli presented above or below the references. Finally, when participants were to indicate the perceived probe location by making an eye movement towards it, saccade landing positions were compressed in a similar fashion as perceptual judgments, confirming the robustness of mask-induced compression. Our findings challenge pure oculomotor accounts of saccadic compression of space that assume a vital role for saccade-specific signals such as corollary discharge or the updating of eye position. Instead, we suggest that saccade- and mask-induced compression both reflect how the visual system deals with disruptions. |
Alison C. Bowling; Peter Lindsay; Belinda G. Smith; Kerri Storok Saccadic eye movements as indicators of cognitive function in older adults Journal Article In: Aging, Neuropsychology, and Cognition, vol. 22, no. 2, pp. 201–219, 2015. @article{Bowling2015, Older adults appear to have greater difficulty ignoring distractions during day-to-day activities than younger adults. To assess these effects of age, the ability of adults aged between 50 and 80 years to ignore distracting stimuli was measured using the antisaccade and oculomotor capture tasks. In the antisaccade task, observers are instructed to look away from a visual cue, whereas in the oculomotor capture task, observers are instructed to look toward a colored singleton in the presence of a concurrent onset distractor. Index scores of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) were compared with capture errors, and with prosaccade errors on the antisaccade task. A higher percentage of capture errors were made on the oculomotor capture tasks by the older members of the cohort compared to the younger members. There was a weak relationship between the attention index and capture errors, but the visuospatial/constructional index was the strongest predictor of prosaccade error rate in the antisaccade task. The saccade reaction times (SRTs) of correct initial saccades in the oculomotor capture task were poorly correlated with age, and with the neuropsychological tests, but prosaccade SRTs in both tasks moderately correlated with antisaccade error rate. These results were interpreted in terms of a competitive integration (or race) model. Any variable that reduces the strength of the top-down neural signal to produce a voluntary saccade, or that increases saccade speed, will enhance the likelihood that a reflexive saccade to a stimulus with an abrupt onset will occur. |
Senne Braem; Ena Coenen; Klaas Bombeke; Marlies E. Bochove; Wim Notebaert Open your eyes for prediction errors Journal Article In: Cognitive, Affective and Behavioral Neuroscience, vol. 15, no. 2, pp. 374–380, 2015. @article{Braem2015, Previous studies have demonstrated that autonomic arousal is increased following correct task performance on a difficult, relative to an easy, task. Here, we hypothesized that this arousal response reflects the (relative) surprise of correct performance following a difficult versus an easy task. Following this line of reasoning, we would expect to find a reversed pattern following erroneous responses, because errors are less expected during an easy than during a difficult task. To test this, participants performed a flanker task while pupil size was measured online. As predicted, the results demonstrated that pupil size was larger following difficult (incongruent) correct trials than following easy (congruent) correct trials, but smaller following difficult than following easy incorrect trials. Moreover, participants with larger congruency effects, and hence a larger difference in outcome expectancies between the two trial types, showed larger differences in pupil size after both correct and incorrect responses, further corroborating the idea that pupil size increased as a measure of performance prediction errors. |
Signe Bray; Ramsha Almas; Aiden E. G. F. Arnold; Giuseppe Iaria; Glenda Macqueen Intraparietal sulcus activity and functional connectivity supporting spatial working memory manipulation Journal Article In: Cerebral Cortex, vol. 25, no. 5, pp. 1252–1264, 2015. @article{Bray2015, The intraparietal sulcus (IPS) is recruited during tasks requiring attention, maintenance and manipulation of information in working memory (WM). While WM tasks often show broad bilateral engagement along the IPS, topographic maps of contralateral (CL) visual space have been identified along the IPS, similar to retinotopic maps in visual cortex. In the present study, we asked how these visuotopic IPS regions are differentially involved in the maintenance and manipulation of spatial information in WM. Visuotopic mapping was performed in 26 participants to define regions of interest along the IPS, corresponding to previously described IPS0-4. In a separate task, we showed that while maintaining the location of a briefly flashed target in WM preferentially engaged CL IPS, manipulation of spatial information by mentally rotating the target around a circle engaged bilateral IPS, peaking in IPS1 in most participants. Functional connectivity analyses showed increased interaction between the IPS and prefrontal regions during manipulation, as well as interhemispheric interactions. Two control tasks demonstrated that covert attention shifts, and nonspatial manipulation (arithmetic), engaged patterns of IPS activation and connectivity that were distinct from WM manipulation. These findings add to our understanding of the role of IPS in spatial WM maintenance and manipulation. |
Eli Brenner; Jeroen B. J. Smeets How moving backgrounds influence interception Journal Article In: PLoS ONE, vol. 10, no. 3, pp. e0119903, 2015. @article{Brenner2015, Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. On-going arm movements are also influenced by motion in the surrounding. Fast responses to motion in the surrounding could help cope with moving obstacles and with the consequences of changes in one's eye orientation and vantage point. To further evaluate how motion in the surrounding influences interceptive movements we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surrounding, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation with other information about self-motion. |
Scott L. Brincat; Earl K. Miller Frequency-specific hippocampal-prefrontal interactions during associative learning Journal Article In: Nature Neuroscience, vol. 18, no. 4, pp. 576–581, 2015. @article{Brincat2015, Much of our knowledge of the world depends on learning associations (for example, face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive and mnemonic abilities are akin to those of humans. We found functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, whereas HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC to HPC directional influences and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC and increased with learning. Rapid object associative learning may occur in PFC, whereas HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands. |
Michael Browning; Timothy E. Behrens; Gerhard Jocham; Jill X. O'Reilly; Sonia J. Bishop Anxious individuals have difficulty learning the causal statistics of aversive environments Journal Article In: Nature Neuroscience, vol. 18, no. 4, pp. 590–596, 2015. @article{Browning2015, Statistical regularities in the causal structure of the environment enable us to predict the probable outcomes of our actions. Environments differ in the extent to which action-outcome contingencies are stable or volatile. Difficulty in being able to use this information to optimally update outcome predictions might contribute to the decision-making difficulties seen in anxiety. We tested this using an aversive learning task manipulating environmental volatility. Human participants low in trait anxiety matched updating of their outcome predictions to the volatility of the current environment, as predicted by a Bayesian model. Individuals with high trait anxiety showed less ability to adjust updating of outcome expectancies between stable and volatile environments. This was linked to reduced sensitivity of the pupil dilatory response to volatility, potentially indicative of altered norepinephrinergic responsivity to changes in this aspect of environmental information. |
Berno Bucker; Artem V. Belopolsky; Jan Theeuwes Distractors that signal reward attract the eyes Journal Article In: Visual Cognition, vol. 23, no. 1-2, pp. 1–24, 2015. @article{Bucker2015, Salient stimuli and stimuli associated with reward have the ability to attract both attention and the eyes. The current study exploited the effects of reward on the well-known global effect in which two objects appear simultaneously in close spatial proximity. Participants always made saccades to a predefined target, while the colour of a nearby distractor signalled the reward available (high/low) for that trial. Unlike previous reward studies, in the current study these distractors never served as targets. We show that participants made fast saccades towards the target. However, saccades landed significantly closer to the high compared to the low reward signalling distractor. This reward effect was already present in the first block and remained stable throughout the experiment. Instead of landing exactly in between the two stimuli (i.e., the classic global effect), the fastest eye movements landed closer towards the reward signalling distractor. Results of a control experiment, in which no distractor-reward contingencies were present, confirmed that the observed effects were driven by reward and not by physical salience. Furthermore, there were trial-by-trial reward priming effects in which saccades landed significantly closer to the high instead of the low reward signalling distractor when the same distractor was presented on two consecutive trials. Together the results imply that a reward signalling stimulus that was never part of the task set has an automatic effect on the oculomotor system. |
Carsten Buhmann; Wolfgang H. Zangemeister; Stefanie Kraft; Kim Hinkelmann; Sven Krause; Christian Gerloff Visual attention and saccadic oculomotor control in Parkinson's disease Journal Article In: European Neurology, vol. 73, no. 5-6, pp. 283–293, 2015. @article{Buhmann2015, In patients with Parkinson's disease (PD) we aimed at differentiating the relation between selective visual attention, deficits of programming and dynamics of saccadic eye movements while searching for a target, and hand-reaction time as well as hand-movement time. Visual attention is crucial for concentrating selectively on one aspect of the visual field while ignoring other aspects. Eye movements are anatomically and functionally related to mechanisms of visual attention. Saccadic dysfunction might confound selective visual attention in PD. Methods: We studied visual selective attention in 22 medicated PD patients (clinical ON status, mild to moderate disease severity) and 22 age-matched controls. We looked for possible interferences through oculomotor deficits. Two tasks were compared: free viewing of photographs and time-optimal visual search of a hidden target. Visual search times (VST), task-related dynamics of saccades, and hand-reaction and hand-movement times were analyzed. Results: In the free viewing task, mild to moderately affected PD patients did not differ statistically from healthy subjects with respect to saccade dynamics. However, patients differed significantly from healthy subjects in the time-optimal visual search task, with 25% lower rates of successful searches. Hand-movement reaction time did not differ between the groups, whereas hand-movement execution time was significantly prolonged in PD patients. Conclusion: Saccadic oculomotor control and hand-movement reaction times were intact, whereas in our less severely affected, treated PD patients, visual selective attention was not. The highly reduced successful search rate might be related to disturbed programming and delayed execution of saccades during time-optimal visual search, due to decreased execution of serial-order sequential generation of saccades. |
Melissa C. Bulloch; Steven L. Prime; Jonathan J. Marotta Anticipatory gaze strategies when grasping moving objects Journal Article In: Experimental Brain Research, vol. 233, no. 12, pp. 3413–3423, 2015. @article{Bulloch2015, Grasping moving objects involves both spatial and temporal predictions. The hand is aimed at a location where it will meet the object, rather than the position at which the object is seen when the reach is initiated. Previous eye–hand coordination research from our laboratory, utilizing stationary objects, has shown that participants' initial gaze tends to be directed towards the eventual location of the index finger when making a precision grasp. This experiment examined how the speed and direction of a computer-generated block's movement affect gaze and selection of grasp points. Results showed that when the target first appeared, participants anticipated the target's eventual movement by fixating well ahead of its leading edge in the direction of eventual motion. Once target movement began, participants shifted their fixation to the leading edge of the target. Upon reach initiation, participants then fixated towards the top edge of the target. As seen in our previous work with stationary objects, final fixations tended towards the final index finger contact point on the target. Moreover, gaze and kinematic analyses revealed that it was direction that most influenced fixation locations and grasp points. Interestingly, participants fixated further ahead of the target's leading edge when the direction of motion was leftward, particularly at the slower speed—possibly the result of mechanical constraints of intercepting leftward-moving targets with one's right hand. |
Michele Burigo; Pia Knoeferle Visual attention during spatial language comprehension Journal Article In: PLoS ONE, vol. 10, no. 1, pp. e0115758, 2015. @article{Burigo2015, Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener's visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to which extent visual attention is guided by words in the utterance and to which extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders' visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial's verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. |
Melanie R. Burke; Charlotte Poyser; Ingo Schiessl Age-related deficits in visuospatial memory are due to changes in preparatory set and eye-hand coordination Journal Article In: Journals of Gerontology - Series B Psychological Sciences and Social Sciences, vol. 70, no. 5, pp. 682–690, 2015. @article{Burke2015, Objectives. Healthy aging is associated with a decline in visuospatial working memory. The nature of the changes leading to this decline in response of the eye and/or hand is still under debate. This study aims to establish whether impairments observed in performance on cognitive tasks are due to actual cognitive effects or are caused by motor-related eye–hand coordination. Methods. We implemented a computerized version of the Corsi span task. The eye and touch responses of healthy young and older adults were recorded to a series of remembered targets on a screen. Results. Results revealed differences in fixation strategies between the young and the old with increasing cognitive demand, which resulted in higher error rates in the older group. We observed increasing reaction times and durations between fixations and touches to targets, with increasing memory load and delays in both the eye and the hand in the older adults. Discussion. Our results show that older adults have difficulty maintaining a "preparatory set" for durations longer than 5 s and with increases in memory load. Attentional differences cannot account for our results, and differences in age groups appear to be principally memory related. Older adults reveal poorer eye–hand coordination, which is further confounded by increasing delay and complexity. |
Zoya Bylinskii; Phillip Isola; Constance Bainbridge; Antonio Torralba; Aude Oliva Intrinsic and extrinsic effects on image memorability Journal Article In: Vision Research, vol. 116, pp. 165–178, 2015. @article{Bylinskii2015, Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. Here we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Second, we test two extrinsic factors: image context and observer behavior. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can automatically predict how changes in context change the memorability of natural images. In addition to context, we study a second extrinsic factor: where an observer looks while memorizing an image. It turns out that eye movements provide additional information that can predict whether or not an image will be remembered, on a trial-by-trial basis. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete and fine-grained model of image memorability than previously available. |
Laura Cacciamani; Paige E. Scalf; Mary A. Peterson Neural evidence for competition-mediated suppression in the perception of a single object Journal Article In: Cortex, vol. 72, pp. 124–139, 2015. @article{Cacciamani2015, Multiple objects compete for representation in visual cortex. Competition may also underlie the perception of a single object. Computational models implement object perception as competition between units on opposite sides of a border. The border is assigned to the winning side, which is perceived as an object (or "figure"), whereas the other side is perceived as a shapeless ground. Behavioral experiments suggest that the ground is inhibited to a degree that depends on the extent to which it competed for object status, and that this inhibition is relayed to low-level brain areas. Here, we used fMRI to assess activation for ground regions of task-irrelevant novel silhouettes presented in the left or right visual field (LVF or RVF) while participants performed a difficult task at fixation. Silhouettes were designed so that the insides would win the competition for object status. The outsides (grounds) suggested portions of familiar objects in half of the silhouettes and novel objects in the other half. Because matches to object memories affect the competition, these two types of silhouettes operationalized, respectively, high competition and low competition from the grounds. The results showed that activation corresponding to ground regions was reduced for high- versus low-competition silhouettes in V4, where receptive fields (RFs) are large enough to encompass the familiar objects in the grounds, and in V1/V2, where RFs are much smaller. These results support a theory of object perception involving competition-mediated ground suppression and feedback from higher to lower levels. This pattern of results was observed in the left hemisphere (RVF), but not in the right hemisphere (LVF). 
One explanation of the lateralized findings is that task-irrelevant silhouettes in the RVF captured attention, allowing us to observe these effects, whereas those in the LVF did not. Experiment 2 provided preliminary behavioral evidence consistent with this possibility. |
Christophe Carlei; Dirk Kerzel The effect of gaze direction on the different components of visuo-spatial short-term memory Journal Article In: Laterality, vol. 20, no. 6, pp. 738–754, 2015. @article{Carlei2015, Cerebral asymmetries and cortical regions associated with the upper and lower visual field were investigated using shifts of gaze. Earlier research suggests that gaze shifts to the left or right increase activation of specific areas of the contralateral hemisphere. We asked whether looking at one quadrant of the visual field facilitates recall in various visuo-spatial tasks. The different components of visuo-spatial memory were investigated by probing memory for a stimulus matrix in each quadrant of the screen. First, memory for visual images or patterns was probed with a matrix of squares that was simultaneously presented and had to be reconstructed by mouse click. Better memory performance was found in the upper left quadrant compared to the three other quadrants indicating that both laterality and elevation are important. Second, positional memory was probed by subsequently presenting squares which prevented the formation of a visual image. Again, we found that gaze to the upper left facilitated performance. Third, memory for object-location binding was probed by asking observers to associate objects to particular locations. Higher performance was found with gaze directed to the lower quadrants irrespective of lateralization, confirming that only some components of visual short-term memory have shared neural substrates. |
Nathan Caruana; Jon Brock; Alexandra Woolgar A frontotemporoparietal network common to initiating and responding to joint attention bids Journal Article In: NeuroImage, vol. 108, pp. 34–46, 2015. @article{Caruana2015, Joint attention is a fundamental cognitive ability that supports daily interpersonal relationships and communication. The Parallel Distributed Processing model (PDPM) postulates that responding to (RJA) and initiating (IJA) joint attention are predominantly supported by posterior-parietal and frontal regions respectively. It also argues that these neural networks integrate during development, supporting the parallel processes of self- and other-attention representation during interactions. However, direct evidence for the PDPM is limited due to a lack of ecologically valid experimental paradigms that can capture both RJA and IJA. Building on existing interactive approaches, we developed a virtual reality paradigm where participants engaged in an online interaction to complete a cooperative task. By including tightly controlled baseline conditions to remove activity associated with non-social task demands, we were able to directly contrast the neural correlates of RJA and IJA to determine whether these processes are supported by common brain regions. Both RJA and IJA activated broad frontotemporoparietal networks. Critically, a conjunction analysis identified that a subset of these regions were common to both RJA and IJA. This right-lateralised network included the dorsal portion of the middle frontal gyrus (MFG), inferior frontal gyrus (IFG), middle temporal gyrus (MTG), precentral gyrus, posterior superior temporal sulcus (pSTS), temporoparietal junction (TPJ) and precuneus. Additional activation was observed in this network for IJA relative to RJA at MFG, IFG, TPJ and precuneus. 
This is the first imaging study to directly investigate the neural correlates common to RJA and IJA engagement, and thus support the assumption that a broad integrated network underlies the parallel aspects of both initiating and responding to joint attention. |
Nathan Caruana; Peter Lissa; Genevieve McArthur The neural time course of evaluating self-initiated joint attention bids Journal Article In: Brain and Cognition, vol. 98, pp. 43–52, 2015. @article{Caruana2015a, Background: During interactions with other people, we constantly evaluate the significance of our social partner's gaze shifts in order to coordinate our behaviour with their perspective. In this study, we used event-related potentials (ERPs) to investigate the neural time course of evaluating gaze shifts that signal the success of self-initiated joint attention bids. Method: Nineteen participants were allocated to a "social" condition, in which they played a cooperative game with an anthropomorphic virtual character whom they believed was controlled by a human partner in a nearby laboratory. Participants were required to initiate joint attention towards a target. In response, the virtual partner shifted his gaze congruently towards the target - thus achieving joint attention - or incongruently towards a different location. Another 19 participants completed the same task in a non-social "control" condition, in which arrows, believed to be controlled by a computer program, pointed at a location that was either congruent or incongruent with the participant's target fixation. Results: In the social condition, ERPs to the virtual partner's incongruent gaze shifts evoked significantly larger P350 and P500 peaks compared to congruent gaze shifts. This P350 and P500 morphology was absent in both the congruent and incongruent control conditions. Discussion: These findings are consistent with previous claims that gaze shifts differing in their social significance modulate central-parietal ERPs 350 ms following the onset of the gaze shift. Our control data highlights the social specificity of the observed P350 effect, ruling out explanations pertaining to attention modulation or error detection. |
Michele Cascardi; Davine Armstrong; Leeyup Chung; Denis Pare Pupil response to threat in trauma-exposed individuals with or without PTSD Journal Article In: Journal of Traumatic Stress, vol. 28, pp. 370–374, 2015. @article{Cascardi2015, An infrequently studied and potentially promising physiological marker for posttraumatic stress disorder (PTSD) is pupil response. This study tested the hypothesis that pupil responses to threat would be significantly larger in trauma-exposed individuals with PTSD compared to those without PTSD. Eye-tracking technology was used to evaluate pupil response to threatening and neutral images. Recruited for participation were 40 trauma-exposed individuals; 40.0% (n = 16) met diagnostic criteria for PTSD. Individuals with PTSD showed significantly more pupil dilation to threat-relevant stimuli compared to the neutral elements (Cohen's d = 0.76), and compared to trauma-exposed controls (Cohen's d = 0.75). Pupil dilation significantly accounted for 12% of variability in PTSD after time elapsed since most recent trauma, cumulative violence exposure, and trait anxiety were statistically adjusted. The final logistic regression model accounted for 85% of variability in PTSD status and correctly classified 93.8% of individuals with PTSD and 95.8% of those without. Pupil reactivity showed promise as a physiological marker for PTSD. |
Dario Cazzoli; Simon Jung; Thomas Nyffeler; Tobias Nef; Pascal Wurtz; Urs P. Mosimann; René M. Müri The role of the right frontal eye field in overt visual attention deployment as assessed by free visual exploration Journal Article In: Neuropsychologia, vol. 74, pp. 37–41, 2015. @article{Cazzoli2015, The frontal eye field (FEF) is known to be involved in saccade generation and visual attention control. Studies applying covert attentional orienting paradigms have shown that the right FEF is involved in attentional shifts to both the left and the right hemifield. In the current study, we aimed at examining the effects of inhibitory continuous theta burst (cTBS) transcranial magnetic stimulation over the right FEF on overt attentional orienting, as measured by a free visual exploration paradigm. In forty-two healthy subjects, free visual exploration of naturalistic pictures was tested in three conditions: (1) after cTBS over the right FEF; (2) after cTBS over a control site (vertex); and, (3) without any stimulation. The results showed that cTBS over the right FEF, but not cTBS over the vertex, triggered significant changes in the spatial distribution of the cumulative fixation duration. Compared to the group without stimulation and the group with cTBS over the vertex, cTBS over the right FEF decreased cumulative fixation duration in the left and in the right peripheral regions, and increased cumulative fixation duration in the central region. The present study supports the view that the right FEF is involved in the bilateral control of not only covert, but also of overt, peripheral visual attention. |
Paul Roux; Christine Passerieux; Franck Ramus An eye-tracking investigation of intentional motion perception in patients with schizophrenia Journal Article In: Journal of Psychiatry and Neuroscience, vol. 40, no. 2, pp. 118–125, 2015. @article{Roux2015, BACKGROUND: Schizophrenia has been characterized by an impaired attribution of intentions in social interactions. However, it remains unclear to what extent poor performance may be due to low-level processes or to later, higher-level stages, or to what extent the deficit reflects an over- (hypermentalization) or underattribution of intentions (hypomentalization). METHODS: We evaluated intentional motion perception using a chasing detection paradigm in individuals with schizophrenia or schizoaffective disorder and in healthy controls while eye movements were recorded. Smooth pursuit was measured as a control task. Eye-tracking was used to dissociate ocular from cognitive stages of processing. RESULTS: We included 27 patients with schizophrenia, 2 with schizoaffective disorder and 29 controls in our analysis. As a group, patients had lower sensitivity to the detection of chasing than controls, but showed no bias toward the "chasing present" response. Patients showed a slightly different visual exploration strategy, which affected their ocular sensitivity to chasing. They also showed a decreased cognitive sensitivity to chasing that was not explained by differences in smooth pursuit ability, in visual exploration strategy or in general cognitive abilities. LIMITATIONS: It is not clear whether the deficit in intentional motion detection demonstrated in this study might be explained by a general deficit in motion perception in individuals with schizophrenia or whether it is specific to the social domain. 
CONCLUSION: Participants with schizophrenia showed a hypomentalization deficit: they adopted suboptimal visual exploration strategies and had difficulties deciding whether a chase was present or not, even when their eye movement revealed that chasing information had been seen correctly. |
Annie Roy-Charland; Melanie Perron; Jessica Boulard; Justin Chamberland; Nichola Hoffman If I point, do they look?: The impact of attention-orientation strategies on text exploration during shared book reading Journal Article In: Reading and Writing, vol. 28, no. 9, pp. 1285–1305, 2015. @article{RoyCharland2015, The current study examined the effects of pointing to the words and of highlighting the text by recording eye movements while children in preschool, Grade 1 and Grade 2 were read storybooks of two levels of difficulty. For all children, pointing to and highlighting the text increased the amount of time spent on, and the number of fixations to, the printed text compared to when there was no intervention. Furthermore, with difficult text, more time and more fixations on the text were observed when it was pointed to than when it was highlighted. For preschoolers, even with the increased attention on the text from pointing to and highlighting the words, the fixations did not match the narration. First and second graders, with the difficult book, made more matching fixations both when the printed text was pointed to and when it was highlighted than when no intervention was used. Additionally, more matching fixations were made when the printed text was highlighted than when it was pointed to. Future research is required to examine the effects of attention-orienting strategies on reading-related outcomes. |
Annie Roy-Charland; Melanie Perron; Cheryl Young; Jessica Boulard; Justin A. Chamberland The confusion of fear and surprise: A developmental study of the perceptual-attentional limitation hypothesis using eye movements Journal Article In: The Journal of Genetic Psychology, vol. 176, no. 5, pp. 281–298, 2015. @article{RoyCharland2015a, The goal of the present study was to test the Perceptual-Attentional Limitation Hypothesis in children and adults by manipulating the distinctiveness between expressions and recording eye movements. Children 3-5 and 9-11 years old as well as adults were presented pairs of expressions and required to identify a target emotion. Children 3-5 years old were less accurate than those 9-11 years old and adults. All children viewed pictures longer than adults but did not spend more time attending to the relevant cues. For all participants, accuracy for the recognition of fear was lower than for surprise when the distinctive cue was in the brow only. Participants also took longer and spent more time in both the mouth and brow zones when the distinctive cue was in the brow only than when a cue was in the mouth or in both areas. Adults and children 9-11 years old made more comparisons between the expressions when fear comprised a single distinctive cue in the brow than when the distinctive cue was in the mouth only or when both cues were present. Children 3-5 years old made more comparisons when the cue was in the brow only than when both cues were present. The results of the present study extend the Perceptual-Attentional Limitation Hypothesis, showing the importance of both decoder and stimuli characteristics, and an interaction between the two. |
Anthony J. Ryals; Jane X. Wang; Kelly L. Polnaszek; Joel L. Voss Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration Journal Article In: Hippocampus, vol. 25, no. 9, pp. 1028–1041, 2015. @article{Ryals2015, Although hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. |
Rachel A. Ryskin; Aaron S. Benjamin; Jonathan Tullis; Sarah Brown-Schmidt Perspective-taking in comprehension, production, and memory: An individual differences approach Journal Article In: Journal of Experimental Psychology: General, vol. 144, no. 5, pp. 898–915, 2015. @article{Ryskin2015, The ability to take a different perspective is central to a tremendous variety of higher level cognitive skills. To communicate effectively, we must adopt the perspective of another person both while speaking and listening. To ensure the successful retrieval of critical information in the future, we must adopt the perspective of our own future self and construct cues that will survive the passage of time. Here we explore the cognitive underpinnings of perspective-taking across a set of tasks that involve communication and memory, with an eye toward evaluating the proposal that perspective-taking is domain-general (e.g., Wardlow, 2013). We measured participants' perspective-taking ability in a language production task, a language comprehension task, and a memory task in which people generated their own cues for the future. Surprisingly, there was little variance common to the 3 tasks, a result that suggests that perspective-taking is not domain-general. Performance in the language production task was predicted by a measure of working memory, whereas performance in the cue-generation memory task was predicted by a combination of working memory and long-term memory measures. These results indicate that perspective-taking relies on differing cognitive capacities in different situations. |
Chihiro Saegusa; Janis Intoy; Shinsuke Shimojo Visual attractiveness is leaky: The asymmetrical relationship between face and hair Journal Article In: Frontiers in Psychology, vol. 6, pp. 377, 2015. @article{Saegusa2015, Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known "what is beautiful is good" stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one's hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom up, task-irrelevant fashion. This raises an intriguing possibility that a task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications for the asymmetry between face and hair, and for perceptual integration between them in general. 
Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment. |
Carola Salvi; Emanuela Bricolo; Steven L. Franconeri; John Kounios; Mark Beeman Sudden insight is associated with shutting out visual inputs Journal Article In: Psychonomic Bulletin & Review, vol. 22, no. 6, pp. 1814–1819, 2015. @article{Salvi2015, Creative ideas seem often to appear when we close our eyes, stare at a blank wall, or gaze out of a window—all signs of shutting out distractions and turning attention inward. Prior research has demonstrated that attention-related brain areas are differently active when people solve problems with sudden insight (the Aha! phenomenon), relative to deliberate, analytic solving. We directly investigated the relationship between attention deployment and problem solving by recording eye movements and blinks, which are overt indicators of attention, as people solved short, visually presented problems. In the preparation period, before problems eventually solved by insight, participants blinked more frequently and longer, and made fewer fixations, than before problems eventually solved by analysis. Immediately prior to solutions, participants blinked longer and looked away from the problem more often when solving by insight than when solving analytically. These phenomena extend prior research with a direct demonstration of dynamic differences in attention as people solve problems with sudden insight versus analytically. |
Sebastian Sandoval Similä; Robert D. McIntosh Look where you're going! Perceptual attention constrains the online guidance of action Journal Article In: Vision Research, vol. 110, pp. 179–189, 2015. @article{SandovalSimilae2015, Action guidance, like perceptual discrimination, requires selective attention. Perception is enhanced at the target of a reaching movement, but it is not known whether selecting an object for perception reciprocally prioritises it for action. Two theoretical frameworks, the premotor theory and the Visual Attention Model, predict that this reciprocal relation should hold. We tested the influence of perceptual attention on the online control of reaching. In Experiment 1, participants attended covertly to a flanker on one or other side of a fixated target, prior to reaching for that target, which occasionally jumped, after reach onset, to the attended or non-attended side. Participants corrected their reaches for almost all target jumps. In Experiment 2, we required covert monitoring of the flanker during reaching. This concurrent perceptual task globally reduced correction behaviour, indicating that perception and action share a common attentional resource. Corrections were especially unlikely toward the attended side. This is explained by assuming that perceptual attention primed an action toward the attended location and that the participant inhibited this primed action. The data thus imply that perceptual selection constrains online action guidance, as predicted by the premotor theory and the VAM. We further argue that the fact that participants can inhibit a location within the action system but simultaneously maintain its prioritisation for perceptual monitoring, is easier to reconcile with the VAM than with the premotor theory. |
Jessica Taubert; Goedele Van Belle; Wim Vanduffel; Bruno Rossion; Rufin Vogels The effect of face inversion for neurons inside and outside fMRI-defined face-selective cortical regions Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1644–1655, 2015. @article{Taubert2015, It is widely believed that face processing in the primate brain occurs in a network of category-selective cortical regions. Combined functional MRI (fMRI)-single-cell recording studies in macaques have identified high concentrations of neurons that respond more to faces than objects within face-selective patches. However, cells with a preference for faces over objects are also found scattered throughout inferior temporal (IT) cortex, raising the question whether face-selective cells inside and outside of the face patches differ functionally. Here, we compare the properties of face-selective cells inside and outside of face-selective patches in the IT cortex by means of an image manipulation that reliably disrupts behavior toward face processing: inversion. We recorded IT neurons from two fMRI-defined face-patches (ML and AL) and a region outside of the face patches (herein labeled OUT) during upright and inverted face stimulation. Overall, turning faces upside down reduced the firing rate of face-selective cells. However, there were differences among the recording regions. First, the reduced neuronal response for inverted faces was independent of stimulus position, relative to fixation, in the face-selective patches (ML and AL) only. Additionally, the effect of inversion for face-selective cells in ML, but not those in AL or OUT, was impervious to whether the neurons were initially searched for using upright or inverted stimuli. 
Collectively, these results show that face-selective cells differ in their functional characteristics depending on their anatomicofunctional location, suggesting that upright faces are preferably coded by face-selective cells inside but not outside of the fMRI-defined face-selective regions of the posterior IT cortex. |
Jessica Taubert; Goedele Van Belle; Wim Vanduffel; Bruno Rossion; Rufin Vogels Neural correlate of the Thatcher face illusion in a monkey face-selective patch Journal Article In: Journal of Neuroscience, vol. 35, no. 27, pp. 9872–9878, 2015. @article{Taubert2015a, Compelling evidence that our sensitivity to facial structure is conserved across the primate order comes from studies of the “Thatcher face illusion”: humans and monkeys notice changes in the orientation of facial features (e.g., the eyes) only when faces are upright, not when faces are upside down. Although it is presumed that face perception in primates depends on face-selective neurons in the inferior temporal (IT) cortex, it is not known whether these neurons respond differentially to upright faces with inverted features. Using microelectrodes guided by functional MRI mapping, we recorded cell responses in three regions of monkey IT cortex. We report an interaction in the middle lateral face patch (ML) between the global orientation of a face and the local orientation of its eyes, a response profile consistent with the perception of the Thatcher illusion. This increased sensitivity to eye orientation in upright faces resisted changes in screen location and was not found among face-selective neurons in other areas of IT cortex, including neurons in another face-selective region, the anterior lateral face patch. We conclude that the Thatcher face illusion is correlated with a pattern of activity in the ML that encodes faces according to a flexible holistic template. |
Masahiko Terao; Ikuya Murakami; Shin'ya Nishida Enhancement of motion perception in the direction opposite to smooth pursuit eye movement Journal Article In: Journal of Vision, vol. 15, no. 13, pp. 1–11, 2015. @article{Terao2015, When eyes track a moving target, a stationary background environment moves in the direction opposite to the eye movement on the observer's retina. Here, we report a novel effect in which smooth pursuit can enhance the retinal motion in the direction opposite to eye movement, under certain conditions. While performing smooth pursuit, the observers were presented with a counterphase grating on the retina. The counterphase grating consisted of two drifting component gratings: one drifting in the direction opposite to the eye movement and the other drifting in the same direction as the pursuit. Although the overall perceived motion direction should be ambiguous if only retinal information is considered, our results indicated that the stimulus almost always appeared to be moving in the direction opposite to the pursuit direction. This effect was ascribable to the perceptual dominance of the environmentally stationary component over the other. The effect was robust at suprathreshold contrasts, but it disappeared at lower overall contrasts. The effect was not associated with motion capture by a reference frame served by peripheral moving images. Our findings also indicate that the brain exploits eye-movement information not only for eye-contingent image motion suppression but also to develop an ecologically plausible interpretation of ambiguous retinal motion signals. Based on this biological assumption, we argue that visual processing has the functional consequence of reducing the apparent motion blur of a stationary background pattern during eye movements and that it does so through integration of the trajectories of pattern and color signals. |
Katharine N. Thakkar; Jeffrey D. Schall; Gordon D. Logan; Sohee Park Cognitive control of gaze in bipolar disorder and schizophrenia Journal Article In: Psychiatry Research, vol. 225, no. 3, pp. 254–262, 2015. @article{Thakkar2015a, The objective of the present study was to compare two components of executive functioning, response monitoring and inhibition, in bipolar disorder (BP) and schizophrenia (SZ). The saccadic countermanding task is a translational paradigm optimized for detecting subtle abnormalities in response monitoring and response inhibition. We have previously reported countermanding performance abnormalities in SZ, but the degree to which these impairments are shared by other psychotic disorders is unknown. 18 BP, 17 SZ, and 16 demographically matched healthy controls (HC) participated in a saccadic countermanding task. Performance on the countermanding task is approximated as a race between movement generation and inhibition processes; this model provides an estimate of the time needed to cancel a planned movement. Response monitoring was assessed by the reaction time (RT) adjustments based on trial history. Like SZ patients, BP patients needed more time to cancel a planned movement. The two patient groups had equivalent inhibition efficiency. On trial history-based RT adjustments, however, we found a trend towards exaggerated trial history-based slowing in SZ compared to BP. Findings have implications for understanding the neurobiology of cognitive control, for defining the etiological overlap between schizophrenia and bipolar disorder, and for developing pharmacological treatments of cognitive impairments. |
Katharine N. Thakkar; Jeffrey D. Schall; Gordon D. Logan; Sohee Park Response inhibition and response monitoring in a saccadic double-step task in schizophrenia Journal Article In: Brain and Cognition, vol. 95, pp. 90–98, 2015. @article{Thakkar2015b, Background: Cognitive control impairments are linked to functional outcome in schizophrenia. The goal of the current study was to investigate precise abnormalities in two aspects of cognitive control: reactively changing a prepared response, and monitoring performance and adjusting behavior accordingly. We adapted an oculomotor task from neurophysiological studies of the cellular basis of cognitive control in nonhuman primates. Methods: 16 medicated outpatients with schizophrenia (SZ) and 18 demographically-matched healthy controls performed the modified double-step task. In this task, participants were required to make a saccade to a visual target. Infrequently, the target jumped to a new location and participants were instructed to rapidly inhibit and change their response. A race model provided an estimate of the time needed to cancel a planned movement. Response monitoring was assessed by measuring reaction time (RT) adjustments based on trial history. Results: SZ patients had normal visually-guided saccadic RTs but required more time to switch the response to the new target location. Additionally, the estimated latency of inhibition was longer in patients and related to employment. Finally, although both groups slowed down on trials that required inhibiting and changing a response, patients showed exaggerated performance-based adjustments in RTs, which was correlated with positive symptom severity. Conclusions: SZ patients have impairments in rapidly inhibiting eye movements and show idiosyncratic response monitoring. 
These results are consistent with functional abnormalities in a network involving cortical oculomotor regions, the superior colliculus, and basal ganglia, as described in neurophysiological studies of non-human primates using an identical paradigm, and provide a translational bridge for understanding cognitive symptoms of SZ. |
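The race model invoked in the two Thakkar entries above treats each trial as an independent race between a go process and a stop process, and yields an estimate of the time needed to cancel a planned movement (the stop-signal reaction time, SSRT) via the so-called integration method. A minimal sketch, assuming a per-participant set of go-trial reaction times and an observed probability of failing to stop at a given stop-signal delay (the function name and inputs are illustrative, not from the papers):

```python
import numpy as np

def ssrt_integration(go_rts, ssd, p_respond):
    """Integration-method SSRT estimate under the independent race model:
    SSRT = (p_respond-th quantile of the go RT distribution) - SSD,
    where p_respond is the probability of responding despite a stop signal
    presented at stop-signal delay `ssd`."""
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    # Find the go RT at which the cumulative go distribution equals p_respond
    idx = int(np.ceil(p_respond * len(go_rts))) - 1
    idx = max(idx, 0)
    return go_rts[idx] - ssd
```

For example, with go RTs uniform over 100-199 ms, a stop-signal delay of 50 ms, and a 50% failure-to-stop rate, the estimate is the 50th-percentile go RT (149 ms) minus the delay, i.e. 99 ms.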
Kate M. Thompson; Tracy L. Taylor Memory instruction interacts with both visual and motoric inhibition of return Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 3, pp. 804–818, 2015. @article{Thompson2015, In the item-method directed forgetting paradigm, the magnitude of inhibition of return (IOR) is larger after an instruction to forget (F) than after an instruction to remember (R). In the present experiments, we further investigated this increased magnitude of IOR after F than after R memory instructions, to determine whether this F > R IOR pattern occurs only for the motoric form of IOR, as predicted, or also for the visual form. In three experiments, words were presented in one of two peripheral locations, followed by either an F or an R memory instruction. Then, a target appeared either at the same location as the previous word or at the other location. In Experiment 1, participants maintained fixation throughout the trial until the target appeared, at which point they made a saccade to the target. In Experiment 2, they maintained fixation throughout the entire trial and made a manual localization response to the target. The F > R IOR difference in reaction times occurred for both the saccadic and manual responses, suggesting that memory instructions modify both motoric and visual forms of IOR. In Experiment 3, participants made a perceptual discrimination response to report the identity of a target while the eyes remained fixed. The F > R IOR difference also occurred for these manual discrimination responses, increasing our confidence that memory instructions modify the visual form of IOR. We relate our findings to postulated differences in attentional withdrawal following F and R instructions and consider the implications of the findings for successful forgetting. |
Andreza Sartori; Victoria Yanulevskaya; Almila Akdag Salah; Jasper Uijlings; Elia Bruni; Nicu Sebe Affective analysis of abstract paintings using statistical analysis and art theory Journal Article In: ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, pp. 1–27, 2015. @article{Sartori2015, When artists express their feelings through the artworks they create, it is believed that the resulting works transform into objects with “emotions” capable of conveying the artists' mood to the audience. There is little to no dispute about this belief: Regardless of the artwork, genre, time, and origin of creation, people from different backgrounds are able to read the emotional messages. This holds true even for the most abstract paintings. Could this idea be applied to machines as well? Can machines learn what makes a work of art “emotional”? In this work, we employ a state-of-the-art recognition system to learn which statistical patterns are associated with positive and negative emotions on two different datasets that comprise professional and amateur abstract artworks. Moreover, we analyze and compare two different annotation methods in order to establish the ground truth of positive and negative emotions in abstract art. Additionally, we use computer vision techniques to quantify which parts of a painting evoke positive and negative emotions. We also demonstrate how the quantification of evidence for positive and negative emotions can be used to predict which parts of a painting people prefer to focus on. This method opens new opportunities for research into why a specific painting is perceived as emotional at global and local scales. |
David J. Schaeffer; Lingxi Chi; Cynthia E. Krafft; Qingyang Li; Nicolette F. Schwarz; Jennifer E. Mcdowell Individual differences in working memory moderate the relationship between prosaccade latency and antisaccade error rate Journal Article In: Psychophysiology, vol. 52, no. 4, pp. 605–608, 2015. @article{Schaeffer2015, Cognitive control is required for flexible responses in changing environments and can be assessed by measuring antisaccade error rate. Considerable variance in antisaccade error rate is observed in healthy participants, which motivated the current study to explore the cognitive factors affecting antisaccade performance. Relationships exist between prosaccade latency and antisaccade error rate, with faster prosaccade latencies linked to more antisaccade errors. Individual differences in working memory also impact saccadic performance. The current study tested the relationships among prosaccade latency, antisaccade error rate, and working memory in 153 healthy participants. Correlation and multiple regression analyses demonstrated that prosaccade latency predicted antisaccade error rate, and working memory moderated this relationship. These results may help elucidate individual differences in cognitive control among healthy individuals. |
Annett Schirmer; Christy Reece; Claris Zhao; Erik Ng; Esther Wu; Shih-Cheng Yen Reach out to one and you reach out to many: Social touch affects third-party observers Journal Article In: British Journal of Psychology, vol. 106, no. 1, pp. 107–132, 2015. @article{Schirmer2015, Casual social touch influences emotional perceptions, attitudes, and behaviours of interaction partners. We asked whether these influences extend to third-party observers. To this end, we developed the Social Touch Picture Set comprising line drawings of dyadic interactions, half of which entailed publicly acceptable casual touch and half of which served as no-touch controls. In Experiment 1, participants provided basic image norms by rating how frequently they observed a displayed touch gesture in everyday life and how comfortable they were observing it. Results implied that some touch gestures were observed more frequently and with greater comfort than others (e.g., handshake vs. hug). All gestures, however, obtained rating scores suitable for inclusion in Experiments 2 and 3. In Experiment 2, participants rated perceived valence, arousal, and likeability of randomly presented touch and no-touch images without being explicitly informed about touch. Image characters seemed more positive, aroused, and likeable when they touched as compared to when they did not touch. Image characters seemed more negative and aroused, but were equally likeable, when they received touch as compared to when there was no physical contact. In Experiment 3, participants passively viewed touch and no-touch images while their eye movements were recorded. Differential gazing at touch as compared to no-touch images emerged within the first 500 ms following image exposure and was largely restricted to the characters' upper body. Gazing at the touching body parts (e.g., hands) was minimal and largely unaffected by touch, suggesting that touch processing occurred outside the focus of visual attention. 
Together, these findings establish touch as an important visual cue and provide novel insights into how this cue modulates socio-emotional processing in third-party observers. |
Lisette J. Schmidt; Artem V. Belopolsky; Jan Theeuwes Potential threat attracts attention and interferes with voluntary saccades Journal Article In: Emotion, vol. 15, no. 3, pp. 329–338, 2015. @article{Schmidt2015, Several studies have shown that threatening stimuli are prioritized by the visual system. In the present study we investigated whether a stimulus associated with a threat of electrical shock attracts attention and accordingly interferes with the execution of voluntary eye movements to other locations. In 2 experiments, we showed that when a fear-conditioned and a neutral stimulus were presented simultaneously, voluntary saccades were initiated faster toward fear-conditioned compared with neutral stimuli. Moreover, saccades often erroneously went to the location of threat even when a saccade to a different location was required. This implies an automatic shift of attention to a fear-conditioned stimulus that interferes with saccade execution. The same pattern of results was found for a neutral stimulus that was always presented together with the fear-conditioned stimulus and consequently itself became associated with threat. The current results indicate that threatening stimuli attract visual attention and subsequently bias saccade target selection in a reflexive fashion. |
Tobias Schoeberl; Isabella Fuchs; Jan Theeuwes; Ulrich Ansorge Stimulus-driven attentional capture by subliminal onset cues Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 3, pp. 737–748, 2015. @article{Schoeberl2015, In two experiments, we tested whether subliminal abrupt onset cues capture attention in a stimulus-driven way. An onset cue was presented 16 ms prior to the stimulus display that consisted of clearly visible color targets. The onset cue was presented either at the same side as the target (the valid cue condition) or on the opposite side of the target (the invalid cue condition). Because the onset cue was presented 16 ms before other placeholders were presented, the cue was subliminal to the participant. To ensure that this subliminal cue captured attention in a stimulus-driven way, the cue's features did not match the top-down attentional control settings of the participants: (1) The color of the cue was always different than the color of the non-singleton targets ensuring that a top-down set for a specific color or for a singleton would not match the cue, and (2) colored targets and distractors had the same objective luminance (measured by the colorimeter) and subjective lightness (measured by flicker photometry), preventing a match between the top-down set for target and cue contrast. Even though a match between the cues and top-down settings was prevented, in both experiments, the cues captured attention, with faster response times in valid than invalid cue conditions (Experiments 1 and 2) and faster response times in valid than the neutral conditions (Experiment 2). The results support the conclusion that subliminal cues capture attention in a stimulus-driven way. |
Chris Scholes; Paul V. McGraw; Marcus Nyström; Neil W. Roach Fixational eye movements predict visual sensitivity Journal Article In: Proceedings of the Royal Society B: Biological Sciences, vol. 282, pp. 1–10, 2015. @article{Scholes2015, During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate—an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals' threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement. |
Daniel E. Schoth; H. J. Godwin; Simon P. Liversedge; Christina Liossi Eye movements during visual search for emotional faces in individuals with chronic headache Journal Article In: European Journal of Pain, vol. 19, no. 5, pp. 722–732, 2015. @article{Schoth2015, BACKGROUND: Attentional biases for pain-related information have been frequently reported in individuals with chronic pain. Recording of participants' eye movements provides a continuous measure of attention, although to date this methodology has received little use in research exploring attentional biases in chronic pain. The aim of the current investigation was to explore the specificity of attentional orienting bias using a novel visual search task while recording participant eye movement behaviours. This also allowed for the investigation of whether attentional biases for pain-related information exist in the presence of multiple stimuli competing for attention. METHODS: Twenty-three participants with chronic headache and 24 pain-free, healthy control participants were engaged in a visual search task where pain, angry, happy and neutral faces were used as both target and distractor stimuli. While completing this task, participants' eye movements were recorded. RESULTS: Supporting the adopted hypothesis, participants with chronic headache, relative to healthy controls, demonstrated a significantly higher proportion of initial fixations to target pain expressions when the pain expressions were presented in displays containing neutral-distractor faces. No significant differences were found between groups in the time taken to fixate target pain expressions (localization time). CONCLUSIONS: Individuals with chronic headache show facilitated initial orienting towards pain expressions specifically when used as targets in a visual search task. 
This study adds to a growing body of research supporting the presence of pain-related attentional biases in chronic pain as assessed via different experimental paradigms, and shows biases to exist when multiple stimuli competing for attention are presented simultaneously. |
Alexander C. Schütz; Felix Lossin; Karl R. Gegenfurtner Dynamic integration of information about salience and value for smooth pursuit eye movements Journal Article In: Vision Research, vol. 113, pp. 169–178, 2015. @article{Schuetz2015a, Eye movement behavior can be determined by bottom-up factors like visual salience and by top-down factors like expected value. These different types of signals have to be combined for the control of eye movements. In this study we investigated how smooth pursuit eye movements integrate salience and value information. Observers were asked to track a random-dot kinematogram containing two coherent motion directions. To manipulate salience, the coherence or the density of one of the motion signals was varied. To manipulate value, observers won or lost money in a separate experiment if they were tracking one or the other motion direction. Our results show that pursuit direction was initially determined only by salience. 300-400 ms after target motion onset, pursuit steered towards the rewarded direction and the salience effects disappeared. The time course of this effect depended crucially on the difficulty of segmenting the two signal directions. These results indicate that salience determines early pursuit responses in the same way as saccades with short latencies. Value information is processed more slowly and dominates pursuit after several hundred milliseconds. |
Immo Schütz; Denise Y. P. Henriques; Katja Fiehler No effect of delay on the spatial representation of serial reach targets Journal Article In: Experimental Brain Research, vol. 233, no. 4, pp. 1225–1235, 2015. @article{Schuetz2015b, When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics and especially target position relative to gaze when reaches are immediate, but that the visuo-motor system relies stronger on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings which showed a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5 or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination with a stronger reliance on allocentric information regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching. |
Hillary Schwarb; Patrick D. Watson; Kelsey Campbell; Christopher L. Shander; Jim M. Monti; Gillian E. Cooke; Jane X. Wang; Arthur F. Kramer; Neal J. Cohen Competition and cooperation among relational memory representations Journal Article In: PLoS ONE, vol. 10, no. 11, pp. e0143832, 2015. @article{Schwarb2015, Mnemonic processing engages multiple systems that cooperate and compete to support task performance. Exploring these systems' interaction requires memory tasks that produce rich data with multiple patterns of performance sensitive to different processing sub-components. Here we present a novel context-dependent relational memory paradigm designed to engage multiple learning and memory systems. In this task, participants learned unique face-room associations in two distinct contexts (i.e., different colored buildings). Faces occupied rooms as determined by an implicit gender-by-side rule structure (e.g., male faces on the left and female faces on the right) and all faces were seen in both contexts. In two experiments, we use behavioral and eye-tracking measures to investigate interactions among different memory representations in both younger and older adult populations; furthermore we link these representations to volumetric variations in hippocampus and ventromedial PFC among older adults. Overall, performance was very accurate. Successful face placement into a studied room systematically varied with hippocampal volume. Selecting the studied room in the wrong context was the most typical error. The proportion of these errors to correct responses positively correlated with ventromedial prefrontal volume. This novel task provides a powerful tool for investigating both the unique and interacting contributions of these systems in support of relational memory. |
Mojtaba Seyedhosseini; S. Shushruth; Tyler Davis; Jennifer M. Ichida; Paul A. House; Bradley Greger; Alessandra Angelucci; Tolga Tasdizen Informative features of local field potential signals in primary visual cortex during natural image stimulation Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1520–1532, 2015. @article{Seyedhosseini2015, The local field potential (LFP) is of growing importance in neurophysiology as a metric of network activity and as a readout signal for use in brain-machine interfaces. However, there are uncertainties regarding the kind and visual field extent of information carried by LFP signals, and the specific features of the LFP signal conveying such information, especially under naturalistic conditions. To address these questions, we recorded LFP responses to natural images in V1 of awake and anesthetized macaques using Utah multielectrode arrays. First, we show that it is possible to identify presented natural images from the LFP responses they evoke using trained Gabor wavelet (GW) models. Since GW models were devised to explain the spiking responses of V1 cells, this finding suggests that local spiking activity and LFPs (thought to reflect primarily local synaptic activity) carry similar visual information. Second, models trained on scalar metrics, like evoked LFP response range, provide robust image identification, supporting the informative nature of even simple LFP features. Third, image identification is robust only for the first 300 ms following image presentation, and image information is not restricted to any of the spectral bands. This suggests that the short latency broadband LFP response carries most information during natural scene viewing. Finally, best image identification was achieved by GW models incorporating information at the scale of ~0.5° in size and trained using 4 different orientations. 
This suggests that during natural image viewing LFPs carry stimulus-specific information at spatial scales corresponding to few orientation columns in macaque V1. |
Timothy J. Shakespeare; Yoni Pertzov; Keir X. X. Yong; Jennifer M. Nicholas; Sebastian J. Crutch Reduced modulation of scanpaths in response to task demands in posterior cortical atrophy Journal Article In: Neuropsychologia, vol. 68, pp. 190–200, 2015. @article{Shakespeare2015a, A difficulty in perceiving visual scenes is one of the most striking impairments experienced by patients with the clinico-radiological syndrome posterior cortical atrophy (PCA). However whilst a number of studies have investigated perception of relatively simple experimental stimuli in these individuals, little is known about multiple object and complex scene perception and the role of eye movements in posterior cortical atrophy. We embrace the distinction between high-level (top-down) and low-level (bottom-up) influences upon scanning eye movements when looking at scenes. This distinction was inspired by Yarbus (1967), who demonstrated how the location of our fixations is affected by task instructions and not only the stimulus' low level properties. We therefore examined how scanning patterns are influenced by task instructions and low-level visual properties in 7 patients with posterior cortical atrophy, 8 patients with typical Alzheimer's disease, and 19 healthy age-matched controls. Each participant viewed 10 scenes under four task conditions (encoding, recognition, search and description) whilst eye movements were recorded. The results reveal significant differences between groups in the impact of test instructions upon scanpaths. Across tasks without a search component, posterior cortical atrophy patients were significantly less consistent than typical Alzheimer's disease patients and controls in where they were looking. 
By contrast, when comparing search and non-search tasks, it was controls who exhibited the lowest between-task similarity ratings, suggesting they were better able than posterior cortical atrophy or typical Alzheimer's disease patients to respond appropriately to high-level needs by looking at task-relevant regions of a scene. Posterior cortical atrophy patients had a significant tendency to fixate upon more low-level salient parts of the scenes than controls irrespective of the viewing task. The study provides a detailed characterisation of scene perception abilities in posterior cortical atrophy and offers insights into the mechanisms by which high-level cognitive schemes interact with low-level perception. |
Annie L. Shelton; Kim M. Cornish; David E. Godler; Meaghan Clough; Claudine Kraan; Minh Bui; Joanne Fielding Delineation of the working memory profile in female FMR1 premutation carriers: The effect of cognitive load on ocular motor responses Journal Article In: Behavioural Brain Research, vol. 282, pp. 194–200, 2015. @article{Shelton2015, Fragile X mental retardation 1 (FMR1) premutation carriers (PM-carriers) are characterised as having mid-sized expansions of between 55 and 200 CGG repeats in the 5' untranslated region of the FMR1 gene. While there is evidence of executive dysfunction in PM-carriers, few studies have explicitly explored working memory capabilities in female PM-carriers. 14 female PM-carriers and 13 age- and IQ-matched healthy controls completed an ocular motor n-back working memory paradigm. This task examined working memory ability and the effect of measured increases in cognitive load. Female PM-carriers were found to have attenuated working memory capabilities. Increasing the cognitive load did not elicit the expected reciprocal increase in the task errors for female PM-carriers, as it did in controls. However female PM-carriers took longer to respond than controls, regardless of the cognitive load. Further, FMR1 mRNA levels were found to significantly predict PM-carrier response time. Although preliminary, these findings provide further evidence of executive dysfunction, specifically disruption to working memory processes, which were found to be associated with increases in FMR1 mRNA expression in female PM-carriers. With future validation, ocular motor paradigms such as the n-back paradigm will be critical to the development of behavioural biomarkers for identification of PM-carrier cognitive-affective phenotypes. |
Chengyao Shen; Xun Huang; Qi Zhao Predicting eye fixations in webpages with multi-scale features and high-level representations from deep networks Journal Article In: IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2084–2093, 2015. @article{Shen2015, In recent decades, webpages are becoming an increasingly important visual information source. Compared with natural images, webpages are different in many ways. For example, webpages are usually rich in semantically meaningful visual media (text, pictures, logos, and animations), which make the direct application of some traditional low-level saliency models ineffective. Besides, distinct web-viewing patterns such as top-left bias and banner blindness suggest different ways for predicting attention deployment on a webpage. In this study, we utilize a new scheme of low-level feature extraction pipeline and combine it with high-level representations from deep neural networks. The proposed model is evaluated on a newly published webpage saliency dataset with three popular evaluation metrics. Results show that our model outperforms other existing saliency models by a large margin and both low-and high-level features play an important role in predicting fixations on webpage. |
Alisha Siebold; Matthew David Weaver; Mieke Donk; Wieske Zoest Social salience does not transfer to oculomotor visual search Journal Article In: Visual Cognition, vol. 23, no. 8, pp. 989–1019, 2015. @article{Siebold2015, Evidence suggests that socially relevant information, such as self-referential information, leads to perceptual prioritization that is considered to be similar to prioritization based on physical stimulus salience. The current study used an oculomotor visual search paradigm to investigate whether self-prioritization affects visual selection early in time, akin to physical salience, or later in time, where it would relate to processing of top-down strategies. We report three experiments. Prior to each experiment, observers first performed a manual line-label matching task where they were asked to form associations between two orientation lines (right-tilted and left-tilted) and two labels (you and stranger). Participants then had to make a speeded eye-movement to one of the two lines without any task instructions (Experiment 1), to a dot probe target located on one of the two lines (Experiment 2), or to the line that was validly cued by its associated label (Experiment 3). We replicate previous findings with the manual stimulus-matching task. However, we did not find any evidence for increased salience of the self-relevant you stimulus during visual search, nor did we observe any self-prioritization due to later goal-driven or strategic processing. We argue that self-prioritization does not affect overt visual selection. The results suggest that the effects found in the manual matching task are unlikely to reflect self-prioritization during perceptual processing but might rather act on higher-level processing related to recognition or decision-making. |
Heida M. Sigurdardottir; David L. Sheinberg The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects Journal Article In: Journal of Cognitive Neuroscience, vol. 27, no. 7, pp. 1360–1375, 2015. @article{Sigurdardottir2015, The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom–up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention. |
J. D. Silvis; Katya Olmos-Solis; M. Donk The nature of the global effect beyond the first eye movement Journal Article In: Vision Research, vol. 108, pp. 20–32, 2015. @article{Silvis2015, When two or more visual objects appear in close proximity, the initial oculomotor response is systematically aimed at a location in between the objects, a phenomenon named the global effect. The global effect is known to arise when saccades are initiated relatively quickly, immediately after the presentation of a display, but it has also been shown that a global effect may occur much later in time, even for eye movements beyond the first. That is, when participants are searching for a complex target among complex distractor objects, it can take several eye movements to hit the target, and these eye movements mainly land at intermediate locations. It is debatable whether these findings are caused by the same mechanisms as those involved in the more typical global effect studies, studies in which much simpler search tasks are employed. In the current two experiments, we examined whether and under which circumstances a global effect can be found for a second oculomotor response in a search display containing two simple objects. Experiment 1 showed that the global effect only occurs when the presentation of the target and distractor objects is delayed, until after the first oculomotor response is initiated. Experiment 2 demonstrated that identity information, rather than spatial information, is crucial for the occurrence of the global effect. These results suggest that the global effect is not due to a failure to dissociate between the locations of multiple objects, but a failure to determine which one is the target. |
Jeroen D. Silvis; Artem V. Belopolsky; Jozua W. I. Murris; Mieke Donk The effects of feature-based priming and visual working memory on oculomotor capture Journal Article In: PLoS ONE, vol. 10, no. 11, pp. e0142696, 2015. @article{Silvis2015a, Recently, it has been demonstrated that objects held in working memory can influence rapid oculomotor selection. This has been taken as evidence that perceptual salience can be modified by active working memory representations. The goal of the present study was to examine whether these results could also be caused by feature-based priming. In two experiments, participants were asked to saccade to a target line segment of a certain orientation that was presented together with a to-be-ignored distractor. Both objects were given a task-irrelevant color that varied per trial. In a secondary task, a color had to be memorized, and that color could either match the color of the target, match the color of the distractor, or it did not match the color of any of the objects in the search task. The memory task was completed either after the search task (Experiment 1), or before it (Experiment 2). The results showed that in both experiments the memorized color biased oculomotor selection. Eye movements were more frequently drawn towards objects that matched the memorized color, irrespective of whether the memory task was completed after (Experiment 1) or before (Experiment 2) the search task. This bias was particularly prevalent in short-latency saccades. The results show that early oculomotor selection performance is not only affected by properties that are actively maintained in working memory but also by those previously memorized. Both working memory and feature priming can cause early biases in oculomotor selection. |
Matúš Šimkovic; Birgit Träuble Pursuit tracks chase: Exploring the role of eye movements in the detection of chasing Journal Article In: PeerJ, vol. 3, pp. 1–36, 2015. @article{Simkovic2015, We explore the role of eye movements in a chase detection task. Unlike previous studies, which focused on overall performance as indicated by response speed and chase detection accuracy, we decompose the search process into gaze events such as smooth eye movements and use a data-driven approach to describe these gaze events separately. We measured the eye movements of four human subjects engaged in a chase detection task displayed on a computer screen. The subjects were asked to detect two chasing rings among twelve other randomly moving rings. Using principal component analysis and support vector machines, we examined the template and classification images that describe various stages of the detection process. We showed that the subjects mostly search for pairs of rings that move one after another in the same direction at a distance of 3.5-3.8 degrees. To find such pairs, the subjects first looked for regions with a high ring density and then pursued the rings in this region. Most of these groups consisted of two rings. Three subjects preferred to pursue the pair as a single object, while the remaining subject pursued the group by alternating gaze between the two individual rings. In the discussion, we argue that subjects do not compare the movement of the pursued pair to a single preformed template that describes a chasing motion. Rather, subjects bring certain hypotheses about what motion may qualify as a chase and then, through feedback, learn to look for a motion pattern that maximizes their performance. |
Jaana Simola; Kevin Le Fevre; Jari Torniainen; Thierry Baccino Affective processing in natural scene viewing: Valence and arousal interactions in eye-fixation-related potentials Journal Article In: NeuroImage, vol. 106, pp. 21–33, 2015. @article{Simola2015, Attention is drawn to emotionally salient stimuli. The present study investigates processing of emotionally salient regions during free viewing of emotional scenes that were categorized according to the two-dimensional model comprising valence (unpleasant, pleasant) and arousal (high, low). Recent studies have reported interactions between these dimensions, indicative of stimulus-evoked approach or withdrawal tendencies. We addressed the valence and arousal effects when emotional items were embedded in complex real-world scenes by analyzing both eye movement behavior and eye-fixation-related potentials (EFRPs) time-locked to the critical event of fixating the emotionally salient items for the first time. Both data sets showed an interaction between the valence and arousal dimensions. First, the fixation rates and gaze durations on emotionally salient regions were enhanced for unpleasant versus pleasant images in the high arousal condition. In the low arousal condition, both measures were enhanced for pleasant versus unpleasant images. Second, the EFRP results at 140-170 ms (P2) over the central site showed stronger responses for high versus low arousing images in the unpleasant condition. In addition, the parietal LPP responses at 400-500 ms post-fixation were enhanced for stimuli reflecting congruent stimulus dimensions, that is, stronger responses for high versus low arousing images in the unpleasant condition and stronger responses for low versus high arousing images in the pleasant condition. The present findings support the interactive two-dimensional approach, according to which the integration of valence and arousal recruits brain regions associated with action tendencies of approach or withdrawal. |
Chris R. Sims The cost of misremembering: Inferring the loss function in visual working memory Journal Article In: Journal of Vision, vol. 15, no. 3, pp. 1–27, 2015. @article{Sims2015, Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. |
Michael Stengel; Pablo Bauszat; Martin Eisemann; Elmar Eisemann; Marcus Magnor Temporal video filtering and exposure control for perceptual motion blur Journal Article In: IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 5, pp. 663–671, 2015. @article{Stengel2015, We propose the computation of a perceptual motion blur in videos. Our technique takes the predicted eye motion into account when watching the video. Compared to traditional motion blur recorded by a video camera our approach results in a perceptual blur that is closer to reality. This postprocess can also be used to simulate different shutter effects or for other artistic purposes. It handles real and artificial video input, is easy to compute and has a low additional cost for rendered content. We illustrate its advantages in a user study using eye tracking. |
Tobias Stevens; Damien Brevers; Christopher D. Chambers; Aureliu Lavric; Ian P. L. McLaren; Myriam Mertens; Xavier Noël; Frederick Verbruggen How does response inhibition influence decision making when gambling? Journal Article In: Journal of Experimental Psychology: Applied, vol. 21, no. 1, pp. 15–36, 2015. @article{Stevens2015, Recent research suggests that response inhibition training can alter impulsive and compulsive behavior. When stop signals are introduced in a gambling task, people not only become more cautious when executing their choice responses, they also prefer lower bets when gambling. Here, we examined how stopping motor responses influences gambling. Experiment 1 showed that the reduced betting in stop-signal blocks was not caused by changes in information sampling styles or changes in arousal. In Experiments 2a and 2b, people preferred lower bets when they occasionally had to stop their response in a secondary decision-making task but not when they were instructed to respond as accurately as possible. Experiment 3 showed that merely introducing trials on which subjects could not gamble did not influence gambling preferences. Experiment 4 demonstrated that the effect of stopping on gambling generalized to different populations. Further, 2 combined analyses suggested that the effect of stopping on gambling preferences was reliable but small. Finally, Experiment 5 showed that the effect of stopping on gambling generalized to a different task. On the basis of our findings and earlier research, we propose that the presence of stop signals influences gambling by reducing approach behavior and altering the motivational value of the gambling outcome. |
Emma E. M. Stewart; Anna Ma-Wyatt The spatiotemporal characteristics of the attentional shift relative to a reach Journal Article In: Journal of Vision, vol. 15, no. 5, pp. 1–17, 2015. @article{Stewart2015, While the attentional shift preceding a saccadic eye movement has been well documented, the mechanisms surrounding the attentional shift preceding a reach are not well understood. It is unknown whether these mechanisms may be the same as those used in perceptual tasks, or those used in the planning of a saccade. We mapped the spatiotemporal properties of attention relative to a reach to determine the time course of attentional facilitation for hand movements alone. Participants had to reach toward a target and during the reach a perceptual probe could appear at one of six locations around the target, and at nine temporal offsets relative to the cue. Results showed a consistent pattern of facilitation in the planning stages of the reach, with attention increasing and then reaching a plateau during the completion of the movement before dropping off. These results demonstrate that planning a hand movement necessitates a shift in attention across the visual field around 150 ms before the onset of a reach. While these results are broadly consistent with the results of experiments mapping attentional shifts for saccades, the spatiotemporal profile of facilitation found shows that reaching without a concurrent eye movement also causes shifts in attention across the visual field. These results also suggest that the profile of the attentional shift preceding and during a hand movement is different at different locations across the visual field. |
Josef Stoll; Michael Thrun; Antje Nuthmann; Wolfgang Einhäuser Overt attention in natural scenes: Objects dominate features Journal Article In: Vision Research, vol. 107, pp. 36–48, 2015. @article{Stoll2015, Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models are again on par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection - and by inference object-based attentional guidance - in natural scenes. |
Caleb E. Strait; Brianna J. Sleezer; Benjamin Y. Hayden Signatures of value comparison in ventral striatum neurons Journal Article In: PLoS Biology, vol. 13, no. 6, pp. 1–22, 2015. @article{Strait2015, The ventral striatum (VS), like its cortical afferents, is closely associated with processing of rewards, but the relative contributions of striatal and cortical reward systems remain unclear. Most theories posit distinct roles for these structures, despite their similarities. We compared responses of VS neurons to those of ventromedial prefrontal cortex (vmPFC) Area 14 neurons, recorded in a risky choice task. Five major response patterns observed in vmPFC were also observed in VS: (1) offer value encoding, (2) value difference encoding, (3) preferential encoding of chosen relative to unchosen value, (4) a correlation between residual variance in responses and choices, and (5) prominent encoding of outcomes. We did observe some differences as well; in particular, preferential encoding of the chosen option was stronger and started earlier in VS than in vmPFC. Nonetheless, the close match between vmPFC and VS suggests that cortex and its striatal targets make overlapping contributions to economic choice. |
Patrick Sturt; Nayoung Kwon The processing of raising and nominal control: An eye-tracking study Journal Article In: Frontiers in Psychology, vol. 6, pp. 331, 2015. @article{Sturt2015, According to some views of sentence processing, the memory retrieval processes involved in dependency formation may differ as a function of the type of dependency involved. For example, using closely matched materials in a single experiment, Dillon et al. (2013) found evidence for retrieval interference in subject-verb agreement, but not in reflexive-antecedent agreement. We report four eye-tracking experiments that examine reflexive-antecedent dependencies, combined with raising (e.g., "John seemed to Tom to be kind to himself…"), or nominal control (e.g., "John's agreement with Tom to be kind to himself…"). We hypothesized that dependencies involving raising would (a) be processed more quickly, and (b) be less subject to retrieval interference, relative to those involving nominal control. This is due to the fact that the interpretation of raising is structurally constrained, while the interpretation of nominal control depends crucially on lexical properties of the control nominal. The results showed evidence of interference when the reflexive-antecedent dependency was mediated by raising or nominal control, but very little evidence that could be interpreted in terms of interference for direct reflexive-antecedent dependencies that did not involve raising or control. However, there was no evidence either for greater interference, or for quicker dependency formation, for raising than for nominal control. |
Aditi Subramaniam; Sri Mahavir Agarwal; Sunil Kalmady; Venkataram Shivakumar; Harleen Chhabra; Anushree Bose; Dinakaran Damodharan; Janardhanan C. Narayanaswamy; Samuel B. Hutton; Ganesan Venkatasubramanian In: Indian Journal of Psychological Medicine, vol. 37, no. 4, pp. 419–422, 2015. @article{Subramaniam2015, Background: Deficient prefrontal cortex inhibitory control is of particular interest with regard to the pathogenesis of auditory hallucinations (AHs) in schizophrenia. Antisaccade task performance is a sensitive index of prefrontal inhibitory function and has been consistently found to be abnormal in schizophrenia. Methods: This study investigated the effect of transcranial direct current stimulation (tDCS) on antisaccade performance in 13 schizophrenia patients. Results: The tDCS resulted in significant reduction in antisaccade error percentage (t = 3.4; P = 0.005), final eye position gain (t = 2.3; P = 0.042), and AHs severity (t = 4.1; P = 0.003). Conclusion: Our results raise the possibility that improvement in antisaccade performance and severity of AH may be mechanistically related. |
Brian Sullivan; Laura Walker Comparing the fixational and functional preferred retinal location in a pointing task Journal Article In: Vision Research, vol. 116, pp. 68–79, 2015. @article{Sullivan2015a, Patients with central vision loss (CVL) typically adopt eccentric viewing strategies using a preferred retinal locus (PRL) in peripheral retina. Clinically, the PRL is defined monocularly as the area of peripheral retina used to fixate small stimuli. It is not clear if this fixational PRL describes the same portion of peripheral retina used during dynamic binocular eye-hand coordination tasks. We studied this question with four participants each with a unique CVL history. Using a scanning laser ophthalmoscope, we measured participants' monocular visual fields and the location and stability of their fixational PRLs. Participants' monocular and binocular visual fields were also evaluated using a computer monitor and eye tracker. Lastly, eye-hand coordination was tested over several trials where participants pointed to and touched a small target on a touchscreen monitor. Trials were blocked and carried out monocularly and binocularly, with a target appearing at 5° or 15° from screen center, in one of 8 locations. During pointing, our participants often exhibited long movement durations, an increased number of eye movements and impaired accuracy, especially in monocular conditions. However, these compensatory changes in behavior did not consistently worsen when loci beyond the fixational PRL were used. While fixational PRL size, location and fixation stability provide a necessary description of behavior, they are not sufficient to capture the pointing PRL used in this task. Generally, patients use a larger portion of peripheral retina than one might expect from measures of the fixational PRL alone, when pointing to a salient target without time constraints. While the fixational and pointing PRLs often overlap, the fixational PRL does not predict the large area of peripheral retina that can be used. |
Lalitta Suriya-Arunroj; Alexander Gail I plan therefore I choose: Free-choice bias due to prior action-probability but not action-value Journal Article In: Frontiers in Behavioral Neuroscience, vol. 9, pp. 315, 2015. @article{SuriyaArunroj2015, According to an emerging view, decision-making and motor planning are tightly entangled at the level of neural processing. Choice is influenced not only by the values associated with different options, but also biased by other factors. Here we test the hypothesis that preliminary action planning can induce choice biases gradually and independently of objective value when planning overlaps with one of the potential action alternatives. Subjects performed center-out reaches obeying either a clockwise or counterclockwise cue-response rule in two tasks. In the probabilistic task, a pre-cue indicated the probability of each of the two potential rules to become valid. When the subsequent rule-cue unambiguously indicated which of the pre-cued rules was actually valid (instructed trials), subjects responded faster to rules pre-cued with higher probability. When subjects were allowed to choose freely between two equally rewarded rules (choice trials) they chose the originally more likely rule more often and faster, despite the lack of an objective advantage in selecting this target. In the amount task, the pre-cue indicated the amount of potential reward associated with each rule. Subjects responded faster to rules pre-cued with higher reward amount in instructed trials of the amount task, equivalent to the more likely rule in the probabilistic task. Yet, in contrast, subjects showed hardly any choice bias and no increase in response speed in favor of the original high-reward target in the choice trials of the amount task. We conclude that free-choice behavior is robustly biased when predictability encourages the planning of one of the potential responses, while prior reward expectations without action planning do not induce such strong bias. Our results provide behavioral evidence for distinct contributions of expected value and action planning in decision-making and a tight interdependence of motor planning and action selection, supporting the idea that the underlying neural mechanisms overlap. |
Martin Szinte; Marisa Carrasco; Patrick Cavanagh; Martin Rolfs Attentional trade-offs maintain the tracking of moving objects across saccades Journal Article In: Journal of Neurophysiology, vol. 113, no. 7, pp. 2220–2231, 2015. @article{Szinte2015, In many situations like playing sports or driving a car, we keep track of moving objects, despite the frequent eye movements that drastically interrupt their retinal motion trajectory. Here we report evidence that transsaccadic tracking relies on trade-offs of attentional resources from a tracked object's motion path to its remapped location. While participants covertly tracked a moving object, we presented pulses of coherent motion at different locations to probe the allocation of spatial attention along the object's entire motion path. Changes in the sensitivity for these pulses showed that during fixation attention shifted smoothly in anticipation of the tracked object's displacement. However, just before a saccade, attentional resources were withdrawn from the object's current motion path and reflexively drawn to the retinal location the object would have after the saccade. This finding demonstrates the predictive choice the visual system makes to maintain the tracking of moving objects across saccades. |
Ryosuke Tachibana; Yasuki Noguchi Unconscious analyses of visual scenes based on feature conjunctions Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 41, no. 3, pp. 639–648, 2015. @article{Tachibana2015, To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. |
Jedediah M. Singer; Joseph R. Madsen; William S. Anderson; Gabriel Kreiman Sensitivity to timing and order in human visual cortex Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1656–1669, 2015. @article{Singer2015, Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. |
J. Suzanne Singh; Michelle C. Capozzoli; Michael D. Dodd; Debra A. Hope The effects of social anxiety and state anxiety on visual attention: Testing the vigilance–avoidance hypothesis Journal Article In: Cognitive Behaviour Therapy, vol. 44, no. 5, pp. 377–388, 2015. @article{Singh2015, A growing theoretical and research literature suggests that trait and state social anxiety can predict attentional patterns in the presence of emotional stimuli. The current study adds to this literature by examining the effects of state anxiety on visual attention and testing the vigilance–avoidance hypothesis, using a method of continuous visual attentional assessment. Participants were 91 undergraduate college students with high or low trait fear of negative evaluation (FNE), a core aspect of social anxiety, who were randomly assigned to either a high or low state anxiety condition. Participants engaged in a free view task in which pairs of emotional facial stimuli were presented and eye movements were continuously monitored. Overall, participants with high FNE avoided angry stimuli and participants with high state anxiety attended to positive stimuli. Participants with high state anxiety and high FNE were avoidant of angry faces, whereas participants with low state and low FNE exhibited a bias toward angry faces. The study provided partial support for the vigilance–avoidance hypothesis. The findings add to the mixed results in the literature that suggest that both positive and negative emotional stimuli may be important in understanding the complex attention patterns associated with social anxiety. Clinical implications and suggestions for future research are discussed. |
Lisa M. Soederberg Miller; Diana L. Cassady; Laurel A. Beckett; Elizabeth A. Applegate; Machelle D. Wilson; Tanja N. Gibson; Kathleen Ellwood Misunderstanding of front-of-package nutrition information on US food products Journal Article In: PLoS ONE, vol. 10, no. 4, pp. e0125306, 2015. @article{SoederbergMiller2015a, Front-of-package nutrition symbols (FOPs) are presumably readily noticeable and require minimal prior nutrition knowledge to use. Although there is evidence to support this notion, few studies have focused on Facts Up Front type symbols which are used in the US. Participants with varying levels of prior knowledge were asked to view two products and decide which was more healthful. FOPs on packages were manipulated so that one product was more healthful, allowing us to assess accuracy. Attention to nutrition information was assessed via eye tracking to determine what if any FOP information was used to make their decisions. Results showed that accuracy was below chance on half of the comparisons despite consulting FOPs. Negative correlations between attention to calories, fat, and sodium and accuracy indicated that consumers over-relied on these nutrients. Although relatively little attention was allocated to fiber and sugar, associations between attention and accuracy were positive. Attention to vitamin D showed no association to accuracy, indicating confusion surrounding what constitutes a meaningful change across products. Greater nutrition knowledge was associated with greater accuracy, even when less attention was paid. Individuals, particularly those with less knowledge, are misled by calorie, sodium, and fat information on FOPs. |
Maryam Soleimannejad; Mehdi Tehrani-Doost; Anahita Khorrami; Mohammad Taghi Joghataei; Ebrahim Pishyareh Evaluation of attention bias in morphine and methamphetamine abusers towards emotional scenes during early abstinence: An eye-tracking study Journal Article In: Basic and Clinical Neuroscience, vol. 6, no. 4, pp. 223–230, 2015. @article{Soleimannejad2015, INTRODUCTION: We hypothesized that inappropriate attention during the period of abstinence in individuals with substance use disorder can result in an inadequate perception of emotion and unsuitable reaction to emotional scenes. The main aim of this research was to evaluate the attentional bias towards emotional images in former substance abusers and compare it to healthy adults. METHODS: Paired images of general scenes consisting of pleasant, unpleasant, and neutral images were presented to subjects for 3 s while their attentional bias and eye movements were measured by eye tracking. The participants were 72 male adults consisting of 23 healthy controls, 24 former morphine abusers, and 25 former methamphetamine abusers. The former abusers were recruited from a private addiction quitting center and an addiction rehabilitation campus. The healthy individuals were selected from the general population. Number and duration of first fixation, duration of first gaze, and sustained attention towards emotional scenes were measured as the main variables and the data were analyzed using repeated measures ANOVA. RESULTS: A significant difference was observed between former morphine abusers and healthy controls in terms of number and duration of first fixations and first gaze duration towards pleasant images. DISCUSSION: Individuals with morphine use disorder have more problems attending to emotional images compared to methamphetamine abusers and healthy people. |
Guillermo Solovey; Guy Gerard Graney; Hakwan Lau A decisional account of subjective inflation of visual perception at the periphery Journal Article In: Attention, Perception, and Psychophysics, vol. 77, no. 1, pp. 258–271, 2015. @article{Solovey2015, Human peripheral vision appears vivid compared to foveal vision; the subjectively perceived level of detail does not seem to drop abruptly with eccentricity. This compelling impression contrasts with the fact that spatial resolution is substantially lower at the periphery. A similar phenomenon occurs in visual attention, in which subjects usually overestimate their perceptual capacity in the unattended periphery. We have previously shown that at identical eccentricity, low spatial attention is associated with liberal detection biases, which we argue may reflect inflated subjective perceptual qualities. Our computational model suggests that this subjective inflation occurs because, under the lack of attention, the trial-by-trial variability of the internal neural response is increased, resulting in more frequent surpassing of a detection criterion. In the current work, we hypothesized that the same mechanism may be at work in peripheral vision. We investigated this possibility in psychophysical experiments in which participants performed a simultaneous detection task at the center and at the periphery. Confirming our hypothesis, we found that participants adopted a conservative criterion at the center and a liberal criterion at the periphery. Furthermore, an extension of our model predicts that detection bias will be similar at the center and at the periphery if the peripheral stimuli are magnified. A second experiment successfully confirmed this prediction. These results suggest that, although other factors contribute to subjective inflation of visual perception in the periphery, such as top-down filling-in of information, the decision mechanism may be relevant too. |
Sabine Soltani; Kristin R. Newman; Leanne Quigley; Amanda Fernandez; Keith S. Dobson; Christopher Sears Temporal changes in attention to sad and happy faces distinguish currently and remitted depressed individuals from never depressed individuals Journal Article In: Psychiatry Research, vol. 230, no. 2, pp. 454–463, 2015. @article{Soltani2015, Depression is associated with attentional biases for emotional information that are proposed to reflect stable vulnerability factors for the development and recurrence of depression. A key question for researchers is whether those who have recovered from depression also exhibit attentional biases, and if so, how similar these biases are to those who are currently depressed. To address this question, the present study examined attention to emotional faces in remitted depressed (N=26), currently depressed (N=16), and never depressed (N=33) individuals. Participants viewed sets of four face images (happy, sad, threatening, and neutral) while their eye movements were tracked throughout an 8-s presentation. Like currently depressed participants, remitted depressed participants attended to sad faces significantly more than never depressed participants and attended to happy faces significantly less. Analyzing temporal changes in attention revealed that currently and remitted depressed participants did not reduce their attention to sad faces over the 8-s presentation, unlike never depressed participants. In contrast, remitted depressed participants attended to happy faces similarly to never depressed participants, increasing their attention to happy faces over the 8-s presentation. The implications for cognitive theories of depression and depression vulnerability are discussed. |
Joo-Hyun Song; Robert M. McPeek Neural correlates of target selection for reaching movements in superior colliculus Journal Article In: Journal of Neurophysiology, vol. 113, no. 5, pp. 1414–1422, 2015. @article{Song2015, We recently demonstrated that inactivation of the primate superior colliculus (SC) causes a deficit in target selection for arm-reaching movements when the reach target is located in the inactivated field (Song JH, Rafal RD, McPeek RM. Proc Natl Acad Sci USA 108: E1433–E1440, 2011). This is consistent with the notion that the SC is part of a general-purpose target selection network beyond eye movements. To understand better the role of SC activity in reach target selection, we examined how individual SC neurons in the intermediate layers discriminate a reach target from distractors. Monkeys reached to touch a color oddball target among distractors while maintaining fixation. We found that many SC neurons robustly discriminate the goal of the reaching movement before the onset of the reach even though no saccade is made. To identify these cells in the context of conventional SC cell classification schemes, we also recorded visual, delay-period, and saccade-related responses in a delayed saccade task. On average, SC cells that discriminated the reach target from distractors showed significantly higher visual and delay-period activity than nondiscriminating cells, but there was no significant difference in saccade-related activity. Whereas a majority of SC neurons that discriminated the reach target showed significant delay-period activity, all nondiscriminating cells lacked such activity. We also found that some cells without delay-period activity did discriminate the reach target from distractors. We conclude that the majority of intermediate-layer SC cells discriminate a reach target from distractors, consistent with the idea that the SC contains a priority map used for effector-independent target selection. |