All EyeLink Publications
Listed below are all 12,000+ peer-reviewed EyeLink research publications up to 2023 (including early 2024). You can search the publication library using keywords such as "Visual Search", "Smooth Pursuit", "Parkinson's", etc. You can also search for individual author names. Eye-tracking studies grouped by research area can be found on the solutions pages. If we have missed any EyeLink eye-tracking papers, please email us!
2016 |
Ping-I Lin; Cheng-Da Hsieh; Chi-Hung Juan; Md Monir Hossain; Craig A. Erickson; Yang-Han Lee; Mu-Chun Su Predicting aggressive tendencies by visual attention bias associated with hostile emotions Journal Article In: PLoS ONE, vol. 11, no. 2, pp. e0149487, 2016. @article{Lin2016, The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited for the eye-tracking study, which measured various components of social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that eye gaze duration on hostile characters was significantly inversely correlated with the AQ score, as was eye contact with an angry face. Eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies. |
Yu-Tzu Lin; Cheng-Chih Wu; Ting-Yun Hou; Yu-Chih Lin; Fang-Ying Yang; Chia-Hu Chang Tracking students' cognitive processes during program debugging-an eye-movement approach Journal Article In: IEEE Transactions on Education, vol. 59, no. 3, pp. 175–186, 2016. @article{Lin2016a, This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C programs. The path of students' gaze while following program codes was subjected to sequential analysis to reveal significant sequences of areas examined. These significant gaze path sequences were then compared to those of students with different debugging performances. The results show that, when debugging, high-performance students traced programs in a more logical manner, whereas low-performance students tended to stick to a line-by-line sequence and were unable to quickly derive the program's higher-level logic. Low-performance students also often jumped directly to certain suspected statements to find bugs, without following the program's logic. They also often needed to trace back to prior statements to recall information, and spent more time on manual computation. Based on the research results, adaptive instructional strategies and materials can be developed for students of different performance levels, to improve associated cognitive activities during debugging, which can foster learning during debugging and programming. |
Damien Litchfield; Tim Donovan Worth a quick look? Initial scene previews can guide eye movements as a function of domain-specific expertise but can also have unforeseen costs Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 7, pp. 982–994, 2016. @article{Litchfield2016, Rapid scene recognition is a global visual process we can all exploit to guide search. This ability is thought to underpin expertise in medical image perception yet there is no direct evidence that isolates the expertise-specific contribution of processing scene previews on subsequent eye movement performance. We used the flash-preview moving window paradigm (Castelhano & Henderson, 2007) to investigate this issue. Expert radiologists and novice observers underwent 2 experiments whereby participants viewed a 250-ms scene preview or a mask before searching for a target. Observers looked for everyday objects from real-world scenes (Experiment 1), and searched for lung nodules from medical images (Experiment 2). Both expertise groups exploited the brief preview of the upcoming scene to more efficiently guide windowed search in Experiment 1, but there was only a weak effect of domain-specific expertise in Experiment 2, with experts showing small improvements in search metrics with scene previews. Expert diagnostic performance was better than novices in all conditions but was not contingent on seeing the scene preview, and scene preview actually impaired novice diagnostic performance. Experiment 3 required novice and experienced observers to search for a variety of abnormalities from different medical images. Rather than maximizing the expertise-specific advantage of processing scene previews, both novices and experienced radiographers were worse at detecting abnormalities with scene previews.
We discuss how restricting access to the initial glimpse can be compensated for by subsequent search and discovery processing, but there can still be costs in integrating a fleeting glimpse of a medical scene. |
Liu D. Liu; Ralf M. Haefner; Christopher C. Pack A neural basis for the spatial suppression of visual motion perception Journal Article In: eLife, vol. 5, pp. 1–20, 2016. @article{Liu2016c, In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. |
Lu Liu; Liang She; Ming Chen; Tianyi Liu; Haidong D. Lu; Yang Dan; Mu-ming Poo Spatial structure of neuronal receptive field in awake monkey secondary visual cortex (V2) Journal Article In: Proceedings of the National Academy of Sciences, vol. 113, no. 7, pp. 1913–1918, 2016. @article{Liu2016d, Visual processing depends critically on the receptive field (RF) properties of visual neurons. However, comprehensive characterization of RFs beyond the primary visual cortex (V1) remains a challenge. Here we report fine RF structures in secondary visual cortex (V2) of awake macaque monkeys, identified through a projection pursuit regression analysis of neuronal responses to natural images. We found that V2 RFs could be broadly classified as V1-like (typical Gabor-shaped subunits), ultralong (subunits with high aspect ratios), or complex-shaped (subunits with multiple oriented components). Furthermore, single-unit recordings from functional domains identified by intrinsic optical imaging showed that neurons with ultralong RFs were primarily localized within pale stripes, whereas neurons with complex-shaped RFs were more concentrated in thin stripes. Thus, by combining single-unit recording with optical imaging and a computational approach, we identified RF subunits underlying spatial feature selectivity of V2 neurons and demonstrated the functional organization of these RF properties. |
Rong Liu; MiYoung Kwon Integrating oculomotor and perceptual training to induce a pseudofovea: A model system for studying central vision loss Journal Article In: Journal of Vision, vol. 16, no. 6, pp. 1–21, 2016. @article{Liu2016b, People with a central scotoma often adopt an eccentric retinal location (Preferred Retinal Locus, PRL) for fixation. Here, we proposed a novel training paradigm as a model system to study the nature of PRL formation and its impact on visual function. The training paradigm was designed to effectively induce a PRL at any intended retinal location by integrating oculomotor control and pattern recognition. Using a gaze-contingent display, a simulated central scotoma was induced in eight normally sighted subjects. A subject's entire peripheral visual field was blurred, except for a small circular aperture with location randomly assigned to each subject (to the left, right, above, or below the scotoma). Under this viewing condition, subjects performed a demanding oculomotor and visual recognition task. Various visual functions were tested before and after training at both PRL and non-PRL locations. After 6–10 hr of training, all subjects formed their PRL within the clear window. Both oculomotor control and visual recognition performance significantly improved. Moreover, there was considerable improvement at the PRL location in high-level functions, such as trigram letter recognition, reading, and spatial attention, but not in low-level functions, such as acuity and contrast sensitivity. Our results demonstrated that within a relatively short time, a PRL could be induced at any intended retinal location in normally sighted subjects with a simulated scotoma. Our training paradigm might not only hold promise as a model system to study the dynamic nature of PRL formation, but also serve as a rehabilitation regimen for individuals with central vision loss. |
Taosheng Liu Neural representation of object-specific attentional priority Journal Article In: NeuroImage, vol. 129, pp. 15–24, 2016. @article{Liu2016a, Humans can flexibly select locations, features, or objects in a visual scene for prioritized processing. Although it is relatively straightforward to manipulate location- and feature-based attention, it is difficult to isolate object-based selection. Because objects are always composed of features, studies of object-based selection can often be interpreted as the selection of a combination of locations and features. Here we examined the neural representation of attentional priority in a paradigm that isolated object-based selection. Participants viewed two superimposed gratings that continuously changed their color, orientation, and spatial frequency, such that the gratings traversed the same exact feature values within a trial. Participants were cued at the beginning of each trial to attend to one or the other grating to detect a brief luminance increment, while their brain activity was measured with fMRI. Using multi-voxel pattern analysis, we were able to decode the attended grating in a set of frontoparietal areas, including anterior intraparietal sulcus (IPS), frontal eye field (FEF), and inferior frontal junction (IFJ). Thus, a perceptually varying object can be represented by patterned neural activity in these frontoparietal areas. We suggest that these areas can encode attentional priority for abstract, high-level objects independent of their locations and features. |
Xin Liu; Tong Chen; Guoqiang Xie; Guangyuan Liu Contact-free cognitive load recognition based on eye movement Journal Article In: Journal of Electrical and Computer Engineering, pp. 1–8, 2016. @article{Liu2016e, Cognitive overload not only contributes to physical and mental illness, but also affects work efficiency and safety. Hence, measuring cognitive load has been an important part of cognitive load theory research. In this paper, we propose a method to identify cognitive load state from eye movement data in a noncontact manner. We designed a visual experiment to elicit high and low cognitive load states under two lighting conditions and recorded eye movement data throughout the process. Twelve salient eye movement features were selected using statistical tests. Algorithms for processing certain features are proposed to increase the recognition rate. Finally, a support vector machine (SVM) was used to classify high and low cognitive load. The experimental results show that the method achieves 90.25% accuracy in the light-controlled condition. |
Yanping Liu; Erik D. Reichle; Xingshan Li The effect of word frequency and parafoveal preview on saccade length during the reading of Chinese Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 7, pp. 1008–1025, 2016. @article{Liu2016, There are currently 2 theoretical accounts of how readers of Chinese select their saccade targets: (a) by moving their eyes to specific saccade targets (i.e., the default-targeting hypothesis) and (b) by adjusting their saccade lengths to accommodate lexical processing (i.e., the dynamic-adjustment hypothesis). In this article, we first report the results of an eye-movement experiment using a gaze-contingent boundary paradigm. This experiment demonstrates that both target-word frequency and its preview validity modulate the lengths of the saccades entering and exiting the target words, with longer saccades to/from high-frequency words when their preview was available. We then report the results of 2 simulations using computational models that instantiate the core theoretical assumptions of the default-targeting and dynamic-adjustment hypotheses. Comparisons of these simulations indicate that the dynamic-adjustment hypothesis provides a better quantitative account of the data from our experiment using fewer free parameters. We conclude by discussing evidence for dynamic saccade adjustment during the reading of alphabetic languages, and why such a heuristic may be necessary to fully explain eye-movement control during the reading of both alphabetic and nonalphabetic languages. |
Simon P. Liversedge; Denis Drieghe; Xin Li; Guoli Yan; Xuejun Bai; Jukka Hyönä Universality in eye movements and reading: A trilingual investigation Journal Article In: Cognition, vol. 147, pp. 1–20, 2016. @article{Liversedge2016, Universality in language has been a core issue in the fields of linguistics and psycholinguistics for many years (e.g., Chomsky, 1965). Recently, Frost (2012) has argued that establishing universals of process is critical to the development of meaningful, theoretically motivated, cross-linguistic models of reading. In contrast, other researchers argue that there is no such thing as universals of reading (e.g., Coltheart & Crain, 2012). Reading is a complex, visually mediated psychological process, and eye movements are the behavioural means by which we encode the visual information required for linguistic processing. To investigate universality of representation and process across languages we examined eye movement behaviour during reading of very comparable stimuli in three languages, Chinese, English and Finnish. These languages differ in numerous respects (character based vs. alphabetic, visual density, informational density, word spacing, orthographic depth, agglutination, etc.). We used linear mixed modelling techniques to identify variables that captured common variance across languages. Despite fundamental visual and linguistic differences in the orthographies, statistical models of reading behaviour were strikingly similar in a number of respects, and thus, we argue that their composition might reflect universality of representation and process in reading. |
Nathaniel Lizak; Meaghan Clough; Lynette Millist; Tomas Kalincik; Owen B. White; Joanne Fielding Impairment of smooth pursuit as a marker of early multiple sclerosis Journal Article In: Frontiers in Neurology, vol. 7, pp. 206, 2016. @article{Lizak2016, Background: Multiple sclerosis (MS) is a diffuse disease that disrupts wide-ranging cerebral networks. The control of saccades and smooth pursuit are similarly dependent upon widespread networks, with the assessment of pursuit offering an opportunity to examine feedback regulation. We sought to characterize pursuit deficits in MS and to examine their relationship with disease duration. Methods: 20 healthy controls, 20 patients with a clinically isolated syndrome (CIS), and 40 patients with clinically definite MS (CDMS) participated. 36 trials of Rashbass' step-ramp paradigm of smooth pursuit, evenly split by velocity (8.65°/s, 17.1°/s, and 25.9°/s) and ramp direction (left/right), were performed. Four parameters were measured: latency of pursuit onset, closed-loop pursuit gain, number of saccades, and summed saccade amplitudes during pursuit. For CDMS patients, these were correlated with disease duration and Expanded Disability Status Scale (EDSS) score. Results: Closed-loop pursuit gain was significantly lower in CIS than controls at all speeds. CDMS gain was lower than controls at medium pursuit velocity. CDMS patients also displayed longer pursuit latency than controls at all velocities. All patients accumulated increased summed saccade amplitudes at slow and medium pursuit speeds, and infrequent high-amplitude saccades at the fast speed. No pursuit variable significantly correlated with EDSS or disease duration in CDMS patients. Conclusions: Smooth pursuit is significantly compromised in MS from onset. Low pursuit gain and increased saccadic amplitudes may be robust markers of disseminated pathology in CIS and in more advanced MS. Pursuit may be useful in measuring early disease. |
Shawn Loewen; Solène Inceoglu The effectiveness of visual input enhancement on the noticing and L2 development of the Spanish past tense Journal Article In: Studies in Second Language Learning and Teaching, vol. 6, no. 1, pp. 89–110, 2016. @article{Loewen2016, Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a quasi-experimental design to determine the extent to which textual manipulation increases (a) learners' perception of targeted forms and (b) their knowledge of the forms. Input enhancement was used to highlight the Spanish preterit and imperfect verb forms, and an eye tracker measured the frequency and duration of participants' fixations on the targeted items. In addition, pretests and posttests of the Spanish past tense provided information about participants' knowledge of the targeted forms. Results indicate that learners were aware of the highlighted grammatical forms in the text; however, there was no difference in the amount of attention between the enhanced and unenhanced groups. In addition, both groups improved in their knowledge of the L2 forms; however, again, there was no differential improvement between the two groups. |
Cai S. Longman; Aureliu Lavric; Stephen Monsell The coupling between spatial attention and other components of task-set: A task-switching investigation Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 11, pp. 2248–2275, 2016. @article{Longman2016, Is spatial attention reconfigured independently of, or in tandem with, other task-set components when the task changes? We tracked the eyes of participants cued to perform one of three digit-classification tasks, each consistently associated with a distinct location. Previously we observed, on task switch trials, a substantial delay in orientation to the task-relevant location and tendency to fixate the location of the previously relevant task—“attentional inertia”. In the present experiments the cues specified (and instructions emphasized) the relevant location rather than the current task. In Experiment 1, with explicit spatial cues (arrows or spatial adverbs), the previously documented attentional handicaps all but disappeared, whilst the performance “switch cost” increased. Hence, attention can become decoupled from other aspects of task-set, but at a cost to the efficacy of task-set preparation. Experiment 2 used arbitrary single-letter cues with instructions and a training regime that encouraged participants to interpret the cue as indicating the relevant location rather than task. As in our previous experiments, and unlike in Experiment 1, we now observed clear switch-induced attentional delay and inertia, suggesting that the natural tendency is for spatial attention and task-set to be coupled and that only quasi-exogenous location cues decouple their reconfiguration. |
Delphine Lévy-Bencheton; Aarlenne Zein Khan; Denis Pélisson; Caroline Tilikete; Laure Pisella Adaptation of saccadic sequences with and without remapping Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 359, 2016. @article{LevyBencheton2016, It is relatively easy to adapt visually-guided saccades because the visual vector and the saccade vector match. The retinal error at the saccade landing position is compared to the prediction error, based on target location and efference copy. If these errors do not match, planning processes at the level(s) of the visual and/or motor vector processing are assumed to be inaccurate and the saccadic response is adjusted. In the case of a sequence of two saccades, the final error can be attributed to the last saccade vector or to the entire saccadic displacement. Here, we asked whether and how adaptation can occur in the case of remapped saccades, such as during the classic double-step saccade paradigm, where the visual and motor vectors of the second saccade do not coincide and so the attribution of error is ambiguous. Participants performed saccade sequences to two targets briefly presented prior to first saccade onset. The second saccade target was either briefly re-illuminated (sequential visually-guided task) or not (remapping task) upon first saccade offset. To drive adaptation, the second target was presented at a displaced location (backward jump, forward jump, or no-jump control condition) at the end of the second saccade. Pre- and post-adaptation trials were identical, without the re-appearance of the target after the second saccade. For the 1st saccade endpoints, there was no change as a function of adaptation. For the 2nd saccade, there was a similar increase in gain in the forward jump condition (52% and 61% of target jump) in the two tasks, whereas the gain decrease in the backward condition was much smaller for the remapping task than for the sequential visually-guided task (41% vs. 94%).
In other words, the absolute gain change was similar between backward and forward adaptation for remapped saccades. In conclusion, we show that remapped saccades can be adapted, suggesting that the error is attributed to the visuo-motor transformation of the remapped visual vector. The mechanisms by which adaptation takes place for remapped saccades may be similar to those of forward sequential visually-guided saccades, unlike those involved in adaptation for backward sequential visually-guided saccades. |
Gary J. Lewis; Timothy C. Bates In: Journal of Cognitive Neuroscience, vol. 28, no. 2, pp. 308–318, 2016. @article{Lewis2016, The ability to adaptively shift between exploration and exploitation control states is critical for optimizing behavioral performance. Converging evidence from primate electrophysiology and computational neural modeling has suggested that this ability may be mediated by the broad norepinephrine projections emanating from the locus coeruleus (LC) [Aston-Jones, G., & Cohen, J. D. An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annual Review of Neuroscience, 28, 403–450, 2005]. There is also evidence that pupil diameter covaries systematically with LC activity. Although imperfect and indirect, this link makes pupillometry a useful tool for studying the locus coeruleus norepinephrine system in humans and in high-level tasks. Here, we present a novel paradigm that examines how the pupillary response during exploration and exploitation covaries with individual differences in fluid intelligence during analogical reasoning on Raven's Advanced Progressive Matrices. Pupillometry was used as a noninvasive proxy for LC activity, and concurrent think-aloud verbal protocols were used to identify exploratory and exploitative solution periods. This novel combination of pupillometry and verbal protocols from 40 participants revealed a decrease in pupil diameter during exploitation and an increase during exploration. The temporal dynamics of the pupillary response were characterized by a steep increase during the transition to exploratory periods and sustained dilation for many seconds afterward, followed by a gradual return to baseline. Moreover, individual differences in the relative magnitude of pupillary dilation accounted for 16% of the variance in Advanced Progressive Matrices scores.
Assuming that pupil diameter is a valid index of LC activity, these results establish promising preliminary connections between the literature on locus coeruleus norepinephrine-mediated cognitive control and the literature on analogical reasoning and fluid intelligence. |
Chia-Ling Li; M. Pilar Aivar; Dmitry M. Kit; Matthew H. Tong; Mary Hayhoe Memory and visual search in naturalistic 2D and 3D environments Journal Article In: Journal of Vision, vol. 16, no. 8, pp. 1–20, 2016. @article{Li2016a, The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. |
Qian Li; Zhuowei Joy Huang; Kiel Christianson Visual attention toward tourism photographs with text: An eye-tracking study Journal Article In: Tourism Management, vol. 54, pp. 243–258, 2016. @article{Li2016b, This study examines consumers' visual attention toward tourism photographs with text naturally embedded in landscapes and their perceived advertising effectiveness. Eye-tracking is employed to record consumers' visual attention and a questionnaire is administered to acquire information about the perceived advertising effectiveness. The impacts of text elements are examined by two factors: viewers' understanding of the text language (understand vs. not understand), and the number of textual messages (single vs. multiple). Findings indicate that text within the landscapes of tourism photographs draws the majority of viewers' visual attention, irrespective of whether or not participants understand the text language. People spent more time viewing photographs with text in a known language compared to photographs with an unknown language, and more time viewing photographs with a single textual message than those with multiple textual messages. Viewers reported higher perceived advertising effectiveness toward tourism photographs that included text in the known language. |
Yu Li; Yangyang Xu; Mengqing Xia; Tianhong Zhang; Junjie Wang; Xu Liu; Yongguang He; Jijun Wang Eye movement indices in the study of depressive disorder Journal Article In: Shanghai Archives of Psychiatry, vol. 28, no. 6, pp. 326–334, 2016. @article{Li2016, Background: Impaired cognition is one of the most common core symptoms of depressive disorder. Eye movement testing mainly reflects patients' cognitive functions, such as cognition, memory, attention, recognition, and recall. This type of testing has great potential to improve theories related to cognitive functioning in depressive episodes as well as potential in its clinical application. Aims: This study investigated whether eye movement indices of patients with unmedicated depressive disorder were abnormal or not, as well as the relationship between these indices and mental symptoms. Methods: Sixty patients with depressive disorder and sixty healthy controls (who were matched by gender, age and years of education) were recruited, and completed eye movement tests including three tasks: fixation task, saccade task and free-view task. The EyeLink desktop eye tracking system was employed to collect eye movement information, and analyze the eye movement indices of the three tasks between the two groups. Results: (1) In the fixation task, compared to healthy controls, patients with depressive disorder showed more fixations, shorter fixation durations, more saccades and longer saccadic lengths; (2) In the saccade task, patients with depressive disorder showed longer anti-saccade latencies and smaller anti-saccade peak velocities; (3) In the free-view task, patients with depressive disorder showed fewer saccades and longer mean fixation durations; (4) Correlation analysis showed that there was a negative correlation between the pro-saccade amplitude and anxiety symptoms, and a positive correlation between the anti-saccade latency and anxiety symptoms. 
In the free-view task, depression symptoms were negatively correlated with the number of fixations, saccades, and saccadic path length, while mean fixation duration was positively correlated with depression symptoms. Conclusion: Compared to healthy controls, patients with depressive disorder showed significantly abnormal eye movement indices. In addition, patients' anxiety and depression symptoms correlated with eye movement indices. The pathological meaning of these phenomena deserves further exploration. |
Hsin-I Liao; Shunsuke Kidani; Makoto Yoneya; Makio Kashino; Shigeto Furukawa Correspondences among pupillary dilation response, subjective salience of sounds, and loudness Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 2, pp. 412–425, 2016. @article{Liao2016, A pupillary dilation response is known to be evoked by salient deviant or contrast auditory stimuli, but so far a direct link between it and subjective salience has been lacking. In two experiments, participants listened to various environmental sounds while their pupillary responses were recorded. In separate sessions, participants performed subjective pairwise-comparison tasks on the sounds with respect to their salience, loudness, vigorousness, preference, beauty, annoyance, and hardness. The pairwise-comparison data were converted to ratings on the Thurstone scale. The results showed a close link between subjective judgments of salience and loudness. The pupil dilated in response to the sound presentations, regardless of sound type. Most importantly, this pupillary dilation response to an auditory stimulus positively correlated with the subjective salience, as well as the loudness, of the sounds (Exp. 1). When the loudnesses of the sounds were identical, the pupil responses to each sound were similar and were not correlated with the subjective judgments of salience or loudness (Exp. 2). This finding was further confirmed by analyses based on individual stimulus pairs and participants. In Experiment 3, when salience and loudness were manipulated by systematically changing the sound pressure level and acoustic characteristics, the pupillary dilation response reflected the changes in both manipulated factors. A regression analysis showed a nearly perfect linear correlation between the pupillary dilation response and loudness.
The overall results suggest that the pupillary dilation response reflects the subjective salience of sounds, which is defined, or is heavily influenced, by loudness. |
Hsin-I Liao; Makoto Yoneya; Shunsuke Kidani; Makio Kashino; Shigeto Furukawa Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention Journal Article In: Frontiers in Neuroscience, vol. 10, pp. 43, 2016. @article{Liao2016a, A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. |
Tomas Knapen; Jan Willem De Gee; Jan Brascamp; Stijn Nuiten; Sylco Hoppenbrouwers; Jan Theeuwes Cognitive and ocular factors jointly determine pupil responses under equiluminance Journal Article In: PLoS ONE, vol. 11, no. 5, pp. e0155574, 2016. @article{Knapen2016, Changes in pupil diameter can reflect high-level cognitive signals that depend on central neuromodulatory mechanisms. However, brain mechanisms that adjust pupil size are also exquisitely sensitive to changes in luminance and other events that would be considered a nuisance in cognitive experiments recording pupil size. We implemented a simple auditory experiment involving no changes in visual stimulation. Using finite impulse-response fitting we found pupil responses triggered by different types of events. Among these are pupil responses to auditory events and associated surprise: cognitive effects. However, these cognitive responses were overshadowed by pupil responses associated with blinks and eye movements, both inevitable nuisance factors that lead to changes in effective luminance. Of note, these latter pupil responses were not recording artifacts caused by blinks and eye movements, but endogenous pupil responses that occurred in the wake of these events. Furthermore, we identified slow (tonic) changes in pupil size that differentially influenced faster (phasic) pupil responses. Fitting all pupil responses using gamma functions, we provide accurate characterisations of cognitive and non-cognitive response shapes, and quantify each response's dependence on tonic pupil size. These results allow us to create a set of recommendations for pupil size analysis in cognitive neuroscience, which we have implemented in freely available software. |
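The finite impulse-response fitting mentioned in this abstract can be illustrated as an ordinary least-squares deconvolution: a design matrix with one column per post-event lag recovers the average event-locked pupil response even when events overlap in time. The sampling rate, window length, and toy event times below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fir_deconvolve(pupil, event_onsets, sr=10, window_s=6.0):
    """Estimate an event-related pupil response by least-squares FIR
    deconvolution. `pupil`: 1-D signal; `event_onsets`: sample indices."""
    n = len(pupil)
    k = int(window_s * sr)              # number of FIR lags to estimate
    X = np.zeros((n, k))
    for onset in event_onsets:
        for lag in range(k):            # mark each lag following each event
            if onset + lag < n:
                X[onset + lag, lag] = 1.0
    beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
    return beta                         # estimated response, one value per lag

# Toy demo: recover a known response shape from overlapping events
true_resp = np.exp(-np.arange(60) / 15.0)   # assumed ground-truth shape
onsets = [20, 50, 130, 200, 310]
signal = np.zeros(400)
for o in onsets:
    signal[o:o + 60] += true_resp          # overlapping contributions
est = fir_deconvolve(signal, onsets, sr=10, window_s=6.0)
```

Because the toy signal is noiseless, the least-squares estimate recovers the ground-truth shape exactly despite the overlap between the first two events.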
Tomas Knapen; Jascha D. Swisher; Frank Tong; Patrick Cavanagh Oculomotor remapping of visual information to foveal retinotopic cortex Journal Article In: Frontiers in Systems Neuroscience, vol. 10, pp. 54, 2016. @article{Knapen2016a, Our eyes continually jump around the visual scene to bring the high-resolution, central part of our vision onto objects of interest. We are oblivious to these abrupt shifts, perceiving the visual world to appear reassuringly stable. A process called remapping has been proposed to mediate this perceptual stability for attended objects by shifting their retinotopic representation to compensate for the effects of the upcoming eye movement. In everyday vision, observers make goal-directed eye movements towards items of interest, bringing them to the fovea, and for these items the remapped activity should impinge on foveal regions of the retinotopic maps in visual cortex. Previous research has focused instead on remapping for targets that were not saccade goals, where activity is remapped to a new peripheral location rather than to the foveal representation. We used functional MRI and a phase-encoding design to investigate remapping of spatial patterns of activity towards the fovea/parafovea for saccade targets that were removed prior to completion of the eye movement. We found strong evidence of foveal remapping in retinotopic visual areas, which failed to occur when observers merely attended to the same peripheral target without making eye movements toward it. Significantly, the spatial profile of the remapped response matched the orientation and size of the saccade target, and was appropriately scaled to reflect the retinal extent of the stimulus had it been foveated. We conclude that this remapping of spatially structured information to the fovea may serve as an important mechanism to support our world-centered sense of location across goal-directed eye movements under natural viewing conditions. |
Klemens M. Knoeferle; Pia Knoeferle; Carlos Velasco; Charles Spence Multisensory brand search: How the meaning of sounds guides consumers' visual attention Journal Article In: Journal of Experimental Psychology: Applied, vol. 22, no. 2, pp. 196–210, 2016. @article{Knoeferle2016, Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. |
Makoto Kobayashi Delayed saccade to perceptually demanding locations in Parkinson's disease: Analysis from the perspective of the speed–accuracy trade-off Journal Article In: Neurological Sciences, vol. 37, no. 11, pp. 1841–1848, 2016. @article{Kobayashi2016, Parkinson's disease (PD) patients reportedly have shortened, normal, or prolonged latency of visually guided saccades (VGSs). This inconsistency seems to be partly derived from differences in experimental conditions, such as target eccentricity and direction. Another etiology may be a physiological saccade property, the speed-accuracy trade-off. VGS latency tends to increase along with its gain in certain conditions; however, this relationship has not been addressed in PD saccade studies. In this study, we measured VGS latency and gain in 47 PD patients and 48 normal controls (NCs). VGS was evoked by a target, which was presented at the central position initially and pseudo-randomly jumped to the horizontal (10° or 20° eccentricity) or vertical (10° or 15°) meridian. For each target location, the logarithm of the latency (log-latency) was modeled with subject type (PD or NC), age, and gain in the linear-mixed regression analysis. Subsequently, for target locations where PD patients showed an abnormality, the log-latency was similarly modeled with additional clinical variables measured by the mini-mental state examination (MMSE) and unified Parkinson's disease rating scale Part III. PD saccade latency was prolonged and influenced by the MMSE score when targets were presented at the 20° horizontal and upper vertical meridians. Furthermore, gain was a consistently significant variable in all models. The target locations of the delayed saccade corresponded to perceptually demanding locations, indicating that PD subclinical visual dysfunction prolonged the latency. The influence of the MMSE score supports this reasoning. Moreover, the speed-accuracy trade-off appeared to contribute to the accurate saccade analysis. |
Xaver Koch; Esther Janse Speech rate effects on the processing of conversational speech across the adult life span Journal Article In: The Journal of the Acoustical Society of America, vol. 139, no. 4, pp. 1618–1636, 2016. @article{Koch2016, This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions mainly obtained with artificially speeded materials may not generalize to speech rate variation as encountered in conversational speech. |
Ellen M. Kok; Halszka Jarodzka; Anique B. H. Bruin; Hussain A. N. BinAmir; Simon G. F. Robben; Jeroen J. G. Merriënboer Systematic viewing in radiology: Seeing more, missing less? Journal Article In: Advances in Health Sciences Education, vol. 21, no. 1, pp. 189–205, 2016. @article{Kok2016, To prevent radiologists from overlooking lesions, radiology textbooks recommend ''systematic viewing,'' a technique whereby anatomical areas are inspected in a fixed order. This would ensure complete inspection (full coverage) of the image and, in turn, improve diagnostic performance. To test this assumption, two experiments were performed. Both experiments investigated the relationship between systematic viewing, coverage, and diagnostic performance. Additionally, the first investigated whether systematic viewing increases with expertise; the second investigated whether novices benefit from full-coverage or systematic viewing training. In Experiment 1, 11 students, ten residents, and nine radiologists inspected five chest radiographs. Experiment 2 had 75 students undergo training in either systematic, full-coverage (without being systematic), or non-systematic viewing. Eye movements and diagnostic performance were measured throughout both experiments. In Experiment 1, no significant correlations were found between systematic viewing and coverage |
Oleg V. Komogortsev; Alexey Karpov Oculomotor plant characteristics: The effects of environment and stimulus Journal Article In: IEEE Transactions on Information Forensics and Security, vol. 11, no. 3, pp. 621–632, 2016. @article{Komogortsev2016, This paper presents an objective evaluation of the effects of environmental factors, such as stimulus presentation and eye tracking specifications, on the biometric accuracy of oculomotor plant characteristic (OPC) biometrics. The study examines the largest known dataset for eye movement biometrics, with eye movements recorded from 323 subjects over multiple sessions. Six spatial precision tiers (0.01°, 0.11°, 0.21°, 0.31°, 0.41°, 0.51°), six temporal resolution tiers (1000 Hz, 500 Hz, 250 Hz, 120 Hz, 75 Hz, 30 Hz), and three stimulus types (horizontal, random, textual) are evaluated to identify acceptable conditions under which to collect eye movement data. The results suggest the use of eye tracking equipment providing at least 0.1° spatial precision and 30 Hz sampling rate for biometric purposes, and the use of a horizontal pattern stimulus when using the two-dimensional oculomotor plant model developed by Komogortsev et al. [1] |
Arkady Konovalov; Ian Krajbich Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning Journal Article In: Nature Communications, vol. 7, pp. 12438, 2016. @article{Konovalov2016, Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. |
Arnout W. Koornneef; Jakub Dotlačil; Paul W. Broek; Ted J. M. Sanders The influence of linguistic and cognitive factors on the time course of verb-based implicit causality Journal Article In: Quarterly Journal of Experimental Psychology, vol. 69, no. 3, pp. 455–481, 2016. @article{Koornneef2016, In three eye-tracking experiments the influence of the Dutch causal connective "want" (because) and the working memory capacity of readers on the usage of verb-based implicit causality was examined. Experiments 1 and 2 showed that although a causal connective is not required to activate implicit causality information during reading, effects of implicit causality surfaced more rapidly and were more pronounced when a connective was present in the discourse than when it was absent. In addition, Experiment 3 revealed that, in contrast to previous claims, the activation of implicit causality is not a resource-consuming mental operation. Moreover, readers with higher and lower working memory capacities behaved differently in a dual-task situation. Higher span readers were more likely to use implicit causality when they had all their working memory resources at their disposal. Lower span readers showed the opposite pattern as they were more likely to use the implicit causality cue in the case of an additional working memory load. The results emphasize that both linguistic and cognitive factors mediate the impact of implicit causality on text comprehension. The implications of these results are discussed in terms of the ongoing controversies in the literature, that is, the focusing-integration debate and the debates on the source of implicit causality. |
Christoph W. Korn; Dominik R. Bach A solid frame for the window on cognition: Modeling event-related pupil responses Journal Article In: Journal of Vision, vol. 16, no. 3, pp. 1–16, 2016. @article{Korn2016, Pupil size is often used to infer central processes, including attention, memory, and emotion. Recent research has spotlighted its relation to behavioral variables from decision-making models and to neural variables such as locus coeruleus activity and cortical oscillations. As yet, a unified and principled approach for analyzing pupil responses is lacking. Here we seek to establish a formal, quantitative forward model for pupil responses by describing them with linear time-invariant systems. Based on empirical data from human participants, we show that a combination of two linear time-invariant systems can parsimoniously explain approximately all variance evoked by illuminance changes. Notably, the model makes a counterintuitive prediction that pupil constriction dominates the responses to darkness flashes, as in previous empirical reports. This prediction was quantitatively confirmed for responses to light and darkness flashes in an independent group of participants. Crucially, illuminance- and nonilluminance-related inputs to the pupillary system are presumed to share a common final pathway, composed of muscles and nerve terminals. Hence, we can harness our illuminance-based model to estimate the temporal evolution of this neural input for an auditory-oddball task, an emotional-words task, and a visual-detection task. Onset and peak latencies of the estimated neural inputs furnish plausible hypotheses for the complexity of the underlying neural circuit. To conclude, this mathematical description of pupil responses serves as a prerequisite to refining their relation to behavioral and brain indices of cognitive processes. |
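The linear time-invariant framing described in this abstract can be sketched as convolving an event train with a canonical pupil response function. The gamma-family kernel below (shape 10.1, peak at 0.93 s, in the spirit of the Hoeks and Levelt pupil response function) and the event times are illustrative assumptions, not the parameters fitted by the authors.

```python
import numpy as np

def gamma_kernel(t, shape=10.1, peak_s=0.93):
    """Gamma-family pupil response function; parameter values are
    illustrative (peak-normalized so its maximum is 1)."""
    h = (t ** shape) * np.exp(-shape * t / peak_s)
    return h / h.max()

def predict_pupil(event_times, duration_s=20.0, sr=50):
    """Predict a pupil trace as event train convolved with the kernel
    (the LTI assumption: responses superpose linearly)."""
    t = np.arange(0, duration_s, 1.0 / sr)
    events = np.zeros_like(t)
    for et in event_times:
        events[int(et * sr)] = 1.0          # unit impulse per event
    kernel = gamma_kernel(np.arange(0, 4.0, 1.0 / sr))
    return t, np.convolve(events, kernel)[: len(t)]

# Overlapping events at 5.0 s and 5.5 s sum linearly under the model
t, trace = predict_pupil([2.0, 5.0, 5.5, 12.0])
```

Under this forward model, estimating the neural input for a cognitive task reduces to deconvolving the measured trace with the same kernel.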
Veronica Whitford; Debra Titone Eye movements and the perceptual span during first- and second-language sentence reading in bilingual older adults Journal Article In: Psychology and Aging, vol. 31, no. 1, pp. 58–70, 2016. @article{Whitford2016, This study addressed a central yet previously unexplored issue in the psychological science of aging, namely, whether the advantages of healthy aging (e.g., greater lifelong experience with language) or disadvantages (e.g., decreases in cognitive and sensory processing) drive L1 and L2 reading performance in bilingual older adults. To this end, we used a gaze-contingent moving window paradigm to examine both global aspects of reading fluency (e.g., reading rates, number of regressions) and the perceptual span (i.e., allocation of visual attention into the parafovea) in bilingual older adults during L1 and L2 sentence reading, as a function of individual differences in current L2 experience. Across the L1 and L2, older adults exhibited reduced reading fluency (e.g., slower reading rates, more regressions), but a similar perceptual span compared with matched younger adults. Also similar to matched younger adults, older adults' reading fluency was lower for L2 reading than for L1 reading as a function of current L2 experience. Specifically, greater current L2 experience increased L2 reading fluency, but decreased L1 reading fluency (for global reading measures only). Taken together, the dissociation between intact perceptual span and impaired global reading measures suggests that older adults may prioritize parafoveal processing despite age-related encoding difficulties. Consistent with this interpretation, post hoc analyses revealed that older adults with higher versus lower executive control were more likely to adopt this strategy. |
Bogusława Whyatt; Katarzyna Stachowiak; Marta Kajzer-Wietrzny Similar and different: Cognitive rhythm and effort in translation and paraphrasing Journal Article In: Poznan Studies in Contemporary Linguistics, vol. 52, no. 2, pp. 175–208, 2016. @article{Whyatt2016, Although Jakobson's (1959) seminal classification of translation into three kinds: interlingual, intralingual and intersemiotic has been widely accepted in Translation Studies, so far most research interest has focused on interlingual translation, defined as “translation proper”. Intralingual translation, more often understood as rewording, paraphrasing or reformulation within the same language, is a less prototypical kind of translation, yet we believe that the underlying mental operations needed to perform both tasks include similar processing stages. Bearing in mind the lack of research comparing inter-and intralingual translation we designed the ParaTrans project in which we investigate how translators make decisions in both tasks. In this article we present the results of a comparative analysis of processing effort and cognitive rhythm demonstrated by professional translators who were asked to translate and paraphrase similar texts. Having collected three streams of translation process data with such tools as key-logging, eye-tracking and screen-capture software, we are able to draw some tentative conclusions concerning the similarities and differences between language processing for interlingual translation and intralingual paraphrasing. The results confirm a higher processing effort in interlingual translation most likely due to the need to switch between languages. |
Farahnaz A. Wick; Tyler W. Garaas; Marc Pomplun Saccadic adaptation alters the attentional field Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 568, 2016. @article{Wick2016, It is currently unknown whether changes to the oculomotor system can induce changes to the distribution of spatial attention around a fixated target. Previous studies have used perceptual performance tasks to show that adaptation of saccadic eye movements affects dynamic properties of visual attention, in particular, attentional shifts to a cued location. In this study, we examined the effects of saccadic adaptation on the static distribution of visual attention around fixation (attentional field). We used the classic double step adaptation procedure and a flanker task to test for differences in the attentional field after forward and backward adaptation. RT measures revealed that the shape of the attentional field changed significantly after backward adaptation as shown through altered interference from distracters at different eccentricities but not after forward adaptation. This finding reveals that modification of saccadic amplitudes can affect metrics of not only dynamic properties of attention but also its static properties. A major implication is that the neural mechanisms underlying fundamental selection mechanisms and the oculomotor system can reweight each other. |
Thomas D. W. Wilcockson; N. E. M. Sanal Heavy cannabis use and attentional avoidance of anxiety-related stimuli Journal Article In: Addictive Behaviors Reports, vol. 3, pp. 38–42, 2016. @article{Wilcockson2016, Objectives: Cannabis is now the most widely used illicit substance in the world. Previous research demonstrates that cannabis use is associated with dysfunctional affect regulation and anxiety. Anxiety is characterised by attentional biases in the presence of emotional information. This novel study therefore examined the attentional bias of cannabis users when presented with anxiety-related stimuli. The aim was to establish whether cannabis users respond to anxiety-related stimuli differently to control participants. Methods: A dot-probe paradigm was utilised using undergraduate students. Trials contained anxiety-related stimuli and neutral control stimuli. Eye-tracking was used to measure attention for the stimuli. Results: Results indicated that cannabis users demonstrated attentional-avoidance behaviour when presented with anxiety-related stimuli. Conclusions: The findings suggest a difference in processing of emotional information in relation to neutral information between groups. It would appear that cannabis users avoid anxiety provoking stimuli. Such behaviour could potentially have motivational properties that could lead to exacerbating anxiety disorder-type behaviour. |
Theresa Wildegger; Glyn W. Humphreys; Anna C. Nobre Retrospective attention interacts with stimulus strength to shape working memory performance Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0164174, 2016. @article{Wildegger2016, Orienting attention retrospectively to selective contents in working memory (WM) influences performance. A separate line of research has shown that stimulus strength shapes perceptual representations. There is little research on how stimulus strength during encoding shapes WM performance, and how effects of retrospective orienting might vary with changes in stimulus strength. We explore these questions in three experiments using a continuous-recall WM task. In Experiment 1 we show that benefits of cueing spatial attention retrospectively during WM maintenance (retrocueing) vary according to stimulus contrast during encoding. Retrocueing effects emerge for supraliminal but not sub-threshold stimuli. However, once stimuli are supraliminal, performance is no longer influenced by stimulus contrast. In Experiments 2 and 3 we used a mixture-model approach to examine how different sources of error in WM are affected by contrast and retrocueing. For high-contrast stimuli (Experiment 2), retrocues increased the precision of successfully remembered items. For low-contrast stimuli (Experiment 3), retrocues decreased the probability of mistaking a target with distracters. These results suggest that the processes by which retrospective attentional orienting shape WM performance are dependent on the quality of WM representations, which in turn depends on stimulus strength during encoding. |
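The mixture-model approach mentioned in this abstract (in the spirit of Zhang and Luck-style models for continuous recall) decomposes response errors into trials remembered with some precision plus uniform random guesses. The simple two-component maximum-likelihood fit below is an illustrative sketch, not the authors' exact model, which additionally separated out distracter confusions.

```python
import numpy as np

def fit_mixture(errors):
    """Grid-search maximum-likelihood fit of guess rate and von Mises
    concentration (kappa) for recall errors given in radians."""
    p_grid = np.linspace(0.01, 0.99, 50)      # candidate guess rates
    k_grid = np.linspace(0.5, 30.0, 60)       # candidate concentrations
    best, best_ll = (None, None), -np.inf
    for p in p_grid:
        for k in k_grid:
            vm = np.exp(k * np.cos(errors)) / (2 * np.pi * np.i0(k))
            ll = np.sum(np.log((1 - p) * vm + p / (2 * np.pi)))
            if ll > best_ll:
                best, best_ll = (p, k), ll
    return best  # (guess rate, concentration)

# Simulated data: 80% remembered (kappa = 8), 20% uniform guesses
rng = np.random.default_rng(0)
errors = np.concatenate([rng.vonmises(0.0, 8.0, 800),
                         rng.uniform(-np.pi, np.pi, 200)])
p_guess, kappa = fit_mixture(errors)
```

A higher fitted kappa corresponds to more precise memory of the retained item, while the guess rate captures trials on which the item was effectively lost.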
Meytal Wilf; Michal Ramot; Edna Furman-Haran; Anat Arzi; Yechiel Levkovitz; Rafael Malach Diminished auditory responses during NREM sleep correlate with the hierarchy of language processing Journal Article In: PLoS ONE, vol. 11, no. 6, pp. e0157143, 2016. @article{Wilf2016, Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. |
Louise R. Williams; Madeleine A. Grealy; Steve W. Kelly; Iona Henderson; Stephen H. Butler Perceptual bias, more than age, impacts on eye movements during face processing Journal Article In: Acta Psychologica, vol. 164, pp. 127–135, 2016. @article{Williams2016, Consistent with the right hemispheric dominance for face processing, a left perceptual bias (LPB) is typically demonstrated by younger adults viewing faces and a left eye movement bias has also been revealed. Hemispheric asymmetry is predicted to reduce with age and older adults have demonstrated a weaker LPB, particularly when viewing time is restricted. What is currently unclear is whether age also weakens the left eye movement bias. Additionally, a right perceptual bias (RPB) for facial judgments has less frequently been demonstrated, but whether this is accompanied by a right eye movement bias has not been investigated. To address these issues older and younger adults' eye movements and gender judgments of chimeric faces were recorded in two time conditions. Age did not significantly weaken the LPB or eye movement bias; both groups looked initially to the left side of the face and made more fixations when the gender judgment was based on the left side. A positive association was found between LPB and initial saccades in the freeview condition and with all eye movements (initial saccades, number and duration of fixations) when time was restricted. The accompanying eye movement bias revealed by LPB participants contrasted with RPB participants who demonstrated no eye movement bias in either time condition. Consequently, increased age is not clearly associated with weakened perceptual and eye movement biases. Instead an eye movement bias accompanies an LPB (particularly under restricted viewing time conditions) but not an RPB. |
Amanda H. Wilson; Agnès Alsius; Martin Paré; Kevin G. Munhall Spatial frequency requirements and gaze strategy in visual-only and audiovisual speech perception Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 59, pp. 601–615, 2016. @article{Wilson2016, Purpose: Understanding speech in background noise is difficult for many individuals; however, time constraints have limited its inclusion in the clinical audiology assessment battery. Phoneme scoring of words has been suggested as a method of reducing test time and variability. The purposes of this study were to establish a phoneme scoring rubric and use it in testing phoneme and word perception in noise in older individuals and individuals with hearing impairment. Method: Words were presented to 3 participant groups at 80 dB in speech-shaped noise at 7 signal-to-noise ratios (−10 to 35 dB). Responses were scored for words and phonemes correct. Results: It was not surprising to find that phoneme scores were up to about 30% better than word scores. Word scoring resulted in larger hearing loss effect sizes than phoneme scoring, whereas scoring method did not significantly modify age effect sizes. There were significant effects of hearing loss and some limited effects of age; age effect sizes of about 3 dB and hearing loss effect sizes of more than 10 dB were found. Conclusion: Hearing loss is the major factor affecting word and phoneme recognition with a subtle contribution of age. Phoneme scoring may provide several advantages over word scoring. A set of recommended phoneme scoring guidelines is provided. |
Matthew B. Winn Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants Journal Article In: Trends in Hearing, vol. 20, 2016. @article{Winn2016, People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. |
Richard J. Wiseman; Tamami Nakano Blink and you'll miss it: the role of blinking in the perception of magic tricks Journal Article In: PeerJ, vol. 4, pp. 1–9, 2016. @article{Wiseman2016, Magicians use several techniques to deceive their audiences, including, for example, the misdirection of attention and verbal suggestion. We explored another potential stratagem, namely the relaxation of attention. Participants watched a video of a highly skilled magician whilst having their eye-blinks recorded. The timing of spontaneous eye-blinks was highly synchronized across participants. In addition, the synchronized blinks frequently occurred immediately after a seemingly impossible feat, and often coincided with actions that the magician wanted to conceal from the audience. Given that blinking is associated with the relaxation of attention, these findings suggest that blinking plays an important role in the perception of magic, and that magicians may utilize blinking and the relaxation of attention to hide certain secret actions. |
Nicolas Wattiez; Tymothée Poitou; Sophie Rivaud-Péchoux; Pierre Pouget Evidence for spatial tuning of movement inhibition Journal Article In: Experimental Brain Research, vol. 234, no. 7, pp. 1957–1966, 2016. @article{Wattiez2016, The time to initiate a movement can, even implicitly, be influenced by the environment. All primates, including humans, respond faster and with greater accuracy to stimuli that are brighter, louder, or associated with larger reward than to neutral stimuli. Whether this environment also modulates the executive functions that allow ongoing actions to be suppressed remains an issue of debate. In this study, we investigated the implicit learning of the spatial selectivity of movement inhibition in humans and macaque monkeys performing a saccade-countermanding task. The occurrence of stop trials, in which subjects were visually instructed to cancel a prepared movement, was manipulated according to the target location. One visual target was associated with a high probability of stop-signal appearance (e.g., 80%), while the second target was associated with a low probability (e.g., 20%). The overall occurrence of stop trials across the two targets (50%) remained constant. The results show that humans and macaque monkeys can selectively adapt their behaviors according to the implicit probability of stopping. Behavioral adjustments were larger when targets were in different hemifields and for larger distances between targets. Reduced selective inhibitory behaviors were observed when 15 degrees of visual angle separated the targets, and this effect vanished when targets were separated by only 2 degrees. Overall, our study shows that both response and inhibition times can be modulated by the relative spatial occurrence of stop signals. We speculate that, beyond the particular effect we observed in the context of the saccade paradigm, selective motor execution may imply a disinhibitory mechanism that modulates the motor pathways associated with the fronto-median cortex and basal ganglia circuits. |
Michael W. Weiss; Sandra E. Trehub; E. Glenn Schellenberg; Peter Habashi Pupils dilate for vocal or familiar music Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 8, pp. 1061–1065, 2016. @article{Weiss2016, Previous research reveals that vocal melodies are remembered better than instrumental renditions. Here we explored the possibility that the voice, as a highly salient stimulus, elicits greater arousal than nonvocal stimuli, resulting in greater pupil dilation for vocal than for instrumental melodies. We also explored the possibility that pupil dilation indexes memory for melodies. We tracked pupil dilation during a single exposure to 24 unfamiliar folk melodies (half sung to la la, half piano) and during a subsequent recognition test in which the previously heard melodies were intermixed with 24 novel melodies (half sung, half piano) from the same corpus. Pupil dilation was greater for vocal melodies than for piano melodies in the exposure phase and in the test phase. It was also greater for previously heard melodies than for novel melodies. Our findings provide the first evidence that pupillometry can be used to measure recognition of stimuli that unfold over several seconds. They also provide the first evidence of enhanced arousal to vocal melodies during encoding and retrieval, thereby supporting the more general notion of the voice as a privileged signal. |
Nilly Weiss; Elite Mardo; Galia Avidan Visual expertise for horses in a case of congenital prosopagnosia Journal Article In: Neuropsychologia, vol. 83, pp. 63–75, 2016. @article{Weiss2016a, A major question in the domain of face perception is whether faces comprise a distinct visual category that is processed by specialized mechanisms, or whether face processing merely represents an extreme case of visual expertise. Here, we examined O.H., a 22-year-old woman with congenital prosopagnosia (CP), who, despite her severe deficits in face processing, acquired superior recognition skills for horses. To compare the nature of face and horse processing, we utilised the inversion manipulation, known to disproportionally affect faces compared to other objects, with both faces and horses. O.H.'s performance was compared to data obtained from two control groups that were either horse experts or non-experts. As expected, both control groups exhibited the face inversion effect, while O.H. did not show the effect; importantly, none of the participants showed an inversion effect for horses. Finally, gaze behaviour toward upright and inverted faces and horses was indicative of visual skill, but in a distinct fashion for each category. In particular, both control groups showed different gaze patterns for upright compared to inverted faces, while O.H. presented a similar gaze pattern for the two orientations that differed from that of the two control groups. In contrast, O.H. and the horse experts exhibited a similar gaze pattern for upright and inverted horses, while non-experts showed different gaze patterns for the different orientations. Taken together, these results suggest that visual expertise can be acquired independently from the mechanisms mediating face recognition. |
Kimberly B. Weldon; Anina N. Rich; Alexandra Woolgar; Mark A. Williams Disruption of foveal space impairs discrimination of peripheral objects Journal Article In: Frontiers in Psychology, vol. 7, pp. 699, 2016. @article{Weldon2016, Visual space is retinotopically mapped such that peripheral objects are processed in a cortical region outside the region that represents central vision. Despite this well-known fact, neuroimaging studies have found information about peripheral objects in the foveal confluence, the cortical region representing the fovea. Further, this information is behaviorally relevant: disrupting the foveal confluence using transcranial magnetic stimulation impairs discrimination of peripheral objects at time-points consistent with a disruption of feedback. If the foveal confluence receives feedback of information about peripheral objects to boost vision, there should be behavioral consequences of this phenomenon. Here, we tested the effect of foveal distractors at different stimulus onset asynchronies (SOAs) on discrimination of peripheral targets. Participants performed a discrimination task on target objects presented in the periphery while fixating centrally. A visual distractor presented at the fovea ~100ms after presentation of the targets disrupted performance more than a central distractor presented at other SOAs. This was specific to a central distractor; a peripheral distractor at the same time point did not have the same effect. These results are consistent with the claim that foveal retinotopic cortex is recruited for extra-foveal perception. This study describes a new paradigm for investigating the nature of the foveal feedback phenomenon and demonstrates the importance of this feedback in peripheral vision. |
Laura Jean Wells; Steven M. Gillespie; Pia Rotshtein Identification of emotional facial expressions: Effects of expression, intensity, and sex on eye gaze Journal Article In: PLoS ONE, vol. 11, no. 12, pp. e0168307, 2016. @article{Wells2016, The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with the slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than to the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down. |
Kim Wende; Laetitia Theunissen; Marcus Missal Anticipation of physical causality guides eye movements Journal Article In: Journal of Eye Movement Research, vol. 9, no. 2, pp. 1–9, 2016. @article{Wende2016, Causality is a unique feature of human perception. We present here a behavioral investigation of the influence of physical causality during visual pursuit of object collisions. Pursuit and saccadic eye movements of human subjects were recorded during ocular pursuit of two concurrently launched targets, one that moved according to the laws of Newtonian mechanics (the causal target) and the other one that moved in a physically implausible direction (the non-causal target). We found that anticipation of collision evoked early smooth pursuit decelerations. Saccades to non-causal targets were hypermetric and had latencies longer than saccades to causal targets. In conclusion, before and after a collision of two moving objects the oculomotor system implicitly predicts upcoming physically plausible target trajectories. |
Dorothea Wendt; Torsten Dau; Jens Hjortkjær Impact of background noise and sentence complexity on processing demands during sentence comprehension Journal Article In: Frontiers in Psychology, vol. 7, pp. 345, 2016. @article{Wendt2016, Speech comprehension in adverse listening conditions can be effortful even when speech is fully intelligible. Acoustical distortions typically make speech comprehension more effortful, but effort also depends on linguistic aspects of the speech signal, such as its syntactic complexity. In the present study, pupil dilations and subjective effort ratings were recorded in 20 normal-hearing participants while performing a sentence comprehension task. The sentences were either syntactically simple (subject-first sentence structure) or complex (object-first sentence structure) and were presented at two levels of background noise, both corresponding to high intelligibility. A digit span and a reading span test were used to assess individual differences in the participants' working memory capacity (WMC). The results showed that the subjectively rated effort was mostly affected by the noise level and less by syntactic complexity. Conversely, pupil dilations increased with syntactic complexity but showed only a small effect of the noise level. Participants with higher WMC showed increased pupil responses in the higher-level noise condition but rated sentence comprehension as less effortful compared to participants with lower WMC. Overall, the results demonstrate that pupil dilations and subjectively rated effort represent different aspects of effort. Furthermore, the results indicate that effort can vary in situations with high speech intelligibility. |
Jessica Werthmann; Anita Jansen; Anne Roefs Make up your mind about food: A healthy mindset attenuates attention for high-calorie food in restrained eaters Journal Article In: Appetite, vol. 105, pp. 53–59, 2016. @article{Werthmann2016, Attention bias for food could be a cognitive pathway to overeating in obesity and restrained eating. Yet, empirical evidence for individual differences (e.g., in restrained eating and body mass index) in attention bias for food is mixed. We tested experimentally if temporarily induced health versus palatability mindsets influenced attention bias for food, and whether restrained eating moderated this relation. After manipulating mindset (health vs. palatability) experimentally, food-related attention bias was measured by eye-movements (EM) and response latencies (RL) during a visual probe task depicting high-calorie food and non-food. Restrained eating was assessed afterwards. A significant interaction of mindset and restrained eating on RL bias emerged, β = 0.36, t(58) = 2.05 |
Gregory L. West; Sarah Lippé The development of inhibitory saccadic trajectory deviations correlates with measures of antisaccadic inhibition Journal Article In: NeuroReport, vol. 27, pp. 1196–1201, 2016. @article{West2016, Chronological age is related positively to a participant's ability to inhibit distracting information. Inhibition can be measured using the trajectory deviation of a saccade. Saccadic curvature away from distracting visual information is controlled through top-down inhibition mediated by the frontal eye fields. In the present study, we aimed to further test the saccadic trajectory deviation paradigm's sensitivity to the development of frontal inhibitory processes by comparing its measure of saccadic inhibition with that of a widely used paradigm, namely, the antisaccade task. We show that the later 'inhibition' phase of the trajectory deviation function correlated strongly with the measure of antisaccadic inhibition obtained in the same individuals. As expected, the earlier 'capture' phase of the trajectory deviation function, which does not represent the involvement of frontal structures, did not correlate with antisaccadic inhibition. Further, both measures of frontal inhibition increased with chronological age. |
Nicole Wetzel; David Buttelmann; Andy Schieler; Andreas Widmann Infant and adult pupil dilation in response to unexpected sounds Journal Article In: Developmental Psychobiology, vol. 58, no. 3, pp. 382–392, 2016. @article{Wetzel2016, Surprisingly occurring sounds outside the focus of attention can involuntarily capture attention. This study focuses on the impact of deviant sounds on the pupil size as a marker of auditory involuntary attention in infants. We presented an oddball paradigm including four types of deviant sounds within a sequence of repeated standard sounds to 14-month-old infants and to adults. Environmental and noise deviant sounds elicited a strong pupil dilation response (PDR) in both age groups. In contrast, moderate frequency deviants elicited a significant PDR in adults only. Moreover, a principal component analysis revealed two components underlying the PDR. Component scores differ, depending on deviant types, between age groups. Results indicate age effects of parasympathetic inhibition and sympathetic activation of the pupil size caused by deviant sounds with a high arousing potential. Results demonstrate that the PDR is a sensitive tool for the investigation of involuntary attention to sounds in preverbal children. |
Alex L. White; Martin Rolfs Oculomotor inhibition covaries with conscious detection Journal Article In: Journal of Neurophysiology, vol. 116, pp. 1507–1521, 2016. @article{White2016, Saccadic eye movements occur frequently even during attempted fixation, but they halt momentarily when a new stimulus appears. Here, we demonstrate that this rapid, involuntary "oculomotor freezing" reflex is yoked to fluctuations in explicit visual perception. Human observers reported the presence or absence of a brief visual stimulus while we recorded microsaccades, small spontaneous eye movements. We found that microsaccades were reflexively inhibited if and only if the observer reported seeing the stimulus, even when none was present. By applying a novel Bayesian classification technique to patterns of microsaccades on individual trials, we were able to decode the reported state of perception more accurately than the state of the stimulus (present vs. absent). Moreover, explicit perceptual sensitivity and the oculomotor reflex were both susceptible to orientation-specific adaptation. The adaptation effects suggest that the freezing reflex is mediated by signals processed in the visual cortex before reaching oculomotor control centers, rather than relying on a direct subcortical route, as some previous research has suggested. We conclude that the reflexive inhibition of microsaccades immediately and inadvertently reveals when the observer becomes aware of a change in the environment. By providing an objective measure of conscious perceptual detection that does not require explicit reports, this finding opens doors to clinical applications and further investigations of perceptual awareness. |
J. Kael White; Ilya E. Monosov Neurons in the primate dorsal striatum signal the uncertainty of object-reward associations Journal Article In: Nature Communications, vol. 7, pp. 12735, 2016. @article{White2016b, To learn, obtain reward and survive, humans and other animals must monitor, approach and act on objects that are associated with variable or unknown rewards. However, the neuronal mechanisms that mediate behaviours aimed at uncertain objects are poorly understood. Here we demonstrate that a set of neurons in internal-capsule-bordering regions of the primate dorsal striatum, within the putamen and caudate nucleus, signal the uncertainty of object–reward associations. Their uncertainty responses depend on the presence of objects associated with reward uncertainty and evolve rapidly as monkeys learn novel object–reward associations. Therefore, beyond its established role in mediating actions aimed at known or certain rewards, the dorsal striatum also participates in behaviours aimed at reward-uncertain objects. |
Sarah J. White; Denis Drieghe; Simon P. Liversedge; Adrian Staub The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 46–55, 2016. @article{White2016a, The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed. |
Flora Vanlangendonck; Roel M. Willems; Laura Menenti; Peter Hagoort An early influence of common ground during speech planning Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 6, pp. 741–750, 2016. @article{Vanlangendonck2016, In order to communicate successfully, speakers have to take into account which information they share with their addressee, i.e. common ground. In the current experiment we investigated how and when common ground affects speech planning by tracking speakers' eye movements while they played a referential communication game. We found evidence that common ground exerts an early, but incomplete effect on speech planning. In addition, we did not find longer planning times when speakers had to take common ground into account, suggesting that taking common ground into account is not necessarily an effortful process. Common ground information thus appears to act as a partial constraint on language production that is integrated flexibly and efficiently in the speech planning process. |
Outi Veivo; Juhani Järvikivi; Vincent Porretta; Jukka Hyönä Orthographic activation in L2 spoken word recognition depends on proficiency: Evidence from eye-tracking Journal Article In: Frontiers in Psychology, vol. 7, pp. 1120, 2016. @article{Veivo2016, The use of orthographic and phonological information in spoken word recognition was studied in a visual world task where L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having a longer (<base> vs. <bague>) or a shorter word initial phonological overlap (<base> vs. <bain>) and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors of either longer (<mince> vs. <mite>) or shorter word initial orthographic overlap (<mince> vs. <mythe>) and of an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in the L2 or L1 groups, but a significant effect of proficiency was observed for orthographic overlap over time: higher proficiency L2 listeners used also orthographic information in the matching task in a time-window from 400 to 700 ms, whereas no such effect was observed for lower proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. |
Aaron Veldre; Sally Andrews Semantic preview benefit in English: Individual differences in the extraction and use of parafoveal semantic information Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 6, pp. 837–854, 2016. @article{Veldre2016a, Although there is robust evidence that skilled readers of English extract and use orthographic and phonological information from the parafovea to facilitate word identification, semantic preview benefits have been elusive. We sought to establish whether individual differences in the extraction and/or use of parafoveal semantic information could account for this discrepancy. Ninety-nine adult readers who were assessed on measures of reading and spelling ability read sentences while their eye movements were recorded. The gaze-contingent boundary paradigm was used to manipulate the availability of relevant semantic and orthographic information in the parafovea. On average, readers showed a benefit from previews high in semantic feature overlap with the target. However, reading and spelling ability yielded opposite effects on semantic preview benefit. High reading ability was associated with a semantic preview benefit that was equivalent to an identical preview on first-pass reading. High spelling ability was associated with a reduced semantic preview benefit despite an overall higher rate of skipping. These results suggest that differences in the magnitude of semantic preview benefits in English reflect constraints on extracting semantic information from the parafovea and competition between the orthographic features of the preview and the target. |
Aaron Veldre; Sally Andrews Is semantic preview benefit due to relatedness or plausibility? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 7, pp. 939–952, 2016. @article{Veldre2016, There is increasing evidence that skilled readers of English benefit from processing a parafoveal preview of a semantically related word. However, in previous investigations of semantic preview benefit using the gaze-contingent boundary paradigm the semantic relatedness between the preview and target has been confounded with the plausibility of the preview word in the sentence. In the present study, preview relatedness and plausibility were independently manipulated in neutral sentences read by a large sample of skilled adult readers. Participants were assessed on measures of reading and spelling ability to identify possible sources of individual differences in preview effects. The results showed that readers benefited from a preview of a plausible word, regardless of the semantic relatedness of the preview and the target. However, there was limited evidence of a semantic relatedness benefit when the plausibility of the preview was controlled. The plausibility preview benefit was strongest for low proficiency readers, suggesting that poorer readers were more likely to program a forward saccade based on information extracted from the preview. High proficiency readers showed equivalent disruption from all nonidentical previews suggesting that they were more likely to suffer interference from the orthographic mismatch between preview and target. |
Preeti Verghese; Terence L. Tyson; Saeideh Ghahghaei; Donald C. Fletcher Depth perception and grasp in central field loss Journal Article In: Investigative Ophthalmology & Visual Science, vol. 57, no. 3, pp. 1476–1487, 2016. @article{Verghese2016, PURPOSE: We set out to determine whether individuals with central field loss benefit from using two eyes to perform a grasping task. Specifically, we tested the hypothesis that this advantage is correlated with coarse stereopsis, in addition to binocular summation indices of visual acuity, contrast sensitivity, and binocular visual field. METHODS: Sixteen participants with macular degeneration and nine age-matched controls placed pegs on a pegboard, while their eye and hand movements were recorded. Importantly, the pegboard was placed near eye height, to minimize the contribution of monocular cues to peg position. All participants performed this task binocularly and monocularly. Before the experiment, we performed microperimetry to determine the profile of field loss in each eye and the locations of eccentric fixation (if applicable). In addition, we measured both acuity and contrast sensitivity monocularly and binocularly, and stereopsis by using both a Randot test and a custom stereo test. RESULTS: Peg-placement time was significantly shorter and participants made significantly fewer errors with binocular than with monocular viewing in both the patient and control groups. Among participants with measurable stereopsis, binocular advantage in peg-placement time was significantly correlated with stereoacuity (ρ = -0.78; P = 0.003). In patients without measurable stereopsis, the binocular advantage was related significantly to the overlap in the scotoma between the two eyes (ρ = -0.81; P = 0.032). CONCLUSIONS: The high correlation between grasp performance and stereoacuity indicates that coarse stereopsis may benefit tasks of daily living for individuals with central field loss. |
Bram-Ernst Verhoef; John H. R. Maunsell Attention operates uniformly throughout the classical receptive field and the surround Journal Article In: eLife, vol. 5, pp. 1–16, 2016. @article{Verhoef2016, Shifting attention among visual stimuli at different locations modulates neuronal responses in heterogeneous ways, depending on where those stimuli lie within the receptive fields of neurons. Yet how attention interacts with the receptive-field structure of cortical neurons remains unclear. We measured neuronal responses in area V4 while monkeys shifted their attention among stimuli placed in different locations within and around neuronal receptive fields. We found that attention interacts uniformly with the spatially-varying excitation and suppression associated with the receptive field. This interaction explained the large variability in attention modulation across neurons, and a non-additive relationship among stimulus selectivity, stimulus-induced suppression and attention modulation that has not been previously described. A spatially-tuned normalization model precisely accounted for all observed attention modulations and for the spatial summation properties of neurons. These results provide a unified account of spatial summation and attention-related modulation across both the classical receptive field and the surround. |
Lorenzo Vignali; Nicole A. Himmelstoss; Stefan Hawelka; Fabio Richlan; Florian Hutzler Oscillatory brain dynamics during sentence reading: A fixation-related spectral perturbation analysis Journal Article In: Frontiers in Human Neuroscience, vol. 10, pp. 191, 2016. @article{Vignali2016, The present study investigated oscillatory brain dynamics during self-paced sentence-level processing. Participants read fully correct sentences, sentences containing a semantic violation and "sentences" in which the order of the words was randomized. At the target word level, fixations on semantically unrelated words elicited a lower-beta band (13-18 Hz) desynchronization. At the sentence level, gamma power (31-55 Hz) increased linearly for syntactically correct sentences, but not when the order of the words was randomized. In the 300-900 ms time window after sentence onsets, theta power (4-7 Hz) was greater for syntactically correct sentences as compared to sentences where no syntactic structure was preserved (random words condition). We interpret our results as conforming with a recently formulated predictive-coding framework for oscillatory neural dynamics during sentence-level language comprehension. Additionally, we discuss how our results relate to previous findings with serial visual presentation vs. self-paced reading. |
Susheel Vijayraghavan; Alex James Major; Stefan Everling Dopamine D1 and D2 Receptors Make Dissociable Contributions to Dorsolateral Prefrontal Cortical Regulation of Rule-Guided Oculomotor Behavior Journal Article In: Cell Reports, vol. 16, no. 3, pp. 805–816, 2016. @article{Vijayraghavan2016, Studies of neuromodulation of spatial short-term memory have shown that dopamine D1 receptor (D1R) stimulation in dorsolateral prefrontal cortex (DLPFC) dose-dependently modulates memory activity, whereas D2 receptors (D2Rs) selectively modulate activity related to eye movements hypothesized to encode movement feedback. We examined localized stimulation of D1Rs and D2Rs on DLPFC neurons engaged in a task involving rule representation in memory to guide appropriate eye movements toward or away from a visual stimulus. We found dissociable effects of D1R and D2R on DLPFC physiology. D1R stimulation degrades memory activity for the task rule and increases stimulus-related selectivity. In contrast, D2R stimulation affects motor activity tuning only when eye movements are made to the stimulus. Only D1R stimulation degrades task performance and increases impulsive responding. Our results suggest that D1Rs regulate rule representation and impulse control, whereas D2Rs selectively modulate eye-movement-related dynamics and not rule representation in the DLPFC. |
Laura Vilkaitė Are nonadjacent collocations processed faster? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 10, pp. 1632–1642, 2016. @article{Vilkaitem2016, Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., provide information). However, in naturally occurring language, nonadjacent collocations (provide some of the information) are equally, if not more, frequent. This raises the question of whether the same kind of processing advantage holds for nonadjacent collocations as for adjacent ones. This paper reports on an eye-tracking experiment in which participants read sentences containing either adjacent or nonadjacent collocations or matched control phrases. The results replicated the finding that collocations are processed faster than control phrases, and extended this finding to nonadjacent collocations. However, the results also suggest that the facilitative effect might be larger for adjacent collocations than for nonadjacent ones. |
Margarita Vinnikov; Robert S. Allison; Suzette Fernandes Impact of depth of field simulation on visual fatigue: Who are impacted? and how? Journal Article In: International Journal of Human-Computer Studies, vol. 91, pp. 37–51, 2016. @article{Vinnikov2016, While stereoscopic content can be compelling, it is not always comfortable for users to interact with on a regular basis. This is because the stereoscopic content on displays viewed at a short distance has been associated with different symptoms such as eye-strain, visual discomfort, and even nausea. Many of these symptoms have been attributed to cue conflict, for example between vergence and accommodation. To resolve those conflicts, volumetric and other displays have been proposed to improve the user's experience. However, these displays are expensive, unduly restrict viewing position, or provide poor image quality. As a result, commercial solutions are not readily available. We hypothesized that some of the discomfort and fatigue symptoms exhibited from viewing in stereoscopic displays may result from a mismatch between stereopsis and blur, rather than between sensed accommodation and vergence. To find factors that may support or disprove this claim, we built a real-time gaze-contingent system that simulates depth of field (DOF) that is associated with accommodation at the virtual depth of the point of regard (POR). Subsequently, a series of experiments evaluated the impact of DOF on people of different age groups (younger versus older adults). The difference between short duration discomfort and fatigue due to prolonged viewing was also examined. Results indicated that age may be a determining factor for a user's experience of DOF. There was also a major difference in a user's perception of viewing comfort during short-term exposure and prolonged viewing. 
Primarily, people did not find that the presence of DOF enhanced short-term viewing comfort, while DOF alleviated some symptoms of visual fatigue but not all. |
Renée M. Visser; Michelle I. C. Haan; Tinka Beemsterboer; Pia Haver; Merel Kindt; H. Steven Scholte Quantifying learning-dependent changes in the brain: Single-trial multivoxel pattern analysis requires slow event-related fMRI Journal Article In: Psychophysiology, vol. 53, no. 8, pp. 1117–1127, 2016. @article{Visser2016, Single-trial analysis is particularly useful for assessing cognitive processes that are intrinsically dynamic, such as learning. Studying these processes with fMRI is problematic, as the low signal-to-noise ratio of fMRI requires the averaging over multiple trials, obscuring trial-by-trial changes in neural activation. The superior sensitivity of multivoxel pattern analysis over univariate analyses has opened up new possibilities for single-trial analysis, but this may require different fMRI designs. Here, we measured fMRI and pupil dilation responses during discriminant aversive conditioning, to assess associative learning in a trial-by-trial manner. The impact of design choices was examined by varying trial spacing and trial order in a series of five experiments (total n = 66), while keeping stimulus duration constant (4.5 s). Our outcome measure was the change in similarity between neural response patterns related to two consecutive presentations of the same stimulus (within-stimulus) and between patterns related to pairs of different stimuli (between-stimulus) that shared a specific outcome (electric stimulation vs. no consequence). This trial-by-trial similarity analysis revealed clear single-trial learning curves in conditions with intermediate (8.1-12.6 s) and long (16.5-18.4 s) intervals, with effects being strongest in designs with long intervals and counterbalanced stimulus presentation. No learning curves were observed in designs with shorter intervals (1.6-6.1 s), indicating that rapid event-related designs-at present, the most common designs in fMRI research-are not suited for single-trial pattern analysis. 
These findings emphasize the importance of deciding on the type of analysis prior to data collection. |
Andrej Vlasenko; Tadas Limba; Mindaugas Kiškis; Gintarė Gulevičiūtė Research on human emotion while playing a computer game using pupil recognition technology. Journal Article In: TEM Journal, vol. 5, no. 4, pp. 417–423, 2016. @article{Vlasenko2016, The article presents the results of an experiment in which participants played an online game (poker) while a video camera recorded the diameter of each player's pupils. Pupil-diameter data were extracted from these recordings with the aid of a computer program, and diagrams of the changes in the players' pupil diameters were then plotted as a function of the game situation. The study was conducted in a real-life setting, with the players playing online poker. The results point to a connection between changes in the players' psycho-emotional state and changes in their pupil diameters, suggesting that emotional state is a critical factor affecting the operation of such systems. |
Melissa L. -H. Võ; Avigael M. Aizenman; Jeremy M. Wolfe You think you know where you looked? You better look again Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 10, pp. 1477–1481, 2016. @article{Vo2016, People are surprisingly bad at knowing where they have looked in a scene. We tested participants' ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Exp. 1) or search (Exp. 2) task. On 25% of trials, after 3 seconds of viewing the scene, participants were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After 135 trials, observers saw 10 new scenes and were asked to put 12 clicks where they thought someone else would have looked. Although observers located their own fixations more successfully than a random model, their performance was no better than when they were guessing someone else's fixations. Performance with artificial scenes was worse, though judging one's own fixations was slightly superior. Even after repeating the fixation-location task on 30 scenes immediately after scene viewing, performance was far from the prediction of an ideal observer. Memory for our own fixation locations appears to add next to nothing beyond what common sense tells us about the likely fixations of others. These results have important implications for socially important visual search tasks. For example, a radiologist might think he has looked at "everything" in an image, but eye tracking data suggest that this is not so. Such shortcomings might be avoided by providing observers with better insights of where they have looked. |
Helena X. Wang; Shlomit Yuval-Greenberg; David J. Heeger Suppressive interactions underlying visually evoked fixational saccades Journal Article In: Vision Research, vol. 118, no. 9, pp. 70–82, 2016. @article{Wang2016d, Small saccades occur frequently during fixation, and are coupled to changes in visual stimulation and cognitive state. Neurophysiologically, fixational saccades reflect neural activity near the foveal region of a continuous visuomotor map. It is well known that competitive interactions between neurons within visuomotor maps contribute to target selection for large saccades. Here we asked how such interactions in visuomotor maps shape the rate and direction of small fixational saccades. We measured fixational saccades during periods of prolonged fixation while presenting pairs of visual stimuli (parafoveal: 0.8° eccentricity; peripheral: 5° eccentricity) of various contrasts. Fixational saccade direction was biased toward locations of parafoveal stimuli but not peripheral stimuli, ~100-250 ms following stimulus onset. The rate of fixational saccades toward parafoveal stimuli (congruent saccades) increased systematically with parafoveal stimulus contrast, and was suppressed by the simultaneous presentation of a peripheral stimulus. The suppression was best characterized as a combination of two processes: a subtractive suppression of the overall fixational saccade rate and a divisive suppression of the direction bias. These results reveal the nature of suppressive interactions within visuomotor maps and constrain models of the population code for fixational saccades. |
Hongfang Wang; Eleanor Callaghan; Gerard Gooding-Williams; Craig McAllister; Klaus Kessler Rhythm makes the world go round: An MEG-TMS study on the role of right TPJ theta oscillations in embodied perspective taking Journal Article In: Cortex, vol. 75, pp. 68–81, 2016. @article{Wang2016e, While some aspects of social processing are shared between humans and other species, some aspects are not. The former seems to apply to merely tracking another's visual perspective in the world (i.e., what a conspecific can or cannot perceive), while the latter applies to perspective taking in form of mentally "embodying" another's viewpoint. Our previous behavioural research had indicated that only perspective taking, but not tracking, relies on simulating a body schema rotation into another's viewpoint. In the current study we employed Magnetoencephalography (MEG) and revealed that this mechanism of mental body schema rotation is primarily linked to theta oscillations in a wider brain network of body-schema, somatosensory and motor-related areas, with the right posterior temporo-parietal junction (pTPJ) at its core. The latter was reflected by a convergence of theta oscillatory power in right pTPJ obtained by overlapping the separately localised effects of rotation demands (angular disparity effect), cognitive embodiment (posture congruence effect), and basic body schema involvement (posture relevance effect) during perspective taking in contrast to perspective tracking. In a subsequent experiment we interfered with right pTPJ processing using dual pulse Transcranial Magnetic Stimulation (dpTMS) and observed a significant reduction of embodied processing. We conclude that right TPJ is the crucial network hub for transforming the embodied self into another's viewpoint, body and/or mind, thus, substantiating how conflicting representations between self and other may be resolved and potentially highlighting the embodied origins of high-level social cognition in general. |
Lijing Wang; Xueli He; Yingchun Chen Quantitative relationship model between workload and time pressure under different flight operation tasks Journal Article In: International Journal of Industrial Ergonomics, vol. 54, pp. 93–102, 2016. @article{Wang2016f, The goal of this study was to establish a quantitative relationship model between workload and task demand under different tasks, with time pressure set as the main factor influencing task demand and with three workload measurement parameters. The workload "redline" was also analyzed and determined from the relationship models between the workload measurement parameters and time pressure. The experiment was designed with three different tasks under different time pressures. Three workload measurement parameters (subjective evaluation workload, accuracy, and pupil diameter) and the subjective feeling threshold of time pressure were measured experimentally and then used in a comprehensive analysis for the relationship model. The data analysis showed significant differences in workload under different time pressures, but workload was not affected by the task type. With a time pressure of 0.8, participants felt a sense of time urgency and accuracy decreased to approximately 85%. The results demonstrate that subjective evaluation workload, accuracy, and pupil diameter can be used as measurement parameters for workload under different time pressures and for different tasks. Thus, for a time pressure of 0.8, an accuracy of 80%-85% was determined as the workload "redline". Linear relationships were found between subjective evaluation workload and time pressure and between pupil diameter and time pressure, and a quadratic relationship was found between accuracy and time pressure. Workload prediction can thus be performed using these relationship models between workload and time pressure. |
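The model forms reported in this abstract (linear fits of subjective workload and pupil diameter against time pressure, and a quadratic fit of accuracy against time pressure) can be sketched in a few lines. All data values below are synthetic illustrations chosen to mimic the reported pattern, not the study's measurements:

```python
import numpy as np

# Synthetic, illustrative data: subjective workload rises roughly linearly
# with time pressure, while accuracy falls off quadratically.
time_pressure = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
workload = np.array([2.1, 3.0, 4.2, 5.1, 6.0])       # assumed subjective ratings
accuracy = np.array([0.98, 0.97, 0.93, 0.84, 0.70])  # assumed proportion correct

# Linear model for workload vs. time pressure (returns [slope, intercept]).
lin_coef = np.polyfit(time_pressure, workload, 1)

# Quadratic model for accuracy vs. time pressure (returns [a, b, c]).
quad_coef = np.polyfit(time_pressure, accuracy, 2)

# Predicted accuracy at the reported "redline" time pressure of 0.8.
acc_at_redline = np.polyval(quad_coef, 0.8)
```

For these illustrative numbers, the quadratic model predicts accuracy near the 80%-85% band that the study identifies as the workload redline at a time pressure of 0.8.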
Rui Wang; Jie Wang; Jun-Yun Zhang; Xin-Yu Xie; Yu-Xiang Yang; Shu-Han Luo; Cong Yu; Wu Li Perceptual learning at a conceptual level Journal Article In: Journal of Neuroscience, vol. 36, no. 7, pp. 2238–2246, 2016. @article{Wang2016a, Humans can learn to abstract and conceptualize the shared visual features defining an object category in object learning. Therefore, learning is generalizable to transformations of familiar objects and even to new objects that differ in other physical properties. In contrast, visual perceptual learning (VPL), improvement in discriminating fine differences of a basic visual feature through training, is commonly regarded as specific and low-level learning because the improvement often disappears when the trained stimulus is simply relocated or rotated in the visual field. Such location and orientation specificity is taken as evidence for neural plasticity in primary visual cortex (V1) or improved readout of V1 signals. However, new training methods have shown complete VPL transfer across stimulus locations and orientations, suggesting the involvement of high-level cognitive processes. Here we report that VPL bears similar properties of object learning. Specifically, we found that orientation discrimination learning is completely transferrable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex. Similarly, motion direction discrimination learning is transferable between first-and second-order motion signals. These results suggest that VPL can take place at a conceptual level and generalize to stimuli with different physical properties. Our findings thus reconcile perceptual and object learning into a unified framework. |
Wuyi Wang; Shivakumar Viswanathan; Taraz Lee; Scott T. Grafton In: PLoS ONE, vol. 11, no. 7, pp. e0158465, 2016. @article{Wang2016g, Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Theta amplitude correlated negatively with BOLD activity in cortical regions associated with the default mode network and positively in ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in a fronto-parietal area that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down theta-related cognitive control in cross-modal visual attention. In sensory cortices, by contrast, theta activity shows opposing effects during cross-modal auditory attention. |
Xi Wang; Bin Cai; Yang Cao; Chen Zhou; Le Yang; Runzhong Liu; Xiaojing Long; Weicai Wang; Dingguo Gao; Baicheng Bao Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study Journal Article In: American Journal of Orthodontics and Dentofacial Orthopedics, vol. 150, no. 4, pp. 601–610, 2016. @article{Wang2016b, Introduction Currently, few methods are available to measure orthodontic treatment need and treatment outcome from the lay perspective. The objective of this study was to explore the function of an eye-tracking method to evaluate orthodontic treatment need and treatment outcome from the lay perspective as a novel and objective way when compared with traditional assessments. Methods The scanpaths of 88 laypersons observing the repose and smiling photographs of normal subjects and pretreatment and posttreatment malocclusion patients were recorded by an eye-tracking device. The total fixation time and the first fixation time on the areas of interest (eyes, nose, and mouth) for each group of faces were compared and analyzed using mixed-effects linear regression and a support vector machine. The aesthetic component of the Index of Orthodontic Treatment Need was used to categorize treatment need and outcome levels to determine the accuracy of the support vector machine in identifying these variables. Results Significant deviations in the scanpaths of laypersons viewing pretreatment smiling faces were noted, with less fixation time (P <0.05) and later attention capture (P <0.05) on the eyes, and more fixation time (P <0.05) and earlier attention capture (P <0.05) on the mouth than for the scanpaths of laypersons viewing normal smiling subjects. The same results were obtained when comparing posttreatment smiling patients, with less fixation time (P <0.05) and later attention capture on the eyes (P <0.05), and more fixation time (P <0.05) and earlier attention capture on the mouth (P <0.05). 
The pretreatment repose faces exhibited an earlier attention capture on the mouth than did the normal subjects (P <0.05) and posttreatment patients (P <0.05). Linear support vector machine classification showed accuracies of 97.2% and 93.4% in distinguishing pretreatment patients from normal subjects (treatment need), and pretreatment patients from posttreatment patients (treatment outcome), respectively. Conclusions The eye-tracking device was able to objectively quantify the effect of malocclusion on facial perception and the impact of orthodontic treatment on malocclusion from the lay perspective. The support vector machine for classification of selected features achieved high accuracy of judging treatment need and treatment outcome. This approach may represent a new method for objectively evaluating orthodontic treatment need and treatment outcome from the perspective of laypersons. |
Zhiwei Wang; Kristina Zeljic; Qinying Jiang; Yong Gu; Wei Wang; Zheng Wang Dynamic network communication in the human functional connectome predicts perceptual variability in visual illusion Journal Article In: Cerebral Cortex, vol. 28, no. 1, pp. 1–15, 2016. @article{Wang2016h, |
Christopher M. Warren; Eran Eldar; Ruud L. Brink; Klodiana-Daphne Tona; Nic J. Wee; Eric J. Giltay; Martijn S. Noorden; Jos A. Bosch; Robert C. Wilson; Jonathan D. Cohen; Sander Nieuwenhuis Catecholamine-mediated increases in gain enhance the precision of cortical representations Journal Article In: Journal of Neuroscience, vol. 36, no. 21, pp. 5699–5708, 2016. @article{Warren2016, Neurophysiological evidence suggests that neuromodulators, such as norepinephrine and dopamine, increase neural gain in target brain areas. Computational models and prominent theoretical frameworks indicate that this should enhance the precision of neural representations, but direct empirical evidence for this hypothesis is lacking. In two functional MRI studies, we examine the effect of baseline catecholamine levels (as indexed by pupil diameter and manipulated pharmacologically) on the precision of object representations in the human ventral temporal cortex using angular dispersion, a powerful, multivariate metric of representational similarity (precision). We first report the results of computational model simulations indicating that increasing catecholaminergic gain should reduce the angular dispersion, and thus increase the precision, of object representations from the same category, as well as reduce the angular dispersion of object representations from distinct categories when distinct-category representations overlap. In Study 1 (N = 24), we show that angular dispersion covaries with pupil diameter, an index of baseline catecholamine levels. In Study 2 (N = 24), we manipulate catecholamine levels and neural gain using the norepinephrine transporter blocker atomoxetine and demonstrate consistent, causal effects on angular dispersion and brain-wide functional connectivity. 
Despite the use of very different methods of examining the effect of baseline catecholamine levels, our results show a striking convergence and demonstrate that catecholamines increase the precision of neural representations. |
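An angular-dispersion style precision metric of the kind this abstract describes can be computed from multivoxel patterns as the mean angle between each pattern vector and the set's mean direction. This is a minimal sketch with synthetic data and one plausible formalization; the exact metric used in the study may differ:

```python
import numpy as np

def angular_dispersion(patterns):
    """Mean angle (radians) between each pattern and the mean direction.

    patterns: (n_trials, n_voxels) array of response patterns.
    Smaller values indicate more tightly clustered (more precise) patterns.
    """
    unit = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    mean_dir = unit.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    cosines = np.clip(unit @ mean_dir, -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))

# Synthetic demonstration: tighter clustering (as hypothesized under higher
# catecholaminergic gain) should yield a smaller angular dispersion.
rng = np.random.default_rng(0)
base = rng.normal(size=50)                        # shared pattern direction
tight = base + 0.1 * rng.normal(size=(20, 50))    # low-noise patterns
loose = base + 1.0 * rng.normal(size=(20, 50))    # high-noise patterns
```

On this synthetic example, `angular_dispersion(tight)` is smaller than `angular_dispersion(loose)`, mirroring the predicted gain-related increase in representational precision.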
David E. Warren; Daniel Tranel; Melissa C. Duff Impaired acquisition of new words after left temporal lobectomy despite normal fast-mapping behavior Journal Article In: Neuropsychologia, vol. 80, pp. 165–175, 2016. @article{Warren2016a, Word learning has been proposed to rely on unique brain regions including the temporal lobes, and the left temporal lobe appears to be especially important. In order to investigate the role of the left temporal lobe in word learning under different conditions, we tested whether patients with left temporal lobectomies (N=6) could learn novel words using two distinct formats. Previous research has shown that word learning in contrastive fast mapping conditions may rely on different neural substrates than explicit encoding conditions (Sharon et al., 2011). In the current investigation, we used a previously reported word learning task that implemented two distinct study formats (Warren and Duff, 2014): a contrastive fast mapping condition in which a picture of a novel item was displayed beside a picture of a familiar item while the novel item's name was presented aurally ("Click on the numbat."); and an explicit encoding (i.e., control) condition in which a picture of a novel item was displayed while its name was presented aurally ("This is a numbat."). After a delay, learning of the novel words was evaluated with memory tests including three-alternative forced-choice recognition, free recall, cued recall, and familiarity ratings. During the fast-mapping study condition both the left temporal lobectomy and healthy comparison groups performed well, but at test only the comparison group showed evidence of novel word learning. Our findings indicate that unilateral resection of the left temporal lobe including the hippocampus and temporal pole can severely impair word learning, and that fast-mapping study conditions do not promote subsequent word learning in temporal lobectomy populations. |
Ronald Berg; Ariel Zylberberg; Roozbeh Kiani; Michael N. Shadlen; Daniel M. Wolpert Confidence is the bridge between multi-stage decisions Journal Article In: Current Biology, vol. 26, no. 23, pp. 3157–3168, 2016. @article{Berg2016, Demanding tasks often require a series of decisions to reach a goal. Recent progress in perceptual decision-making has served to unite decision accuracy, speed, and confidence in a common framework of bounded evidence accumulation, furnishing a platform for the study of such multi-stage decisions. In many instances, the strategy applied to each decision, such as the speed-accuracy trade-off, ought to depend on the accuracy of the previous decisions. However, as the accuracy of each decision is often unknown to the decision maker, we hypothesized that subjects may carry forward a level of confidence in previous decisions to affect subsequent decisions. Subjects made two perceptual decisions sequentially and were rewarded only if they made both correctly. The speed and accuracy of individual decisions were explained by noisy evidence accumulation to a terminating bound. We found that subjects adjusted their speed-accuracy setting by elevating the termination bound on the second decision in proportion to their confidence in the first. The findings reveal a novel role for confidence and a degree of flexibility, hitherto unknown, in the brain's ability to rapidly and precisely modify the mechanisms that control the termination of a decision. |
Ruud L. Van Den Brink; Peter R. Murphy; Sander Nieuwenhuis Pupil diameter tracks lapses of attention Journal Article In: PLoS ONE, vol. 11, no. 10, pp. e0165274, 2016. @article{VanDenBrink2016, Our ability to sustain attention for prolonged periods of time is limited. Studies on the relationship between lapses of attention and psychophysiological markers of attentional state, such as pupil diameter, have yielded contradicting results. Here, we investigated the relationship between tonic fluctuations in pupil diameter and performance on a demanding sustained attention task. We found robust linear relationships between baseline pupil diameter and several measures of task performance, suggesting that attentional lapses tended to occur when pupil diameter was small. However, these observations were primarily driven by the joint effects of time-on-task on baseline pupil diameter and task performance. The linear relationships disappeared when we statistically controlled for time-on-task effects and were replaced by consistent inverted U-shaped relationships between baseline pupil diameter and each of the task performance measures, such that most false alarms and the longest and most variable response times occurred when pupil diameter was both relatively small and large. Finally, we observed strong linear relationships between the temporal derivative of pupil diameter and task performance measures, which were largely independent of time-on-task. Our results help to reconcile contradicting findings in the literature on pupil-linked changes in attentional state, and are consistent with the adaptive gain theory of locus coeruleus-norepinephrine function. Moreover, they suggest that the derivative of baseline pupil diameter is a potentially useful psychophysiological marker that could be used in the on-line prediction and prevention of attentional lapses. |
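The two analyses this abstract describes, taking the temporal derivative of the baseline pupil trace and testing for an inverted-U (quadratic) relationship between pupil diameter and performance, can be sketched briefly. The pupil trace and performance values below are synthetic illustrations, not the study's data:

```python
import numpy as np

# (1) Temporal derivative of a synthetic baseline pupil trace.
t = np.linspace(0, 10, 101)                  # time in seconds
pupil = 4.0 + 0.5 * np.sin(0.6 * t)          # assumed pupil diameter trace (mm)
pupil_derivative = np.gradient(pupil, t)     # mm per second

# (2) Inverted-U check: fit a quadratic to performance vs. pupil diameter;
# a negative quadratic coefficient indicates an inverted-U relationship.
diameter = np.linspace(3.0, 5.0, 41)
rng = np.random.default_rng(1)
performance = -(diameter - 4.0) ** 2 + 1.0 + 0.01 * rng.normal(size=diameter.size)
a, b, c = np.polyfit(diameter, performance, 2)
inverted_u = a < 0
```

For this synthetic trace, performance peaks at intermediate diameters, so the fitted quadratic term is negative, the signature of the inverted-U pattern the study reports after controlling for time-on-task.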
Ruud L. Brink; Thomas Pfeffer; Christopher M. Warren; Peter R. Murphy; Klodiana-Daphne Tona; Nic J. Wee; Eric J. Giltay; Martijn S. Noorden; Serge A. R. B. Rombouts; Tobias H. Donner; Sander Nieuwenhuis Catecholaminergic neuromodulation shapes intrinsic MRI functional connectivity in the human brain Journal Article In: Journal of Neuroscience, vol. 36, no. 30, pp. 7865–7876, 2016. @article{Brink2016, The brain commonly exhibits spontaneous (i.e., in the absence of a task) fluctuations in neural activity that are correlated across brain regions. It has been established that the spatial structure, or topography, of these intrinsic correlations is in part determined by the fixed anatomical connectivity between regions. However, it remains unclear which factors dynamically sculpt this topography as a function of brain state. Potential candidate factors are subcortical catecholaminergic neuromodulatory systems, such as the locus ceruleus-norepinephrine system, which send diffuse projections to most parts of the forebrain. Here, we systematically characterized the effects of endogenous central neuromodulation on correlated fluctuations during rest in the human brain. Using a double-blind placebo-controlled crossover design, we pharmacologically increased synaptic catecholamine levels by administering atomoxetine, an NE transporter blocker, and examined the effects on the strength and spatial structure of resting-state MRI functional connectivity. First, atomoxetine reduced the strength of inter-regional correlations across three levels of spatial organization, indicating that catecholamines reduce the strength of functional interactions during rest. 
Second, this modulatory effect on intrinsic correlations exhibited a substantial degree of spatial specificity: the decrease in functional connectivity showed an anterior–posterior gradient in the cortex, depended on the strength of baseline functional connectivity, and was strongest for connections between regions belonging to distinct resting-state networks. Thus, catecholamines reduce intrinsic correlations in a spatially heterogeneous fashion. We conclude that neuromodulation is an important factor shaping the topography of intrinsic functional connectivity. |
Emiel Hoven; Franziska Hartung; Michael Burke; Roel M. Willems Individual differences in sensitivity to style during literary reading: Insights from eye-tracking Journal Article In: Collabra, vol. 2, no. 1, pp. 1–16, 2016. @article{Hoven2016, Style is an important aspect of literature, and stylistic deviations are sometimes labeled foregrounded, since their manner of expression deviates from the stylistic default. Russian Formalists have claimed that foregrounding increases processing demands and therefore causes slower reading – an effect called retardation. We tested this claim experimentally by having participants read short literary stories while measuring their eye movements. Our results confirm that readers indeed read slower and make more regressions towards foregrounded passages as compared to passages that are not foregrounded. A closer look, however, reveals significant individual differences in sensitivity to foregrounding. Some readers in fact do not slow down at all when reading foregrounded passages. The slowing down effect for literariness was related to a slowing down effect for high perplexity (unexpected) words: those readers who slowed down more during literary passages also slowed down more during high perplexity words, even though no correlation between literariness and perplexity existed in the stories. We conclude that individual differences play a major role in processing of literary texts and argue for accounts of literary reading that focus on the interplay between reader and text. |
Lotje Linden; Françoise Vitu On the optimal viewing position for object processing Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 2, pp. 602–617, 2016. @article{Linden2016, Numerous studies have shown that a visually presented word is processed most easily when participants initially fixate just to the left of the word's center. Fixating on this optimal viewing position (OVP) results in shorter response times and a lower probability of making additional within-word refixations (OVP effects), but also longer initial-fixation durations (an inverted-OVP or I-OVP effect), as compared to initially fixating at the beginning or the end of the word. Thus, typical curves are u-shaped (or inverted-u-shaped), with a leftward bias. Most researchers explain the u-shape in terms of visual constraints, and the leftward bias in terms of language constraints. Previous studies have demonstrated that (I-)OVP effects are not specific to words, but generalize to object viewing. We further investigated this by comparing the strength and (a)symmetry of (I-)OVP effects for words and objects. To this purpose, we gave participants an object- versus word-naming task in which we manipulated the position at which they initially fixated the stimulus (i.e., a line drawing or the written name of an object). Our results showed that object viewing, just as word viewing, resulted in u-shaped (I-)OVP curves. However, the effect was weaker than for words. Furthermore, for words, the curves were biased to the left, whereas they were symmetrical for objects. This might indicate that part of the (I-)OVP effect for words is language specific, and that (I-)OVP effects for objects are a purer measure of the effect of visual constraints. |
Rosanne M. Diepen; Lee M. Miller; Ali Mazaheri; Joy J. Geng The role of alpha activity in spatial and feature-based attention. Journal Article In: eNeuro, vol. 3, no. 5, pp. 1–11, 2016. @article{Diepen2016, Modulations in alpha oscillations (~10 Hz) are typically studied in the context of anticipating upcoming stimuli. Alpha power decreases in sensory regions processing upcoming targets compared to regions processing distracting input, thereby likely facilitating processing of relevant information while suppressing irrelevant input. In this electroencephalography study using healthy human volunteers, we examined whether modulations in alpha power also occur after the onset of a bilaterally presented target and distractor. Spatial attention was manipulated through spatial cues and feature-based attention through adjusting the color-similarity of distractors to the target. Consistent with previous studies, we found that informative spatial cues induced a relative decrease of pretarget alpha power at occipital electrodes contralateral to the expected target location. Interestingly, this pattern reemerged relatively late (300–750 ms) after stimulus onset, suggesting that lateralized alpha reflects not only preparatory attention, but also ongoing attentive stimulus processing. Uninformative cues (i.e., conveying no information about the spatial location of the target) resulted in an interaction between spatial attention and feature-based attention in post-target alpha lateralization. When the target was paired with a low-similarity distractor, post-target alpha was lateralized (500–900 ms). Crucially, the lateralization was absent when target selection was ambiguous because the distractor was highly similar to the target. Instead, during this condition, midfrontal theta was increased, indicative of reactive conflict resolution. 
Behaviorally, the degree of alpha lateralization was negatively correlated with the reaction time distraction cost induced by target–distractor similarity. These results suggest a pivotal role for poststimulus alpha lateralization in protecting sensory processing of target information. |
Jelle A. Dijk; Benjamin Haas; Christina Moutsiana; D. Samuel Schwarzkopf Intersession reliability of population receptive field estimates Journal Article In: NeuroImage, vol. 143, pp. 293–303, 2016. @article{Dijk2016, Population receptive field (pRF) analysis is a popular method to infer spatial selectivity of voxels in visual cortex. However, it remains largely untested how stable pRF estimates are over time. Here we measured the intersession reliability of pRF parameter estimates for the central visual field and near periphery, using a combined wedge and ring stimulus containing natural images. Sixteen healthy human participants completed two scanning sessions separated by 10–114 days. Individual participants showed very similar visual field maps for V1-V4 on both sessions. Intersession reliability for eccentricity and polar angle estimates was close to ceiling for most visual field maps (r>.8 for V1-3). PRF size and cortical magnification (CMF) estimates showed strong but lower overall intersession reliability (r≈.4–.6). Group level results for pRF size and CMF were highly similar between sessions. Additional control experiments confirmed that reliability does not depend on the carrier stimulus used and that reliability for pRF size and CMF is high for sessions acquired on the same day (r>.6). Our results demonstrate that pRF mapping is highly reliable across sessions. |
Nathalie Van Humbeeck; Tom Putzeys; Johan Wagemans Apparent motion suppresses responses in early visual cortex: A population code model Journal Article In: PLoS Computational Biology, vol. 12, no. 10, pp. e1005155, 2016. @article{VanHumbeeck2016, Two stimuli alternately presented at different locations can evoke a percept of a stimulus continuously moving between the two locations. The neural mechanism underlying this apparent motion (AM) is thought to be increased activation of primary visual cortex (V1) neurons tuned to locations along the AM path, although evidence remains inconclusive. AM masking, which refers to the reduced detectability of stimuli along the AM path, has been taken as evidence for AM-related V1 activation. AM-induced neural responses are thought to interfere with responses to physical stimuli along the path and as such impair the perception of these stimuli. However, AM masking can also be explained by predictive coding models, predicting that responses to stimuli presented on the AM path are suppressed when they match the spatio-temporal prediction of a stimulus moving along the path. In the present study, we find that AM has a distinct effect on the detection of target gratings, limiting the maximum performance at high contrast levels. This masking is strongest when the target orientation is identical to the orientation of the inducers. We developed a V1-like population code model of early visual processing, based on a standard contrast normalization model. We find that AM-related activation in early visual cortex is too small to either cause masking or to be perceived as motion. Our model instead predicts strong suppression of early sensory responses during AM, consistent with the theoretical framework of predictive coding. |
Anouk Mariette Loon; Johannes J. Fahrenfort; Bauke Velde; Philipp B. Lirk; Nienke C. C. Vulink; Markus W. Hollmann; H. Steven Scholte; Victor A. F. Lamme NMDA receptor antagonist ketamine distorts object recognition by reducing feedback to early visual cortex Journal Article In: Cerebral Cortex, vol. 26, no. 5, pp. 1986–1996, 2016. @article{Loon2016, It is a well-established fact that top-down processes influence neural representations in lower-level visual areas. Electrophysiological recordings in monkeys as well as theoretical models suggest that these top-down processes depend on NMDA receptor functioning. However, this underlying neural mechanism has not been tested in humans. We used fMRI multivoxel pattern analysis to compare the neural representations of ambiguous Mooney images before and after they were recognized with their unambiguous grayscale version. Additionally, we administered ketamine, an NMDA receptor antagonist, to interfere with this process. Our results demonstrate that after recognition, the pattern of brain activation elicited by a Mooney image is more similar to that of its easily recognizable grayscale version than to the pattern evoked by the identical Mooney image before recognition. Moreover, recognition of Mooney images decreased mean response; however, neural representations of separate images became more dissimilar. So from the neural perspective, unrecognizable Mooney images all “look the same”, whereas recognized Mooneys look different. We observed these effects in posterior fusiform part of lateral occipital cortex and in early visual cortex. Ketamine distorted these effects of recognition, but in early visual cortex only. This suggests that top-down processes from higher- to lower-level visual areas might operate via an NMDA pathway. |
Leendert Maanen; Laura Fontanesi; Guy E. Hawkins; Birte U. Forstmann Striatal activation reflects urgency in perceptual decision making Journal Article In: NeuroImage, vol. 139, pp. 294–303, 2016. @article{Maanen2016, Deciding between multiple courses of action often entails an increasing need to do something as time passes - a sense of urgency. This notion of urgency is not incorporated in standard theories of speeded decision making that assume information is accumulated until a critical fixed threshold is reached. Yet, it is hypothesized in novel theoretical models of decision making. In two experiments, we investigated the behavioral and neural evidence for an “urgency signal” in human perceptual decision making. Experiment 1 found that as the duration of the decision making process increased, participants made a choice based on less evidence for the selected option. Experiment 2 replicated this finding, and additionally found that variability in this effect across participants covaried with activation in the striatum. We conclude that individual differences in susceptibility to urgency are reflected by striatal activation. By dynamically updating a response threshold, the striatum is involved in signaling urgency in humans. |
Daan R. Renswoude; S. P. Johnson; Maartje E. J. Raijmakers; Ingmar Visser Do infants have the horizontal bias? Journal Article In: Infant Behavior and Development, vol. 44, pp. 38–48, 2016. @article{Renswoude2016, A robust set of studies show that adults make more horizontal than vertical and oblique saccades, while scanning real-world scenes. In this paper we study the horizontal bias in infants. The directions of eye movements were calculated for 41 infants (M = 8.40 months |
Annelinde R. E. Vandenbroucke; Johannes J. Fahrenfort; Julia D. I. Meuwese; H. Steven Scholte; Victor A. F. Lamme Prior knowledge about objects determines neural color representation in human visual cortex Journal Article In: Cerebral Cortex, vol. 26, no. 4, pp. 1401–1408, 2016. @article{Vandenbroucke2016, To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural response of veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision. |
Margreet Vogelzang; Petra Hendriks; Hedderik Rijn Pupillary responses reflect ambiguity resolution in pronoun processing Journal Article In: Language, Cognition and Neuroscience, vol. 31, no. 7, pp. 876–885, 2016. @article{Vogelzang2016, The resolution of ambiguous pronouns is influenced by the preceding linguistic discourse. This raises the question whether the processing of an object pronoun is also influenced by the preceding sentential subject. In an experiment with Dutch adults, we recorded pupil dilation as a measure of the cognitive effort involved in resolving pronominal versus full noun phrase (NP) subjects and pronominal versus reflexive objects. Our results indicate that more effort is needed to resolve a pronominal subject or object compared to a less ambiguous full NP subject or reflexive object. These results support the hypothesis that the ambiguity of a referring expression influences processing. Contrary to our expectations, no evidence was found that the form of the subject influences the processing of a subsequent pronominal object. We conclude that pupillary responses reflect ambiguity resolution in pronoun processing, and that the process to resolve pronouns commences as soon as the pronoun is encountered. |
Dimitris Voudouris; Alexander Goettker; Stefanie Mueller; Katja Fiehler Kinesthetic information facilitates saccades towards proprioceptive-tactile targets Journal Article In: Vision Research, vol. 122, pp. 73–80, 2016. @article{Voudouris2016, Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. 
In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets. |
Dimitris Voudouris; Jeroen B. J. Smeets; Eli Brenner Fixation biases towards the index finger in almost-natural grasping Journal Article In: PLoS ONE, vol. 11, no. 1, pp. e0146864, 2016. @article{Voudouris2016a, We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger's contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias. |
Anita E. Wagner; Paolo Toffanin; Deniz Baskent The timing and effort of lexical access in natural and degraded speech Journal Article In: Frontiers in Psychology, vol. 7, pp. 398, 2016. @article{Wagner2016, Understanding speech is effortless in ideal situations, and although adverse conditions, such as those caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners' ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected due to their substantial role in listeners' quick limitation of the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, particularly in the presence of mismatching cues. Signal degradation reduced listeners' ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing, and different sources of increased mental effort. 
We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why, in ideal situations, perceiving speech is an undemanding task. Degradation of the signal or the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has consequences for lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses. |
Basil Wahn; Daniel P. Ferris; W. David Hairston; Peter König Pupil sizes scale with attentional load and task experience in a multiple object tracking task Journal Article In: PLoS ONE, vol. 11, no. 12, pp. e0168087, 2016. @article{Wahn2016a, Previous studies have related changes in attentional load to pupil size modulations. However, studies relating changes in attentional load and task experience on a finer scale to pupil size modulations are scarce. Here, we investigated how these changes affect pupil sizes. To manipulate attentional load, participants covertly tracked between zero and five objects among several randomly moving objects on a computer screen. To investigate effects of task experience, the experiment was conducted on three consecutive days. We found that pupil sizes increased with each increment in attentional load. Across days, we found systematic pupil size reductions. We compared the model fit for predicting pupil size modulations using attentional load, task experience, and task performance as predictors. We found that a model which included attentional load and task experience as predictors had the best model fit while adding performance as a predictor to this model reduced the overall model fit. Overall, results suggest that pupillometry provides a viable metric for precisely assessing attentional load and task experience in visuospatial tasks. |
Basil Wahn; Peter König In: Frontiers in Integrative Neuroscience, vol. 10, pp. 13, 2016. @article{Wahn2016, Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration is dependent on attentional resources. Previous research suggested that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual task paradigm: Participants performed a self-terminated visual search task and a localization task in either separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The tasks interfered considerably. However, participants performed the visual search task faster when the localization task was performed in the tactile modality in comparison to performing both tasks within the visual modality. This finding indicates that tasks performed in separate sensory modalities rely in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization partly rely on distinct attentional resources, and that optimal visuotactile integration is not dependent on attentional resources. |
Stephen C. Walenchok; Michael C. Hout; Stephen D. Goldinger Implicit object naming in visual search: Evidence from phonological competition Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 8, pp. 2633–2654, 2016. @article{Walenchok2016, During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for 1 or 3 potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. |
W. C. Walker; W. Carne; L. M. Franke; T. Nolen; S. D. Dikmen; D. X. Cifu; K. Wilson; H. G. Belanger; R. Williams In: Brain Injury, vol. 30, no. 12, pp. 1469–1480, 2016. @article{Walker2016, PRIMARY OBJECTIVES To establish and comprehensively evaluate a large cohort of US veterans who served in recent military conflicts in order to better understand possible chronic and late-life effects of mild traumatic brain injury (mTBI), including those that may stem from neurodegeneration. RESEARCH DESIGN Cross-sectional and prospective longitudinal. METHODS AND PROCEDURES Inclusion criteria are prior combat exposure and deployment(s) in Operation Enduring Freedom, Operation Iraqi Freedom or one of their follow-on conflicts (collectively OEF/OIF). Effects of mTBI will be assessed by enrolling participants across the entire spectrum of mTBI, from entirely negative to many mTBIs. Longitudinal assessments consist of in-person comprehensive testing at least every 5 years, with interval annual telephonic testing. The primary outcome is the composite score on the NIH Toolbox neuropsychological test battery. Assessments also include structured interviews, questionnaires, traditional neuropsychological testing, motor, sensory and vestibular functions, neuroimaging, electrophysiology, genotypes and biomarkers. MAIN OUTCOMES AND RESULTS The authors fully describe the study methods and measures and report demographic and exposure characteristics from the early portion of the cohort of OEF/OIF veterans. CONCLUSIONS This centrepiece observational study of the Chronic Effects of Neurotrauma Consortium (CENC) is successfully launched and, within several years, should provide fertile data to begin investigating its aims. |
Thomas S. A. Wallis; M. Bethge; Felix A. Wichmann Testing models of peripheral encoding using metamerism in an oddity paradigm Journal Article In: Journal of Vision, vol. 16, no. 2, pp. 1–30, 2016. @article{Wallis2016, Most of the visual field is peripheral, and the periphery encodes visual input with less fidelity compared to the fovea. What information is encoded, and what is lost in the visual periphery? A systematic way to answer this question is to determine how sensitive the visual system is to different kinds of lossy image changes compared to the unmodified natural scene. If modified images are indiscriminable from the original scene, then the information discarded by the modification is not important for perception under the experimental conditions used. We measured the detectability of modifications of natural image structure using a temporal three-alternative oddity task, in which observers compared modified images to original natural scenes. We consider two lossy image transformations, Gaussian blur and Portilla and Simoncelli texture synthesis. Although our paradigm demonstrates metamerism (physically different images that appear the same) under some conditions, in general we find that humans can be capable of impressive sensitivity to deviations from natural appearance. The representations we examine here do not preserve all the information necessary to match the appearance of natural scenes in the periphery. |
Aiping Wang; Junmo Yeon; Wei Zhou; Hua Shu; Ming Yan Cross-language parafoveal semantic processing: Evidence from Korean-Chinese bilinguals Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 1, pp. 285–290, 2016. @article{Wang2016, In the present study, we aimed at testing cross-language cognate and semantic preview effects. We tested how native Korean readers who learned Chinese as a second language make use of the parafoveal information during the reading of Chinese sentences. There were 3 types of Korean preview words: cognate translations of the Chinese target words, semantically related noncognate words, and unrelated words. Together with a highly significant cognate preview effect, more critically, we also observed reliable facilitation in processing of the target word from the semantically related previews in all fixation measures. Results from the present study provide first evidence for semantic processing from parafoveally presented Korean words and for cross-language parafoveal semantic processing. |
Chin-An Wang; Hailey McInnis; Donald C. Brien; Giovanna Pari; Douglas P. Munoz Disruption of pupil size modulation correlates with voluntary motor preparation deficits in Parkinson's disease Journal Article In: Neuropsychologia, vol. 80, pp. 176–184, 2016. @article{Wang2016c, Pupil size is an easy-to-measure, non-invasive method to index various cognitive processes. Although a growing number of studies have incorporated measures of pupil size into clinical investigation, there have only been limited studies in Parkinson's disease (PD). Convergent evidence has suggested PD patients exhibit cognitive impairment at or soon after diagnosis. Here, we used an interleaved pro- and anti-saccade paradigm while monitoring pupil size with saccadic eye movements to examine the relationship between executive function deficits and pupil size in PD patients. Subjects initially fixated a central cue, the color of which instructed them to either look at a peripheral stimulus automatically (pro-saccade) or suppress the automatic response and voluntarily look in the opposite direction of the stimulus (anti-saccade). We hypothesized that deficits of voluntary control should be revealed not only on saccadic but also on pupil responses because of the recently suggested link between the saccade and pupil control circuits. In elderly controls, pupil size was modulated by task preparation, showing larger dilation prior to stimulus appearance in preparation for correct anti-saccades, compared to correct pro-saccades, or erroneous pro-saccades made in the anti-saccade condition. Moreover, the size of pupil dilation correlated negatively with anti-saccade reaction times. However, this profile of pupil size modulation was significantly blunted in PD patients, reflecting dysfunctional circuits for anti-saccade preparation. 
Our results demonstrate disruptions of modulated pupil responses by voluntary movement preparation in PD patients, highlighting the potential of using low-cost pupil size measurement to examine executive function deficits in early PD. |
Massimo Turatto; David Pascucci Short-term and long-term plasticity in the visual-attention system: Evidence from habituation of attentional capture Journal Article In: Neurobiology of Learning and Memory, vol. 130, pp. 159–169, 2016. @article{Turatto2016, Attention is known to be crucial for learning and to regulate activity-dependent brain plasticity. Here we report the opposite scenario, with plasticity affecting the onset-driven automatic deployment of spatial attention. Specifically, we showed that attentional capture is subject to habituation, a fundamental form of plasticity consisting in a response decrement to repeated stimulations. Participants performed a visual discrimination task with focused attention, while being occasionally exposed to a distractor consisting of a high-luminance peripheral onset. With practice, short-term and long-term habituation of attentional capture emerged, making the visual-attention system fully immune to distraction. Furthermore, spontaneous recovery of attentional capture was found when the distractor was temporarily removed. Capture, however, once habituated was surprisingly resistant to spontaneous recovery, taking from several minutes to days to recover. The results suggest that the mechanisms subserving exogenous attentional orienting are subject to profound and enduring plastic changes based on previous experience, and that habituation can impact high-order cognitive functions. |
Alexandra Ţurcan; Ruth Filik An eye-tracking investigation of written sarcasm comprehension: The roles of familiarity and context Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 42, no. 12, pp. 1867–1893, 2016. @article{Turcan2016, This article addresses a current theoretical debate between the standard pragmatic model, the graded salience hypothesis, and the implicit display theory, by investigating the roles of the context and of the properties of the sarcastic utterance itself in the comprehension of a sarcastic remark. Two eye-tracking experiments were conducted where we manipulated the speaker's expectation in the context and the familiarity of the sarcastic remark. The results of the first eye-tracking study showed that literal comments were read faster than unfamiliar sarcastic comments, regardless of whether an explicit expectation was present in the context. The results of the second eye-tracking study indicated an early processing difficulty for unfamiliar sarcastic comments, but not for familiar sarcastic comments. Later reading time measures indicated a general difficulty for sarcastic comments. Overall, results seem to suggest that the familiarity of the utterance does indeed affect the time course of sarcasm processing (supporting the graded salience hypothesis), although there is no evidence that making the speaker's expectation explicit in the context affects it as well (thus failing to support the implicit display theory). |