EyeLink Clinical and Oculomotor Eye-Tracking Publications
EyeLink clinical and oculomotor research publications up to 2023 (with some early 2024s) are listed below by year. You can search the publications using keywords such as Saccadic Adaptation, Schizophrenia, Nystagmus, etc. You can also search for individual author names, and limit the search by year (choose a year, then click the Search button). If we have missed any EyeLink clinical or oculomotor articles, please email us!
2020
Francesco Cimminella; Sergio Della Sala; Moreno I. Coco Extra-foveal processing of object semantics guides early overt attention during visual search Journal Article In: Attention, Perception, and Psychophysics, vol. 82, no. 2, pp. 655–670, 2020. @article{Cimminella2020, Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role on the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the role of object-to-object semantic relatedness and visual saliency on search responses and eye-movement behaviour across arrays of increasing size (3, 5, 7). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated than related to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects manifested already during the very first fixation after array onset, were consistently found for increasing set sizes, and were independent of low-level visual saliency, which did not play any role. We conclude that object semantics can be extracted early in extra-foveal vision and capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention which assume that overt attention is guided by the visual appearance of stimuli, rather than by their semantics. |
Kathy Conklin; Sara Alotaibi; Ana Pellicer-Sánchez; Laura Vilkaitė-Lozdienė What eye-tracking tells us about reading-only and reading-while-listening in a first and second language Journal Article In: Second Language Research, vol. 36, no. 3, pp. 257–276, 2020. @article{Conklin2020, Reading-while-listening has been shown to be advantageous in second language learning. However, research to date has not addressed how the addition of auditory input changes reading itself. Identifying how reading differs in reading-while-listening and reading-only might help explain the advantages associated with the former. The aim of the present study was to provide a detailed description of reading patterns with and without audio. To address this, we asked first language (L1) and second language (L2) speakers to read two passages (one in a reading-only mode and another in a reading-while-listening mode) while their eye movements were monitored. In reading-only, L2 readers had more and longer fixations (i.e. slower reading) than L1 readers. In reading-while-listening, eye-movement patterns were very similar in the L1 and L2. In general, neither group of participants fixated the word that they were hearing, although the L2 readers' eye movements were more aligned to the auditory input. When reading and listening were not aligned, both groups' eye movements generally preceded the audio. However, L2 readers had more cases where their fixations lagged behind the audio. We consider why reading slightly ahead of the audio could explain some of the benefits attributed to reading-while-listening contexts. |
Kathy Conklin; Gareth Carrol Words go together like ‘Bread and Butter': The rapid, automatic acquisition of lexical patterns Journal Article In: Applied Linguistics, pp. 1–23, 2020. @article{Conklin2020a, While it is possible to express the same meaning in different ways (‘bread and butter' versus ‘butter and bread'), we tend to say things in the same way. As much as half of spoken discourse is made up of formulaic language or linguistic patterns. Despite its prevalence, little is known about how the processing system treats novel patterns and how rapidly a sensitivity to them arises in natural contexts. To address this, we monitored native English speakers' eye movements when reading short stories containing existing (conventional) patterns (‘time and money'), seen once, and novel patterns (‘wires and pipes'), seen one to five times. Subsequently, readers saw both existing and novel phrases in the reversed order (‘money and time'; ‘pipes and wires'). In four to five exposures, much like existing lexical patterns, novel ones demonstrate a processing advantage. Sensitivity to lexical patterns—including the co-occurrence of lexical items and the order in which they occur—arises rapidly and automatically during natural reading. This has implications for language learning and is in line with usage-based models of language processing. |
Carla Contemori; Paola E. Dussias The processing of subject pronouns in highly proficient L2 speakers of English Journal Article In: Glossa: a journal of general linguistics, vol. 5, no. 1, pp. 1–19, 2020. @article{Contemori2020, Studies on second language (L2) anaphora resolution have mainly focused on learners of null-subject languages, demonstrating that L2 speakers show residual indeterminacy in the L2 referential choice, even at the highest levels of proficiency. On the other hand, studies on anaphora resolution in L2 learners of non-null-subject languages have shown conflicting results, indicating that L2 speakers may process referential expressions in the L2 like native speakers. Using a visual world paradigm task, we test the online processing of pronouns in highly proficient L2 speakers of English whose L1 is Spanish, and compare their performance to a group of native English speakers. The native speakers' data show rapid use of the first mention bias (i.e., interpreting a pronoun towards the first-mentioned referent) and gender information upon encountering a gender ambiguous or unambiguous pronoun. For the L2 participants, we find similar underlying processes of pronoun resolution in comparison to native speakers. The results do not reveal a processing cost for L2 speakers of a non-null-subject language during anaphora resolution. Overall, our study demonstrates that L2 speakers of a non-null-subject language (English) can achieve native-like processing of the default referential form signaling topic continuity (i.e., the overt pronoun; Sorace 2011). |
Carla Contemori; Lucia Pozzan; Phillip Galinsky; Paola E. Dussias In: Linguistic Approaches to Bilingualism, vol. 10, no. 5, pp. 623–656, 2020. @article{Contemori2020a, In two eye tracking experiments, we investigate how adult child-L2 speakers of English resolve prepositional phrase (PP) attachment ambiguity in their dominant language (English), and whether they use prosodic information to aid in the process of garden-path recovery. The findings showed an increased processing cost associated with the revision of temporary ambiguous sentences for the child-L2 adults relative to the native English speakers. When prosody was informative, the child-L2 adults were able to use prosodic information to guide the interpretation of their later acquired, dominant language. However, they performed revision significantly less successfully than the native speakers. Although processing was similar for the native English speakers and the adult child-L2 speakers of English, when it comes to sensitivity to prosodic information and referential context, the two groups differed with regards to reanalysis both in the presence and absence of salient prosodic and referential information. |
Jason C. Coronel; Ryan C. Moore; Brahm DeBuys In: Political Communication, pp. 1–24, 2020. @article{Coronel2020, An important feature of the information environment is its multimodal nature. In politics, people encounter representations of political candidates that combine images and text. Among the most prominent pieces of information people encounter are a candidate's gender, obtained from images of the candidate's face, and the candidate's partisan identification, often represented as text. This feature of the environment is important given previous work showing that individuals infer gender categories from faces rapidly and effortlessly. In this study (N = 113), we use eye movements to determine how individuals assign stereotypical policy positions to candidates in an environment in which photos of candidates' faces are paired with labels of their partisan IDs. We find that politically-knowledgeable individuals are more likely to use partisan- than gender-based stereotypes. In contrast, political novices did not prioritize one type of stereotype over the other. Our findings have implications for understanding how individuals make political evaluations in multimodal settings and show the advantages of measuring eye movements when studying stereotyping in multimodal environments. |
Natalie V. Covington; Jake Kurczek; Melissa C. Duff; Sarah Brown-Schmidt The effect of repetition on pronoun resolution in patients with memory impairment Journal Article In: Journal of Clinical and Experimental Neuropsychology, vol. 42, no. 2, pp. 171–184, 2020. @article{Covington2020, Referring to things in the world–that woman, her idea, she–is a central component of language. Understanding reference requires the listener to keep track of the unfolding discourse history while integrating multiple sources of information to interpret the speech stream as it unfolds in time. Pronouns are a common way to establish reference. But due to their impoverished form, to understand them listeners must relate features of the pronoun (e.g., gender, animacy) with existing representations of potential discourse referents. Successful referential processing seems to place demands on memory. In a previous study, patients with hippocampal amnesia and healthy participants listened to short stories as their eye movements were monitored. When interpreting ambiguous pronouns, healthy participants demonstrated order-of-mention effects, whereby ambiguous pronouns are interpreted as referring to the first-mentioned referent in the story. By contrast, memory-impaired patients exhibited significant disruptions in their ability to use information about which character had been mentioned first to interpret pronouns. Repetition of the most salient information is a common clinical recommendation for improving pronoun resolution and communication in individuals with memory disorders (e.g., Alzheimer's disease) but this recommendation lacks an evidentiary basis. The present study seeks to determine whether the pronoun resolution performance of hippocampal patients can be improved, by repetition of the target referent, increasing its salience. 
Results indicate that patients with hippocampal damage demonstrate improved processing of pronouns following repetition of the target referent, but benefit from this repetition to a significantly smaller degree compared to healthy participants. These results provide further evidence for the role of the hippocampal-dependent memory system in language processing and point to the need for empirically tested communication interventions. |
Valentina Cristante; Sarah Schimke The processing of passive sentences in German: Evidence from an eye-tracking study with seven- and ten-year-olds and adults Journal Article In: Language, Interaction and Acquisition, vol. 11, no. 2, pp. 163–195, 2020. @article{Cristante2020, This study examines the processing and interpretation of passive sentences in German-speaking seven-year-olds, ten-year-olds, and adults. This structure is often assumed to be particularly difficult to understand, and not yet fully mastered in primary school (Kemp, Bredel, & Reich, 2008), i.e. in children aged between six and eleven. Few studies provide empirical data concerning this age range; it is therefore unknown whether this assumption is warranted. Against this background, we tested whether the three age groups differed in their off-line comprehension of passive sentences. In addition, we employed Visual World eye-tracking to measure processing difficulties that may differ between age groups and may not be reflected in the final interpretations. Previous studies on adult language processing in German and English have documented a preference to interpret sentences according to an agent-first strategy. Our results show that all three groups make use of this strategy, and that all of them are able to revise this interpretation once the first cue indicating a passive sentence is encountered (the auxiliary verb form wurde). We conclude that at least from age seven on, children have the linguistic and cognitive prerequisites to process the passive morphosyntax of German and to revise initial sentence misinterpretations. |
Jinghui Ouyang; Lingshan Huang; Jingyang Jiang The effects of glossing on incidental vocabulary learning during second language reading: Based on an eye-tracking study Journal Article In: Journal of Research in Reading, vol. 43, no. 4, pp. 496–515, 2020. @article{Ouyang2020, Providing glosses that explain the meanings of unknown words is a common method of promoting learners' learning of new words. Numerous studies have shown that compared with a no-gloss condition, glosses benefit the learning of the meaning of new words. This study combines both online (i.e., eye-tracking) and offline (i.e., immediate vocabulary tests) measures to investigate the influences of glosses on incidental vocabulary learning and to evaluate the degree to which glossing influences reading behaviour during second language (L2) reading. The eye movements of 45 high-intermediate adult learners of English were recorded when they read a text presented on-screen. Two different text versions (both with 17 new words) were presented to two different groups of participants: first language (L1) textual glossed and no-glossed. After reading, unannounced vocabulary tests were administered to gauge learners' recall and recognition of vocabulary meaning. Learners performed better in meaning recall and meaning recognition tests under the L1-glossed condition. Eye-tracking measures of the target words were significantly different in the two conditions. Eye-tracking measures of new words and their glosses in the L1-glossed condition were significantly correlated with learners' scores of vocabulary tests. L1 glosses promote the learning of the meaning of new words in an incidental condition. The attention allocated to the new words is different in the L1-glossed and no-glossed conditions. More importantly, there is a relationship between the online reading behaviour and the vocabulary test performance in the gloss condition. |
Danbi Ahn; Matthew J. Abbott; Keith Rayner; Victor S. Ferreira; Tamar H. Gollan Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks Journal Article In: Journal of Neurolinguistics, vol. 54, pp. 1–22, 2020. @article{Ahn2020a, Bilinguals are remarkable at language control—switching between languages only when they want. However, language control in production can involve switch costs. That is, switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish–English bilinguals saying “pero” instead of “but”). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences with no language switches. Given that both production and comprehension involve switch costs, some language–control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye–movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed–language paragraphs (production). Bilinguals who made more intrusion errors during the read–aloud task did not show different switch cost patterns in most measures in the silent–reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality–specific processes in production and comprehension, although some points of overlap might indicate the role of domain general control and how it can influence individual differences in bilingual language control. |
Noor Z. Al Dahhan; John R. Kirby; Ying Chen; Donald C. Brien; Douglas P. Munoz Examining the neural and cognitive processes that underlie reading through naming speed tasks Journal Article In: European Journal of Neuroscience, vol. 51, no. 11, pp. 2277–2298, 2020. @article{AlDahhan2020, We combined fMRI with eye tracking and speech recording to examine the neural and cognitive mechanisms that underlie reading. To simplify the study of the complex processes involved during reading, we used naming speed (NS) tasks (also known as rapid automatized naming or RAN) as a focus for this study, in which average reading right-handed adults named sets of stimuli (letters or objects) as quickly and accurately as possible. Due to the possibility of spoken output during fMRI studies creating motion artifacts, we employed both an overt session and a covert session. When comparing the two sessions, there were no significant differences in behavioral performance, sensorimotor activation (except for regions involved in the motor aspects of speech production) or activation in regions within the left-hemisphere-dominant neural reading network. This established that differences found between the tasks within the reading network were not attributed to speech production motion artifacts or sensorimotor processes. Both behavioral and neuroimaging measures showed that letter naming was a more automatic and efficient task than object naming. Furthermore, specific manipulations to the NS tasks to make the stimuli more visually and/or phonologically similar differentially activated the reading network in the left hemisphere associated with phonological, orthographic and orthographic-to-phonological processing, but not articulatory/motor processing related to speech production. These findings further our understanding of the underlying neural processes that support reading by examining how activation within the reading network differs with both task performance and task characteristics. |
Anja Arnhold; Vincent Porretta; Aoju Chen; Saskia A. J. M. Verstegen; Ivy Mok; Juhani Järvikivi (Mis)understanding your native language: Regional accent impedes processing of information status Journal Article In: Psychonomic Bulletin & Review, vol. 27, no. 4, pp. 801–808, 2020. @article{Arnhold2020, Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners' processing resemble that of second-language listeners. In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do. In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings.
These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation. |
Nicolai D. Ayasse; Arthur Wingfield The two sides of linguistic context: Eye-tracking as a measure of semantic competition in spoken word recognition among younger and older adults Journal Article In: Frontiers in Human Neuroscience, vol. 14, pp. 132, 2020. @article{Ayasse2020a, Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects; whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process. |
James Bartolotti; Scott R. Schroeder; Sayuri Hayakawa; Sirada Rochanavibhata; Peiyao Chen; Viorica Marian Listening to speech and non-speech sounds activates phonological and semantic knowledge differently Journal Article In: Quarterly Journal of Experimental Psychology, vol. 73, no. 8, pp. 1135–1149, 2020. @article{Bartolotti2020, How does the mind process linguistic and non-linguistic sounds? The current study assessed the different ways that spoken words (e.g., “dog”) and characteristic sounds (e.g., <barking>) provide access to phonological information (e.g., word-form of “dog”) and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation. |
Mahsa Barzy; Ruth Filik; David Williams; Heather J. Ferguson Emotional processing of ironic versus literal criticism in autistic and nonautistic adults: Evidence from eye-tracking Journal Article In: Autism Research, vol. 13, no. 4, pp. 563–578, 2020. @article{Barzy2020, Typically developing adults are able to keep track of story characters' emotional states online while reading. Filik et al. showed that initially, participants expected the victim to be more hurt by ironic comments than literal ones, but later considered them less hurtful; ironic comments were regarded as more amusing. We examined these processes in autistic adults, since previous research has demonstrated socio-emotional difficulties among autistic people, which may lead to problems processing irony and its related emotional processes despite an intact ability to integrate language in context. We recorded eye movements from autistic and nonautistic adults while they read narratives in which a character (the victim) was either criticized in an ironic or a literal manner by another character (the protagonist). A target sentence then either described the victim as feeling hurt/amused by the comment, or the protagonist as having intended to hurt/amuse the victim by making the comment. Results from the nonautistic adults broadly replicated the key findings from Filik et al., supporting the two-stage account. Importantly, the autistic adults did not show comparable two-stage processing of ironic language; they did not differentiate between the emotional responses for victims or protagonists following ironic versus literal criticism. These findings suggest that autistic people experience a specific difficulty taking into account other peoples' communicative intentions (i.e., infer their mental state) to appropriately anticipate emotional responses to an ironic comment.
We discuss how these difficulties might link to atypical socio-emotional processing in autism, and the ability to maintain successful real-life social interactions. |
Yulia Esaulova; Martina Penke; Sarah Dolscheid Referent cueing, position, and animacy as accessibility factors in visually situated sentence production Journal Article In: Frontiers in Psychology, vol. 11, pp. 2111, 2020. @article{Esaulova2020, Speakers' readiness to describe event scenes using active or passive constructions has previously been attributed—among other factors—to the accessibility of referents. While most research has highlighted the accessibility of agents, the present study examines whether patients' accessibility can be modulated by means of visual preview of the patient character (derived accessibility), as well as by manipulating the animacy status of patients (inherent accessibility). Crucially, we also examined whether effects of accessibility were amenable to the visuospatial position of the patient by presenting the patient character either to the left or to the right of the agent. German native speakers were asked to describe drawings depicting event scenes while their gaze and speech were recorded. Our results show that making patients more accessible using derived and inherent accessibility factors led to more produced passives, shorter speech onsets, and a reduction of fixations on patients. Complementing previous research on agent accessibility, our findings demonstrate that the accessibility of patients affected both sentence production and looking behavior. While effects were observed for both inherent and derived accessibility, they appeared to be more pronounced for the latter. Regarding character position, we observed a significant effect of position on participants' gaze patterns and structural choices, suggesting that position itself can be considered an accessibility-related factor. Importantly, the position of a patient also interacted with our manipulation of its accessibility via visual preview. 
Participants produced more passives after preview than no preview for left-positioned but not for right-positioned patients, demonstrating that effects of patient accessibility (i.e., visual preview) were susceptible to character position. A similar interaction was observed for participants' viewing patterns. These findings provide the first evidence that the position of a referent is a factor that interacts with other accessibility-related factors (i.e., cueing), emphasizing the need of controlling for position effects when testing referent accessibility. |
Núria Esteve-Gibert; Amy J. Schafer; Barbara Hemforth; Cristel Portes; Céline Pozniak; Mariapaola D'Imperio Empathy influences how listeners interpret intonation and meaning when words are ambiguous Journal Article In: Memory & Cognition, vol. 48, no. 4, pp. 566–580, 2020. @article{EsteveGibert2020, This study examines how individual pragmatic skills, and more specifically, empathy, influences language processing when a temporary lexical ambiguity can be resolved via intonation. We designed a visual-world eye-tracking experiment in which participants could anticipate a referent before disambiguating lexical information became available, by inferring either a contrast meaning or a confirmatory meaning from the intonation contour alone. Our results show that individual empathy skills determine how listeners deal with the meaning alternatives of an ambiguous referent, and the way they use intonational meaning to disambiguate the referent. Listeners with better pragmatic skills (higher empathy) were sensitive to intonation cues when forming sound–meaning associations during the unfolding of an ambiguous referent, and showed higher sensitivity to all the alternative interpretations of that ambiguous referent. Less pragmatically skilled listeners showed weaker processing of intonational meaning because they needed subsequent disambiguating material to select a referent and showed less sensitivity to the set of alternative interpretations. Overall, our results call for taking into account individual pragmatic differences in the study of intonational meaning processing and sentence comprehension in general. |
Myrthe Faber; Marloes Mak; Roel M. Willems Word skipping as an indicator of individual reading style during literary reading Journal Article In: Journal of Eye Movement Research, vol. 13, no. 3, pp. 1–9, 2020. @article{Faber2020a, Decades of research have established that the content of language (e.g. lexical characteristics of words) predicts eye movements during reading. Here we investigate whether there exist individual differences in 'stable' eye movement patterns during narrative reading. We computed Euclidean distances from correlations between gaze durations time courses (word level) across 102 participants who each read three literary narratives in Dutch. The resulting distance matrices were compared between narratives using a Mantel test. The results show that correlations between the scaling matrices of different narratives are relatively weak (r ≤.11) when missing data points are ignored. However, when including these data points as zero durations (i.e. skipped words), we found significant correlations between stories (r >.51). Word skipping was significantly positively associated with print exposure but not with self-rated attention and story-world absorption, suggesting that more experienced readers are more likely to skip words, and do so in a comparable fashion. We interpret this finding as suggesting that word skipping might be a stable individual eye movement pattern. |
J. Benjamin Falandays; Sarah Brown-Schmidt; Joseph C. Toscano Long-lasting gradient activation of referents during spoken language processing Journal Article In: Journal of Memory and Language, vol. 112, pp. 1–14, 2020. @article{Falandays2020, During speech processing, listeners must map a fundamentally continuous acoustic signal onto discrete symbols, such as words. A current debate concerns the time-course over which sub-phonemic (i.e., gradient) acoustic information continues to influence symbolic (i.e., linguistic) interpretation, which can provide evidence regarding the level of representation at which gradient information is maintained. In a visual-world paradigm experiment, participants indicated whether a spoken sentence matched a display while eye-gaze was monitored. Participants heard an acoustically ambiguous stimulus (a pronoun referring to either a male or female referent in the display), which was not disambiguated until later in the discourse. The acoustic properties of the pronouns and length of the ambiguous period were varied while responses and eye-movements to the discourse-relevant items were recorded, providing a measure of whether gradient referential uncertainty is maintained over time. Fixation patterns during the ambiguous period and latencies to fixate the target at the end of the trial varied linearly with the acoustics of the earlier pronoun, indicating that gradient information can be maintained over intervening periods of 35 syllables. These results provide strong evidence that gradient uncertainty is maintained at the level of referent representations. |
Xi Fan; Ronan Reilly In: Journal of Eye Movement Research, vol. 13, no. 6, pp. 1–16, 2020. @article{Fan2020, This paper describes the use of semantic similarity measures based on distributed representations of words, sentences, and paragraphs (so-called "embeddings") to assess the impact of supra-lexical factors on eye-movement data from early readers of Chinese. In addition, we used a corpus-based measure of surprisal to assess the impact of local word predictability. Eye movement data from 56 Chinese students were collected (a) in the students' 4th grade and (b) one year later while they were in 5th grade. Results indicated that surprisal and some text similarity measures have a significant impact on the moment-to-moment processing of words in reading. The paper presents an easy-to-use set of tools for linking the low-level aspects of fixation durations to a hierarchy of sentence-level and paragraph-level features that can be computed automatically. The study is the first attempt, as far as we are aware, to track the developmental trajectory of these influences in developing readers across a range of reading abilities. The similarity-based measures described here can be used (a) to provide a measure of reader sensitivity to sentence and paragraph cohesion and (b) to assess specific texts for their suitability for readers of different reading ability levels. |
Mojgan Farahani; Vijay Parsa; Björn Herrmann; Mason Kadem; Ingrid Johnsrude; Philip C. Doyle An auditory-perceptual and pupillometric study of vocal strain and listening effort in adductor spasmodic dysphonia Journal Article In: Applied Sciences, vol. 10, no. 17, pp. 5907, 2020. @article{Farahani2020, This study evaluated ratings of vocal strain and perceived listening effort by normal hearing participants while listening to speech samples produced by talkers with adductor spasmodic dysphonia (AdSD). In addition, objective listening effort was measured through concurrent pupillometry to determine whether listening to disordered voices changed arousal as a result of emotional state or cognitive load. Recordings of the second sentence of the "Rainbow Passage" produced by talkers with varying degrees of AdSD served as speech stimuli. Twenty naïve young adult listeners perceptually evaluated these stimuli on the dimensions of vocal strain and listening effort using two separate visual analogue scales. While making the auditory-perceptual judgments, listeners' pupil characteristics were objectively measured in synchrony with the presentation of each voice stimulus. Data analyses revealed moderate-to-high inter- and intra-rater reliability. A significant positive correlation was found between the ratings of vocal strain and listening effort. In addition, listeners displayed greater peak pupil dilation (PPD) when listening to more strained and effortful voice samples. Findings from this study suggest that when combined with an auditory-perceptual task, non-volitional physiologic changes in pupil response may serve as an indicator of listening and cognitive effort or arousal. |
Marion Fechino; Arthur M. Jacobs; Jana Lüdtke In: Journal of Eye Movement Research, vol. 13, no. 3, pp. 1–19, 2020. @article{Fechino2020, Following Jakobson and Lévi-Strauss's famous analysis of Baudelaire's poem 'Les Chats' ('The Cats'), in the present study we investigated the reading of French poetry from a Neurocognitive Poetics perspective. Our study is exploratory and a first attempt in French, most previous work having been done in either German or English (e.g., Jacobs, 2015a, 2018a, b; Muller et al., 2017; Xue et al., 2019). We varied the presentation mode of the poem Les Chats (verse vs. prose form) and measured the eye movements of our readers to test the hypothesis of an interaction between presentation mode and reading behavior. We specifically focussed on rhyme scheme effects on standard eye movement parameters. Our results replicate those from previous English poetry studies in that there is a specific pattern in poetry reading with longer gaze durations and more rereading in the verse than in the prose format. Moreover, presentation mode also matters for making salient the rhyme scheme. This first study generates interesting hypotheses for further research applying quantitative narrative analysis to French poetry and developing the Neurocognitive Poetics Model of literary reading (NCPM; Jacobs, 2015a) into a cross-linguistic model of poetry reading. |
Leigh B. Fernandez; Paul E. Engelhardt; Angela G. Patarroyo; Shanley E. M. Allen In: Quarterly Journal of Experimental Psychology, vol. 73, no. 12, pp. 2348–2361, 2020. @article{Fernandez2020a, Research has shown that suprasegmental cues in conjunction with visual context can lead to anticipatory (or predictive) eye movements. However, the impact of speech rate on anticipatory eye movements has received little empirical attention. The purpose of the current study was twofold. From a methodological perspective, we tested the impact of speech rate on anticipatory eye movements by systematically varying speech rate (3.5, 4.5, 5.5, and 6.0 syllables per second) in the processing of filler-gap dependencies. From a theoretical perspective, we examined two groups thought to show fewer anticipatory eye movements, and thus likely to be more impacted by speech rate. Experiment 1 compared anticipatory eye movements across the lifespan with younger (18–24 years old) and older adults (40–75 years old). Experiment 2 compared L1 speakers of English and L2 speakers of English with an L1 of German. Results showed that all groups made anticipatory eye movements. However, L2 speakers only made anticipatory eye movements at 3.5 syllables per second, older adults at 3.5 and 4.5 syllables per second, and younger adults at speech rates up to 5.5 syllables per second. At the fastest speech rate, all groups showed a marked decrease in anticipatory eye movements. This work highlights (1) the importance of speech rate on anticipatory eye movements, and (2) group-level performance differences in filler-gap prediction. |
Leigh Fernandez; Christoph Scheepers; Shanley Allen The impact of uninformative parafoveal masks on L1 and late L2 speakers Journal Article In: Journal of Eye Movement Research, vol. 13, no. 6, pp. 1–26, 2020. @article{Fernandez2020b, Much reading research has found that informative parafoveal masks lead to a reading benefit for native speakers (see, Schotter et al., 2012). However, little reading research has tested the impact of uninformative parafoveal masks during reading. Additionally, parafoveal processing research is primarily restricted to native speakers. In the current study we manipulated the type of uninformative preview using a gaze contingent boundary paradigm with a group of L1 English speakers and a group of late L2 English speakers (L1 German). We were interested in how different types of uninformative masks impact on parafoveal processing, whether L1 and L2 speakers are similarly impacted, and whether they are sensitive to parafoveally viewed language-specific sub-lexical orthographic information. We manipulated six types of uninformative masks to test these objectives: an identical, English pseudo-word, German pseudo-word, illegal string of letters, series of X's, and a blank mask. We found that X masks affect reading the most with slight graded differences across the other masks, L1 and L2 speakers are impacted similarly, and neither group is sensitive to sub-lexical orthographic information. Overall these data show that not all previews are equal, and research should be aware of the way uninformative masks affect reading behavior. Additionally, we hope that future research starts to approach models of eye-movement behavior during reading from not only a monolingual but also from a multilingual perspective. |
Gemma Fitzsimmons; Lewis T. Jayes; Mark J. Weal; Denis Drieghe The impact of skim reading and navigation when reading hyperlinks on the web Journal Article In: PLoS ONE, vol. 15, no. 9, pp. e0239134, 2020. @article{Fitzsimmons2020, It has been shown that readers spend a great deal of time skim reading on the Web and that this type of reading can affect lexical processing of words. Across two experiments, we utilised eye tracking methodology to explore how hyperlinks and navigating webpages affect reading behaviour. In Experiment 1, participants read static Webpages either for comprehension or whilst skim reading, while in Experiment 2, participants additionally read through a navigable Web environment. Embedded target words were either hyperlinks or not and were either high-frequency or low-frequency words. Results from Experiment 1 show that while readers fully lexically process both linked and unlinked words when reading for comprehension, readers only fully lexically process linked words when skim reading, as was evidenced by a frequency effect that was absent for the unlinked words. In Experiment 2, which allowed for navigating, readers only fully lexically processed linked words compared to unlinked words, regardless of whether they were skim reading or reading for comprehension. We suggest that readers engage in an efficient reading strategy where they attempt to minimise comprehension loss while maintaining a high reading speed. Readers use hyperlinks as markers to suggest important information and use them to navigate through the text in an efficient and effective way. The task of reading on the Web causes readers to lexically process words in a markedly different way from typical reading experiments. |
Francesca Foppolo; Adrian Staub The puzzle of number agreement with disjunction Journal Article In: Cognition, vol. 198, pp. 1–20, 2020. @article{Foppolo2020, In English, when two nouns in a disjunctive subject differ in number (e.g., the dogs or the cat), the verb tends to agree with the number of the nearer noun. This is exceptional, as a noun's linear proximity to the verb does not generally play a role in agreement. In the present study, we investigate a further puzzle about agreement with disjunction, namely, the existence of a pattern in which two singular disjuncts trigger plural agreement (e.g., The lawyer or the accountant are…). Two eyetracking studies in English show that plural agreement with a disjunction of singulars does not reliably disrupt readers' eye movements, in contrast to the immediate disruptive effect of other agreement violations. Three off-line rating studies in English show that plural agreement results in only a small decrement in acceptability, compared to other agreement violations, and that in some structural configurations there is no decrement at all. On the whole, the data do not support the hypothesis that plural agreement is licensed only when or has an inclusive reading; even when it has an exclusive reading, there is only a small penalty for plural agreement. Finally, we explored this issue in Italian, which has a richer system of inflectional morphology. Italian speakers showed a plural preference in a completion experiment, and singular and plural agreement did not differ in acceptability in a rating experiment. We conclude that agreement with disjunction is a grammatical lacuna or gap, in the sense that speakers' grammar simply does not prescribe a verb number following a disjunctive subject. |
Ana Laura Frapiccini; Jessica A. Del Punta; Karina V. Rodriguez; Leonardo Dimieri; Gustavo Gasaneo A simple model to analyse the activation force in eyeball movements Journal Article In: The European Physical Journal B, vol. 93, no. 2, pp. 1–10, 2020. @article{Frapiccini2020, Starting with a proposal to model horizontal eye movements, we study the parameters involved in it. In particular, we investigate the values that best fit the parameters describing the activation force responsible for horizontal saccades, independently of the task being performed. The fitting process is based on data sets gathered with an eye tracker device. The simplicity of the model allows us to exploit analytical expressions that simplify the fitting process. Finally, we use our model to obtain the activation force corresponding to a reading task, finding very good agreement with the experimental data. |
Deanna C. Friesen; Olivia Ward; Jessica Bohnet; Pierre Cormier; Debra Jared Early activation of cross-language meaning from phonology during sentence processing Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 46, no. 9, pp. 1754–1767, 2020. @article{Friesen2020, The current study investigated whether shared phonology across languages activates cross-language meaning when reading in context. Eighty-five bilinguals read English sentences while their eye movements were tracked. Critical sentences contained English members of English-French interlingual homophone pairs (e.g., mow; French homophone mate mot means "word") or they contained spelling control words (e.g., mop). Only the meaning of the unseen French homophone mate fit the context (e.g., Hannah wrote another mow/mop on the blackboard for the spelling test). Differences in fixation durations between homophone errors and spelling control errors provided evidence for cross-language activation that extended to semantic representations. When the unseen French homophone was of high frequency, shorter first fixations and gaze durations were observed on English interlingual homophones than on English control words, providing evidence that the French meaning associated with the shared phonology was activated during early stage word identification. Individual differences analyses showed that these effects were larger when bilinguals were using the nontarget language (i.e., French) more regularly in daily life. Results provide evidence that cross-language activation of phonology can be sufficiently strong to activate corresponding semantic representations during single language sentence processing. |
Deanna C. Friesen; Veronica Whitford; Debra Titone; Debra Jared The impact of individual differences on cross-language activation of meaning by phonology Journal Article In: Bilingualism: Language and Cognition, vol. 23, no. 2, pp. 323–343, 2020. @article{Friesen2020a, We investigated how individual differences in language proficiency and executive control impact cross-language meaning activation through phonology. Ninety-six university students read English sentences that contained French target words. Target words were high- and low-frequency French interlingual homophones (i.e., words that share pronunciation, but not meaning across languages; mot means 'word' in French and sounds like 'mow' in English) and matched French control words (e.g., mois - 'month' in French). Readers could use the homophones' shared phonology to activate their English meanings and, ultimately, make sense of the sentence (e.g., Tony was too lazy to mot/mois the grass on Sunday). Shorter reading times were observed on interlingual homophones than control words, suggesting that phonological representations in one language activate cross-language semantic representations. Importantly, the magnitude of the effect was modulated by word frequency, and several participant-level characteristics, including French proficiency, English word knowledge, and executive control ability. |
Rebecca L. A. Frost; Andrew Jessop; Samantha Durrant; Michelle S. Peter; Amy Bidgood; Julian M. Pine; Caroline F. Rowland; Padraic Monaghan Non-adjacent dependency learning in infancy, and its link to language development Journal Article In: Cognitive Psychology, vol. 120, pp. 1–19, 2020. @article{Frost2020, To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants' capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants' statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants' vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants' segmentation performance was significantly related to their vocabulary size (receptive and expressive) both currently, and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size. |
Hiroki Fujita; Ian Cunnings Reanalysis and lingering misinterpretation of linguistic dependencies in native and non-native sentence comprehension Journal Article In: Journal of Memory and Language, vol. 115, pp. 104154, 2020. @article{Fujita2020, Research on temporarily ambiguous “garden-path” sentences (e.g., After Mary dressed the baby laughed) has shown that initially assigned misinterpretations linger after reanalysis of the temporarily ambiguous phrase in both native (L1) and non-native (L2) readers. L2 speakers have particular difficulty with reanalysis, but the source of this L1/L2 difference is debated. Furthermore, how lingering misinterpretation may influence other aspects of language processing has not been systematically examined. We report three offline and two online experiments investigating reanalysis and misinterpretation of filler-gap dependencies (e.g., Elisa noticed the truck which the policeman watched the car from). Our results showed that L1 and L2 speakers are prone to lingering misinterpretation during dependency resolution. L1/L2 differences were observed such that L2 speakers had increased difficulty reanalysing some filler-gap dependencies, however this was dependent on how the dependency was disambiguated. These results are compatible with the “good-enough” approach to language processing, and suggest that L1/L2 differences are more likely when reanalysis is particularly difficult. |
Xiaolei Gao; Xiaowei Li; Min Sun; Xuejun Bai; Lei Gao The word frequency effect of fovea and its effect on the preview effect of parafovea in Tibetan reading Journal Article In: Acta Psychologica Sinica, vol. 52, no. 10, pp. 1143–1155, 2020. @article{Gao2020a, In the process of reading, readers mainly obtain information through the fovea; however, the parafovea also plays an important role in information acquisition. Readers can obtain certain information from the parafovea through previewing processing, thus improving reading efficiency, which is called the “previewing effect”. The effect of the processing load of the fovea on the previewing effect of parafovea has become a popular research focus of late. For example, studies based on alphabetic languages have found that the previewing effect of the parafovea is greater for high-frequency and short words than for low-frequency and long words. Although Tibetan is an alphabetic language, it belongs to the Sino-Tibetan language family and has many similarities with Chinese. However, it remains largely unclear how this effect manifests in Tibetan reading: will it show only the common characteristics of alphabetic languages, or will it also show some characteristics of Chinese? The present study aimed to provide experimental evidence to respond to these research questions. Two experiments were carried out on 119 Tibetan undergraduate students. More specifically, participants were asked to read Tibetan sentences and their eye movements during reading were recorded using an SR Research EyeLink 1000Plus eye tracker (sampling rate = 1000 Hz). Experiment 1 manipulated the fovea word frequency (i.e., high vs. low frequency) to investigate the word frequency effect and word frequency delay effect of fovea words in Tibetan reading. The results showed a word frequency effect and a word frequency delay effect in Tibetan reading. 
Experiment 2 manipulated both fovea word frequency and parafovea previewing word types with the aid of the boundary paradigm to investigate the previewing effect of parafovea and the effect of fovea word frequency on the previewing effect of parafovea in Tibetan reading. The results showed a previewing effect of parafovea in Tibetan reading and that, when compared with low-frequency fovea words, high-frequency fovea words had a greater promoting effect on the previewing effect of parafovea. The primary findings can be summarized as follows: (1) a significant word frequency effect exists in Tibetan reading, which is reflected in the whole process of lexical processing; (2) there is a significant word frequency delay effect in Tibetan reading, which runs through the whole process of lexical processing; (3) there is a significant previewing effect of parafovea in Tibetan reading, through which the reader can extract speech and font information; and (4) in Tibetan reading, fovea word frequency affects the size of the previewing effect of parafovea—moreover, word frequency only affects the extraction of shape previewing information in the early stage of lexical processing, that is, the previewing effect of high-frequency words is greater under the condition of shape previewing. In conclusion, the effect of the processing load of the fovea on the previewing effect of parafovea shows the common characteristics of alphabetic languages in Tibetan reading. In addition, this study found that reading Tibetan involves the word frequency delay effect and the previewing effect of parafovea; these findings support the theory of parafovea sequence processing in the E-Z Reader model. |
Alan Garnham; Scarlett Child; Sam Hutton Anticipating causes and consequences Journal Article In: Journal of Memory and Language, vol. 114, pp. 1–13, 2020. @article{Garnham2020, Two visual world eye-tracking experiments investigated anticipatory looks to implicit causes and implicit consequences in two clause sentences with mental state verbs (Stimulus-Experiencer and Experiencer-Stimulus) in the first main clause, and an explicit cause or consequence in the second. The first experiment showed that, just as when all continuations are causes, people look early at the implicit cause, when all continuations are consequences they look early at the implicit consequence, for the same verbs. When causes and consequences are intermixed, people direct their looks at the cause or consequence on a trial-by-trial basis depending on the connective (“because” or “and so”). Numerically, causes were favored overall, even when all the endings were consequences, but the effect was only significant at the end of the sentences in Experiment 2. The results are discussed in terms of rapid deployment of causal and consequential information implicit in mental state verbs, and in relation to conflicting accounts of why causes or consequences might generally be favored. |
Hallie Garrison; Gladys Baudet; Elise Breitfeld; Alexis Aberman; Elika Bergelson Familiarity plays a small role in noun comprehension at 12–18 months Journal Article In: Infancy, vol. 25, no. 4, pp. 458–477, 2020. @article{Garrison2020, Infants amass thousands of hours of experience with particular items, each of which is representative of a broader category that often shares perceptual features. Robust word comprehension requires generalizing known labels to new category members. While young infants have been found to look at common nouns when they are named aloud, the role of item familiarity has not been well examined. This study compares 12- to 18-month-olds' word comprehension in the context of pairs of their own items (e.g., photographs of their own shoe and ball) versus new tokens from the same category (e.g., a new shoe and ball). Our results replicate previous work showing that noun comprehension improves rapidly over the second year, while also suggesting that item familiarity appears to play a far smaller role in comprehension in this age range. This in turn suggests that even before age 2, ready generalization beyond particular experiences is an intrinsic component of lexical development. |
Marion Beretti; Naomi Havron; Anne Christophe Four- and 5-year-old children adapt to the reliability of conflicting sources of information to learn novel words Journal Article In: Journal of Experimental Child Psychology, vol. 200, pp. 1–21, 2020. @article{Beretti2020, A central challenge in language acquisition is the integration of multiple sources of information, potentially in conflict, to acquire new knowledge and adjust current linguistic representations. One way to accomplish this is to assign more weight to more reliable sources of information in context. We tested the hypothesis that children adjust the weight of different sources of information during learning, considering two specific sources of information: their knowledge of the meaning of familiar words (semantics) and their familiarity with syntax. We varied the reliability of these sources of information through an induction phase (reliable syntax or reliable semantics). At test, French 4- and 5-year-old children and adults listened to sentences where information provided by these two cues conflicted and were asked to choose between two videos that illustrate the sentence. One video presented the reasonable choice if the sentence is assumed to be syntactically correct, but familiar words refer to novel things (e.g., une mange–“an eats” describes a novel object). The other video was the reasonable choice if the sentence is assumed to be syntactically incorrect and familiar words' meaning is preserved (e.g., “an eats” describes a girl eating and actually should have been “she eats”). As predicted, the proportion of syntactic choices (e.g., interpreting mange–“eats” as a novel noun) was found to be higher in the reliable syntax condition than in the reliable semantics condition, showing that children and adults can adapt their expectations to the reliability of sources of information. |
Raymond Bertram; Victor Kuperman The English disease in Finnish compound processing: Backward transfer effects in Finnish-English bilinguals Journal Article In: Bilingualism: Language and Cognition, vol. 23, no. 3, pp. 579–590, 2020. @article{Bertram2020, Most English compounds are spaced compounds, whereas spelling regulations prescribe Finnish compounds to be written in a concatenated format. However, as in English, Finnish compounds are commonly spaced nowadays (e.g., piha juhla 'garden party'), a phenomenon that we labeled the 'English disease'. In this eye movement study with Finnish-English bilinguals we investigate whether the reading of a concatenated or illegally spaced Finnish compound is affected by the spelling of an English translation equivalent (ETE). We found that spaced Finnish compounds were read slower than their concatenated counterparts, but this effect was attenuated when ETEs were thought to be spaced. Similarly, concatenated Finnish compounds were read faster when their ETEs were also concatenated. These backward transfer effects are in line with studies that show that processing behavior in L1 is affected by a strong concurrent L2, even when the L1 is the native language as well as the dominant community language. |
Elisabeth Beyersmann; Signy Wegener; Kate Nation; Ayako Prokupzcuk; Hua-Chen Wang; Anne Castles Learning morphologically complex spoken words: Orthographic expectations of embedded stems are formed prior to print exposure Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, pp. 1–13, 2020. @article{Beyersmann2020, It is well known that information from spoken language is integrated into reading processes, but the nature of these links and how they are acquired is less well understood. Recent evidence has suggested that predictions about the written form of newly learned spoken words are already generated prior to print exposure. We extend this work to morphologically complex words and ask whether the information that is available in spoken words goes beyond the mappings between phonology and orthography. Adults were taught the oral form of a set of novel morphologically complex words (e.g., “neshing”, “neshed”, “neshes”), with a 2nd set serving as untrained items. Following oral training, participants saw the printed form of the novel word stems for the first time (e.g., nesh), embedded in sentences, and their eye movements were monitored. Half of the stems were allocated a predictable and half an unpredictable spelling. Reading times were shorter for orally trained than untrained stems and for stems with predictable rather than unpredictable spellings. Crucially, there was an interaction between spelling predictability and training. This suggests that orthographic expectations of embedded stems are formed during spoken word learning. Reading aloud and spelling tests complemented the eye movement data, and findings are discussed in the context of theories of reading acquisition. |
Katherine S. Binder; Kathryn A. Tremblay; Alison Joseph Vocabulary accessibility and acquisition: Do you get more from a financestor or a sociophite? Journal Article In: Journal of Research in Reading, vol. 43, no. 4, pp. 395–416, 2020. @article{Binder2020, Background: The purpose of the current study was to examine how the morphological structure of a real word or novel word affected the incidental vocabulary learning of participants and to examine how these target items are processed as they are read. In addition, we examined the roles of vocabulary depth and breadth in the process of incidental vocabulary learning. Methods: We had participants read short passages that contained real words or novel words that differed on their morphological accessibility as we collected eye movement data. Participants also completed several vocabulary depth and breadth measures. Results: Accessible real words and novel words were learned better than inaccessible and less accessible items, but there was a processing cost associated with accessible real words compared with inaccessible real words. In contrast, participants spent more time on the less accessible novel words compared with accessible novel words, but that extra processing time did not translate into better acquisition scores. Finally, both vocabulary breadth and depth explained variance in incidental vocabulary acquisition, while breadth explained variance in gaze duration and depth explained variance in regressive eye movements. Conclusions: Accessibility of the targets affected both acquisition and reading time, and depth and breadth are both individual differences that explain variance in incidental acquisition and the processing of those words. |
Hazel I. Blythe; Jonathan H. Dickins; Colin R. Kennedy; Simon P. Liversedge The role of phonology in lexical access in teenagers with a history of dyslexia Journal Article In: PLoS ONE, vol. 15, no. 3, pp. e0229934, 2020. @article{Blythe2020, We examined phonological recoding during silent sentence reading in teenagers with a history of dyslexia and their typically developing peers. Two experiments are reported in which participants' eye movements were recorded as they read sentences containing correctly spelled words (e.g., church), pseudohomophones (e.g., cherch), and spelling controls (e.g., charch). In Experiment 1 we examined foveal processing of the target word/nonword stimuli, and in Experiment 2 we examined parafoveal pre-processing. There were four participant groups: older teenagers with a history of dyslexia, older typically developing teenagers who were matched for age, younger typically developing teenagers who were matched for reading level, and younger teenagers with a history of dyslexia. All four participant groups showed a pseudohomophone advantage, both from foveal processing and parafoveal preprocessing, indicating that teenagers with a history of dyslexia engage in phonological recoding for lexical identification during silent sentence reading in a comparable manner to their typically developing peers. |
Giulia Borghini; Valerie Hazan Effects of acoustic and semantic cues on listening effort during native and non-native speech perception Journal Article In: The Journal of the Acoustical Society of America, vol. 147, no. 6, pp. 3783–3794, 2020. @article{Borghini2020, Relative to native listeners, non-native listeners who are immersed in a second language environment experience increased listening effort and reduced ability to successfully perform an additional task while listening. Previous research demonstrated that listeners can exploit a variety of intelligibility-enhancing cues to cope with adverse listening conditions. However, little is known about the implications of those speech perception strategies for listening effort. The current research aims to investigate by means of pupillometry how listening effort is modulated in native and non-native listeners by the availability of semantic context and acoustic enhancements during the comprehension of spoken sentences. For this purpose, semantic plausibility and speaking style were manipulated both separately and in combination during a speech perception task in noise. The signal-to-noise ratio was individually adjusted for each participant in order to target a 50% intelligibility level. Behavioural results indicated that native and non-native listeners were equally able to fruitfully exploit both semantic and acoustic cues to aid their comprehension. Pupil data indicated that listening effort was reduced for both groups of listeners when acoustic enhancements were available, while the presence of a plausible semantic context did not lead to a reduction in listening effort. |
Arielle Borovsky In: Developmental Science, vol. 23, pp. 1–15, 2020. @article{Borovsky2020, This project explores how children disambiguate and retain novel object-label mappings in the face of semantic similarity. Burgeoning evidence suggests that semantic structure in the developing lexicon promotes word learning in ostensive contexts, whereas other findings indicate that semantic similarity interferes with and temporarily slows familiar word recognition. This project explores how these distinct processes interact when mapping and retaining labels for novel objects (i.e., low-frequency objects that are unfamiliar to toddlers) via disambiguation from a semantically similar familiar referent in 24-month-olds (N = 65). Toddlers' log-adjusted looking to labeled target objects (relative to distractor objects) was measured in three conditions: Familiar trials (familiar label spoken while viewing semantically related familiar and novel objects), Disambiguation trials (unfamiliar label spoken while viewing semantically similar familiar and unfamiliar object), and Retention trials (unfamiliar label spoken while viewing novel object pairs). Toddlers' individual vocabulary structure was then compared to performance on each condition. Vocabulary structure was measured at two levels: category-level structure (semantic density) for experimental items, and lexicon-level structure (global clustering coefficient). The findings suggest, consistent with prior results, that semantic density interfered with known word recognition, and facilitated unfamiliar word retention. Children did not show a significant novel word preference during disambiguation, and disambiguation behavior was not impacted by semantic structure. These findings connect seemingly disparate mechanisms of semantic interference in processing and semantic leveraging in word learning. Semantic interference momentarily slows word recognition and resolution of referential uncertainty for novel label-object mappings. 
Nevertheless, this slowing might support retention by enabling comparison between related objects. |
Hans Rutger Bosker; David Peeters; Judith Holler How visual cues to speech rate influence speech perception Journal Article In: Quarterly Journal of Experimental Psychology, vol. 73, no. 10, pp. 1523–1536, 2020. @article{Bosker2020, Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two “Go Fish”–like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants' target categorisation responses. These findings contribute to a better understanding of how what we see influences what we hear. |
Evelyn Bosma; Naomi Nota Cognate facilitation in Frisian–Dutch bilingual children's sentence reading: An eye-tracking study Journal Article In: Journal of Experimental Child Psychology, vol. 189, pp. 1–18, 2020. @article{Bosma2020, Bilingual adults are faster in reading cognates than in reading non-cognates in both their first language (L1) and second language (L2). This cognate effect has been shown to be gradual: recognition was facilitated when words had higher degrees of cross-linguistic similarity. The aim of the current study was to investigate whether cognate facilitation can also be observed in bilingual children's sentence reading. To answer this question, a group of Frisian–Dutch bilingual children (N = 37) aged 9–12 years completed a reading task in both their languages. All children had Dutch as their dominant reading language, but most of them spoke mainly Frisian at home. Identical cognates (e.g., Dutch–Frisian boek–boek ‘book'), non-identical cognates (e.g., beam–boom ‘tree'), and non-cognates (e.g., beppe–oma ‘grandmother') were presented in sentence context, and eye movements were recorded. The results showed a non-gradual cognate facilitation effect in Frisian: identical cognates were read faster than non-identical cognates and non-cognates. In Dutch, no cognate facilitation effect could be observed. This suggests that bilingual children use their dominant reading language while reading in their non-dominant one, but not vice versa. |
Mathieu Bourguignon; Martijn Baart; Efthymia C. Kapnoula; Nicola Molinaro Lip-reading enables the brain to synthesize auditory features of unknown silent speech Journal Article In: Journal of Neuroscience, vol. 40, no. 5, pp. 1053–1065, 2020. @article{Bourguignon2020, Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ~70 ms in the left hemisphere, compared with ~20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1–8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing. |
Rodrigo M. Braga; Lauren M. DiNicola; Hannah C. Becker; Randy L. Buckner Situating the left-lateralized language network in the broader organization of multiple specialized large-scale distributed networks Journal Article In: Journal of Neurophysiology, vol. 124, no. 5, pp. 1415–1448, 2020. @article{Braga2020, Using procedures optimized to explore network organization within the individual, the topography of a candidate language network was characterized and situated within the broader context of adjacent networks. The candidate network was first identified using functional connectivity and replicated across individuals, acquisition tasks, and analytical methods. In addition to classical language regions near the perisylvian cortex and temporal pole, regions were also observed in dorsal posterior cingulate, midcingulate, and anterior superior frontal and inferior temporal cortex. The candidate network was selectively activated when processing meaningful (as contrasted with nonword) sentences, whereas spatially adjacent networks showed minimal or even decreased activity. Results were replicated and triplicated across two prospectively acquired cohorts. Examined in relation to adjacent networks, the topography of the language network was found to parallel the motif of other association networks, including the transmodal association networks linked to theory of mind and episodic remembering (often collectively called the default network). The several networks contained juxtaposed regions in multiple association zones. Outside of these juxtaposed higher-order networks, we further noted a distinct frontotemporal network situated between language regions and a frontal orofacial motor region and a temporal auditory region. A possibility is that these functionally related sensorimotor regions might anchor specialization of neighboring association regions that develop into a language network. 
What is most striking is that the canonical language network appears to be just one of multiple similarly organized, differentially specialized distributed networks that populate the evolutionarily expanded zones of human association cortex. |
Violet A. Brown; Drew J. McLaughlin; Julia F. Strand; Kristin J. Van Engen Rapid adaptation to fully intelligible nonnative-accented speech reduces listening effort Journal Article In: Quarterly Journal of Experimental Psychology, vol. 73, no. 9, pp. 1431–1443, 2020. @article{Brown2020b, In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. The behavioural and physiological paradigms provided converging evidence that listeners expend greater effort when processing nonnative- relative to native-accented speech, and both experiments also revealed an overall reduction in listening effort over the course of the experiment. Only the pupillometry experiment, however, revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimise practice effects revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost. |
Christophe Carlei; Dirk Kerzel Looking up improves performance in verbal tasks Journal Article In: Laterality, vol. 25, no. 2, pp. 198–214, 2020. @article{Carlei2020, Earlier research suggested that gaze direction has an impact on cognitive processing. It is likely that horizontal gaze direction increases activation in specific areas of the contralateral cerebral hemisphere. Consistent with the lateralization of memory functions, we previously showed that shifting gaze to the left improves visuo-spatial short-term memory. In the current study, we investigated the effect of unilateral gaze on verbal processing. We expected better performance with gaze directed to the right because language is lateralized in the left hemisphere. Also, an advantage of gaze directed upward was expected because local processing and object recognition are facilitated in the upper visual field. Observers directed their gaze at one of the corners of the computer screen while they performed lexical decision, grammatical gender and semantic discrimination tasks. Contrary to expectations, we did not observe performance differences between gaze directed to the left or right, which is consistent with the inconsistent literature on horizontal asymmetries with verbal tasks. However, RTs were shorter when observers looked at words in the upper compared to the lower part of the screen, suggesting that looking upwards enhances verbal processing. |
Marije Michel; Andrea Révész; Xiaojun Lu; Nektaria Efstathia Kourtali; Minjin Lee; Lais Borges Investigating L2 writing processes across independent and integrated tasks: A mixed-methods study Journal Article In: Second Language Research, vol. 36, no. 3, pp. 307–334, 2020. @article{Michel2020, Most research into second language (L2) writing has focused on the products of writing tasks; much less empirical work has examined the behaviours in which L2 writers engage and the cognitive processes that underlie writing behaviours. We aimed to fill this gap by investigating the extent to which writing speed fluency, pausing, eye-gaze behaviours and the cognitive processes associated with pausing may vary across independent and integrated tasks throughout the whole, and at five different stages, of the writing process. Sixty L2 writers performed two independent and two integrated TOEFL iBT writing tasks counterbalanced across participants. While writing, we logged participants' keystrokes and captured their eye-movements. Participants took part in a stimulated recall interview based on the last task they had completed. Mixed effects regressions and qualitative analyses revealed that, apart from source use on the integrated task, L2 writers engaged in similar writing behaviours and cognitive processes during the independent and integrated tasks. The integrated task, however, elicited more dynamic and varied behaviours and cognitive processes across writing stages. Adopting a mixed-methods approach enabled us to gain more complete and specific insights than using a single method. |
Ailsa E. Millen; Lorraine Hope; Anne P. Hillstrom Eye spy a liar: Assessing the utility of eye fixations and confidence judgments for detecting concealed recognition of faces, scenes and objects Journal Article In: Cognitive Research: Principles and Implications, vol. 5, no. 38, pp. 1–18, 2020. @article{Millen2020, Background: In criminal investigations, uncooperative witnesses might deny knowing a perpetrator, the location of a murder scene or knowledge of a weapon. We sought to identify markers of recognition in eye fixations and confidence judgments whilst participants told the truth and lied about recognising faces (Experiment 1) and scenes and objects (Experiment 2) that varied in familiarity. To detect recognition we calculated effect size differences in markers of recognition between familiar and unfamiliar items that varied in familiarity (personally familiar, newly learned). Results: In Experiment 1, recognition of personally familiar faces was reliably detected across multiple fixation markers (e.g. fewer fixations, fewer interest areas viewed, fewer return fixations) during honest and concealed recognition. In Experiment 2, recognition of personally familiar non-face items (scenes and objects) was detected solely by fewer fixations during honest and concealed recognition; differences in other fixation measures were not consistent. In both experiments, fewer fixations exposed concealed recognition of newly learned faces, scenes and objects, but the same pattern was not observed during honest recognition. Confidence ratings were higher for recognition of personally familiar faces than for unfamiliar faces. Conclusions: Robust memories of personally familiar faces were detected in patterns of fixations and confidence ratings, irrespective of task demands required to conceal recognition. 
Crucially, we demonstrate that newly learned faces should not be used as a proxy for real-world familiarity, and that conclusions should not be generalised across different types of familiarity or stimulus class. |
Krista A. Miller; Gary E. Raney; Alexander P. Demos Time to throw in the towel? No evidence for automatic conceptual metaphor access in idiom processing Journal Article In: Journal of Psycholinguistic Research, vol. 49, no. 5, pp. 885–913, 2020. @article{Miller2020, The goal of the current research was to determine if conceptual metaphors are activated when people read idioms within a text. Participants read passages that included idioms that were consistent (blow your top) or inconsistent (bite his head off) with an underlying conceptual metaphor (ANGER IS HEATED FLUID IN A CONTAINER) followed by target words that were related (heat) or unrelated (lead) to the conceptual metaphor. Reading time (Experiment 1) or lexical decision time (Experiment 2) for the target words were measured. We found no evidence supporting conceptual metaphor activation. Target word reading times were unaffected by whether they were related or unrelated to underlying conceptual metaphors. Lexical decision times were facilitated for related target words in both the consistent and inconsistent idiom conditions. We suggest that the conceptual (target) domain, not a specific underlying conceptual metaphor, facilitates processing of related target words. |
Jonathan Mirault; Jeremy Yeaton; Fanny Broqua; Stéphane Dufau; Phillip J. Holcomb; Jonathan Grainger Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study Journal Article In: Psychophysiology, vol. 57, no. 8, pp. e13553, 2020. @article{Mirault2020, When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that this should be the case. Here we focus on perhaps the simplest and the strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded electroencephalogram and eye-movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important is that we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1, while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel. |
Sanako Mitsugi Generating predictions based on semantic categories in a second language: A case of numeral classifiers in Japanese Journal Article In: IRAL - International Review of Applied Linguistics in Language Teaching, vol. 58, no. 3, pp. 323–349, 2020. @article{Mitsugi2020, This study examined whether native Japanese speakers and second language (L2) speakers of Japanese use information from numeral classifiers to predict possible referents. Using a visual-world eye-tracking paradigm, we asked participants to identify picture objects that take either the same or different numeral classifiers while they listened to Japanese sentences referring to one object. The results showed that native speakers looked to the target predictively more often when the classifier was informative about noun identity than when it was not. L2 learners also showed a facilitative effect of classifiers that was comparable to that of native speakers. In addition, we found that the level of proficiency played a role in the speed of real-time referent resolutions when the participants heard the target nouns in audio input. However, such an effect was not observed during the period when the predictions were generated. |
Holger Mitterer; Sahyang Kim; Taehong Cho Datasets on the production and perception of underlying and epenthetic glottal stops in Maltese Journal Article In: Data in Brief, vol. 30, pp. 1–9, 2020. @article{Mitterer2020, This article provides some supplementary analysis data of speech production and perception of glottal stops in the Semitic language Maltese. In Maltese, a glottal stop can occur as a phoneme, but also as a phonetic marker of vowel-initial words (as in the case with Germanic languages like English). Data from four experiments are provided, which will allow other researchers to reproduce the results and apply their own data-analysis techniques to these data for further data exploration. A production experiment (Experiment 1) investigates how often the glottal marking of vowel-initial words occurs (causing vowel-initial words to be ambiguous with words starting with a glottal stop as a phoneme) and whether the glottal gesture for this marking can be differentiated from an underlying (phonemic) glottal stop in its acoustic properties. Experiments 2 to 4 investigate how and to what extent Maltese listeners perceive glottal markings as lexical (phonemic) or epenthetic (phonetic), using a two-alternative forced choice task (Experiment 2), a visual-world eye tracking task with printed target words (Experiment 3) and a gating task (Experiment 4). A full account of theoretical consequences of these data can be found in the full length article entitled “The glottal stop between segmental and suprasegmental processing: The case of Maltese” [1]. |
Francisco J. Moreno-Pérez; Isabel R. Rodríguez-Ortiz; Gema Tavares; David Saldaña Comprehending reflexive and clitic constructions in children with autism spectrum disorder and developmental language disorder Journal Article In: International Journal of Language and Communication Disorders, vol. 55, no. 6, pp. 884–898, 2020. @article{MorenoPerez2020, Background: It has been established that people with autism spectrum disorder (ASD) often have difficulties understanding spoken language. Understanding reflexive and clitic pronouns is vital to establishing reference-based inference, but it is as yet unclear whether such constructions pose specific difficulties for those with ASD. Pronoun interpretation seems to be connected to the development of pragmatic abilities, and can therefore be considered a plausible marker in the differential diagnosis between ASD and developmental language disorder (DLD). Aims: To establish whether or not there are differences between ASD and DLD in relation to their understanding of pronoun constructions (both reflexive and clitic). The working hypothesis was that although no differences were expected between groups in relation to automatic (online) pronoun processing, the comprehension of reflexive pronouns would constitute a diagnostic marker between the group with ASD and language disorder and the DLD group. Methods & Procedures: The study carried out two experiments with three clinical groups (two with ASD and different levels of language proficiency and one with DLD) and two control groups with typically developing people (with equivalent language levels), analysing their on- and offline processing in pronoun resolution tasks. The first experiment uses an online method (eye-tracking) to record pronoun processing in real time. The second uses an offline method to analyse comprehension accuracy. 
Outcomes & Results: The results of the two experiments indicated no differences in the way in which the clinical and control groups resolved the tasks, but a shorter reaction time was observed only in the age-matched control group in comparison with the ASD group without language disorder in the first experiment, perhaps due to the fact that processing pronouns involves a greater cognitive load among the latter group. Conclusions & Implications: The comprehension of reflexive pronouns cannot be considered a diagnostic marker for distinguishing ASD from DLD. What this paper adds What is already known on the subject Previous studies have found that the performance of children with ASD in the comprehension of personal pronouns is equivalent to that of the youngest control groups, but poorer regarding the interpretation of reflexive pronouns. However, children with DLD do not usually have problems with the use of pronouns, which suggests that their pronoun processing is not affected. As pronoun interpretation seems to be connected to the development of pragmatic abilities, it could be considered a plausible marker in the differential diagnosis between ASD and DLD. What this paper adds to existing knowledge This paper presents the results of two experiments involving pronoun processing by those with ASD (both with and without language disorder) and those with DLD. The design enables us to analyse the reflexive and clitic pronoun processing in people with ASD and DLD, regardless of their language proficiency. One experiment uses an eye-tracking methodology that allows us to obtain data about how the pronouns are processed in real time. It represents an attempt to identify language markers that may help distinguish between the two groups and adapt the interventions to the specific problems experienced by each one. What are the potential or actual clinical implications of this work? 
The results indicate that it is not possible to identify any specific impairment in pronoun processing among the clinical groups (ASD and DLD). |
Laura M. Morett; Jennifer M. Roche; Scott H. Fraundorf; James C. McPartland Contrast Is in the eye of the beholder: Infelicitous beat gesture increases cognitive load during online spoken discourse comprehension Journal Article In: Cognitive Science, vol. 44, no. 10, pp. 1–46, 2020. @article{Morett2020, We investigated how two cues to contrast—beat gesture and contrastive pitch accenting—affect comprehenders' cognitive load during processing of spoken referring expressions. In two visual-world experiments, we orthogonally manipulated the presence of these cues and their felicity, or fit, with the local (sentence-level) referential context in critical referring expressions while comprehenders' task-evoked pupillary responses (TEPRs) were examined. In Experiment 1, beat gesture and contrastive accenting always matched the referential context of filler referring expressions and were therefore relatively felicitous on the global (experiment) level, whereas in Experiment 2, beat gesture and contrastive accenting never fit the referential context of filler referring expressions and were therefore infelicitous on the global level. The results revealed that both beat gesture and contrastive accenting increased comprehenders' cognitive load. For beat gesture, this increase in cognitive load was driven by both local and global infelicity. For contrastive accenting, this increase in cognitive load was unaffected when cues were globally felicitous but exacerbated when cues were globally infelicitous. Together, these results suggest that comprehenders' cognitive resources are taxed by processing infelicitous use of beat gesture and contrastive accenting to convey contrast on both the local and global levels. |
Adam M. Morgan; Titus Malsburg; Victor S. Ferreira; Eva Wittenberg Shared syntax between comprehension and production: Multi-paradigm evidence that resumptive pronouns hinder comprehension Journal Article In: Cognition, vol. 205, pp. 1–21, 2020. @article{Morgan2020, Language comprehension and production are generally assumed to use the same representations, but resumption poses a problem for this view: This structure is regularly produced, but judged highly unacceptable. Production-based solutions to this paradox explain resumption in terms of processing pressures, whereas the Facilitation Hypothesis suggests resumption is produced to help listeners comprehend. Previous research purported to support the Facilitation Hypothesis did not test its keystone prediction: that resumption improves accuracy of interpretation. Here, we test this prediction directly, controlling for factors that previous work did not. Results show that resumption in fact hinders comprehension in the same sentences that native speakers produced, a finding which replicated across four high-powered experiments with varying paradigms: sentence-picture matching (N=300), self-paced reading (N=96), visual world eye-tracking (N=96), and multiple-choice comprehension question (N=150). These findings are consistent with production-based accounts, indicating that comprehension and production may indeed share representations, although our findings point toward a limit on the degree of overlap. Methodologically speaking, the findings highlight the importance of measuring interpretation when studying comprehension. |
Malik M. Naeem Mannan; M. Ahmad Kamran; Shinil Kang; Hak Soo Choi; Myung Yung Jeong A hybrid speller design using eye tracking and SSVEP brain–computer interface Journal Article In: Sensors, vol. 20, no. 3, pp. 1–20, 2020. @article{NaeemMannan2020, Steady‐state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to the advantages of robustness, large number of commands, high classification accuracies, and information transfer rates (ITRs). However, the use of several simultaneous flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose to design a stimuli‐responsive hybrid speller by using electroencephalography (EEG) and video‐based eye‐tracking to increase user comfort levels when presented with large numbers of simultaneously flickering stimuli. Interestingly, a canonical correlation analysis (CCA)‐based framework was useful to identify the target frequency with a 1 s duration of flickering signal. Our proposed BCI‐speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI‐spellers use as many frequencies as there are targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued‐spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free‐spelling task. Consequently, our proposed speller is superior to the other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye tracking and SSVEP BCI‐based system will ultimately enable a truly high-speed communication channel. |
Leanne Nagels; Roelien Bastiaanse; Deniz Başkent; Anita Wagner Individual differences in lexical access among cochlear implant users Journal Article In: Journal of Speech, Language, and Hearing Research, vol. 63, pp. 286–304, 2020. @article{Nagels2020, Purpose: The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word–nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method: Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results: In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word–nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions: The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word–nonword sensitivity explained different parts of individual variability than clinical speech perception scores. 
These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation. |
Chie Nakamura; Manabu Arai; Yuki Hirose; Suzanne Flynn In: Frontiers in Psychology, vol. 10, pp. 2835, 2020. @article{Nakamura2020, It has long been debated whether non-native speakers can process sentences in the same way as native speakers do, or whether they suffer from a qualitative deficit in their language comprehension ability. The current study examined the influence of prosodic and visual information in processing sentences with a temporarily ambiguous prepositional phrase (“Put the cake on the plate in the basket”) with native English speakers and Japanese learners of English. Specifically, we investigated (1) whether native speakers assign different pragmatic functions to the same prosodic cues used in different contexts and (2) whether L2 learners can reach the correct analysis by integrating prosodic cues with syntax with reference to the visually presented contextual information. The results from native speakers showed that contrastive accents helped to resolve the referential ambiguity when a contrastive pair was present in visual scenes. However, without a contrastive pair in the visual scene, native speakers were slower to reach the correct analysis with the contrastive accent, which supports the view that the pragmatic function of intonation categories is highly context-dependent. The results from L2 learners showed that visually presented context alone helped L2 learners to reach the correct analysis. However, L2 learners were unable to assign contrastive meaning to the prosodic cues when there were two potential referents in the visual scene. The results suggest that L2 learners are not capable of integrating multiple sources of information in an interactive manner during real-time language comprehension. |
M. J. Nelson; S. Moeller; A. Basu; L. Christopher; E. J. Rogalski; M. Greicius; S. Weintraub; B. Bonakdarpour; R. S. Hurley; M. M. Mesulam Taxonomic interference associated with phonemic paraphasias in agrammatic primary progressive aphasia Journal Article In: Cerebral Cortex, vol. 30, no. 4, pp. 2529–2541, 2020. @article{Nelson2020, Phonemic paraphasias are thought to reflect phonological (post-semantic) deficits in language production. Here we present evidence that phonemic paraphasias in non-semantic primary progressive aphasia (PPA) may be associated with taxonomic interference. Agrammatic and logopenic PPA patients and control participants performed a word-to-picture visual search task where they matched a stimulus noun to 1 of 16 object pictures as their eye movements were recorded. Participants were subsequently asked to name the same items. We measured taxonomic interference (ratio of time spent viewing related vs. unrelated foils) during the search task for each item. Target items that elicited a phonemic paraphasia during object naming elicited increased taxonomic interference during the search task in agrammatic but not logopenic PPA patients. These results could reflect either very subtle sub-clinical semantic distortions of word representations or partial degradation of specific phonological word forms in agrammatic PPA during both word-to-picture matching (input stage) and picture naming (output stage). The mechanism for phonemic paraphasias in logopenic patients seems to be different and to be operative at the pre-articulatory stage of phonological retrieval. Glucose metabolic imaging suggests that degeneration in the left posterior frontal lobe and left temporo-parietal junction, respectively, might underlie these different patterns of phonemic paraphasia. |
Thomas Geyer; Franziska Günther; Hermann J. Müller; Jim Kacian; Heinrich René Liesefeld; Stella Pierides Reading English-language haiku: An eye-movement study of the 'cut effect' Journal Article In: Journal of Eye Movement Research, vol. 13, no. 2, pp. 1–29, 2020. @article{Geyer2020, The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the 'cut', a more or less sharp semantic-conceptual break, in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) 'phrase' image and a one-line 'fragment' image, in order to determine how they process the conceptual gap between these images when constructing the poem's meaning, as reflected in their patterns of reading eye movements. In addition to replicating the basic 'cut effect', i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of 'uncut', one-image haiku. For 'cut' vs. 'uncut' haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context-action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. 
We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems. |
Peter C. Gordon; Mariah Moore; Wonil Choi; Renske S. Hoedemaker; Matthew W. Lowder Individual differences in reading: Separable effects of reading experience and processing skill Journal Article In: Memory & Cognition, vol. 48, no. 4, pp. 553–565, 2020. @article{Gordon2020, A large-scale eye-tracking study examined individual variability in measures of word recognition during reading among 546 college students, focusing on two established individual-differences measures: the Author Recognition Test (ART) and Rapid Automatized Naming (RAN). ART and RAN were only slightly correlated, suggesting that the two tasks reflect independent cognitive abilities in this large sample of participants. Further, individual variability in ART and RAN scores was related to distinct facets of word-recognition processes. Higher ART scores were associated with increased skipping rates, shorter gaze duration, and reduced effects of word frequency on gaze duration, suggesting that this measure reflects efficiency of basic processes of word recognition during reading. In contrast, faster times on RAN were associated with enhanced foveal-on-parafoveal effects, fewer first-pass regressions, and shorter second-pass reading times, suggesting that this measure reflects efficient coordination of perceptual-motor and attentional processing during reading. These results demonstrate that ART and RAN tasks make independent contributions to predicting variability in word-recognition processes during reading. |
Margaret Grant; Shayne Sloggett; Brian Dillon Processing ambiguities in attachment and pronominal reference Journal Article In: Glossa: a journal of general linguistics, vol. 5, no. 1, pp. 1–30, 2020. @article{Grant2020a, The nature of ambiguity resolution has important implications for models of sentence processing in general. Studies of structural ambiguities, such as modifier attachment ambiguities, have generally supported a model in which a single analysis of ambiguous material is adopted without a cost to processing. Concurrently, a separate literature has observed a processing penalty for ambiguities in pronominal reference, suggesting that potential referents compete for selection during the processing of ambiguous pronouns. We argue that the apparent distinction between the ambiguity resolution mechanisms in attachment and pronominal reference ambiguities warrants further study. We present evidence from two experiments measuring eye movements during reading, showing that the separation held in the literature between these two ambiguity types is, at least, not uniformly supported. |
Matteo Greco; Paolo Canal; Valentina Bambini; Andrea Moro Modulating “surprise” with syntax: A study on negative sentences and eye-movement recording Journal Article In: Journal of Psycholinguistic Research, vol. 49, no. 3, pp. 415–434, 2020. @article{Greco2020, This work focuses on a particular case of negative sentences, the Surprise Negation sentences (SNEGs). SNEGs belong to the class of expletive negation sentences, i.e., they are affirmative in meaning but involve a clausal negation. A clear example is offered by Italian: ‘E non mi è scesa dal treno Maria?!' (lit. ‘and not CLITIC.to_me is got off-the train Mary' = ‘The surprise was that Maria got off the train!'). From a theoretical point of view, the interpretation of SNEGs as affirmative can be derived from their specific syntactic and semantic structure. Here we offer an implementation of the visual world paradigm to test how SNEGs are interpreted. Participants listened to affirmative, negative or expletive negative clauses while four objects (two relevant—either expected or unexpected—and two unrelated) were shown on the screen and their eye movements were recorded. Growth Curve Analysis showed that the fixation patterns to the relevant objects were very similar for affirmative and expletive negative sentences, while striking differences were observed between negative and affirmative sentences. These results showed that negation plays a different role in the mental representation of a sentence, depending on its syntactic derivation. Moreover, we also found that, compared to affirmative sentences, SNEGs require higher processing efforts due to both their syntactic complexity and pragmatic integration, with slower response times and lower accuracy. |
Jeffrey J. Green; Michael McCourt; Ellen Lau; Alexander Williams Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution Journal Article In: Glossa: a journal of general linguistics, vol. 5, no. 1, pp. 1–33, 2020. @article{Green2020, The comprehension of anaphoric relations may be guided not only by discourse, but also by syntactic information. In the literature on online processing, however, the focus has been on audible pronouns and descriptions whose reference is resolved mainly on the basis of the former. This paper examines one relation that both lacks overt exponence, and relies almost exclusively on syntax for its resolution: adjunct control, or the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eyetracking, we compare the timecourse of interpreting this null subject and overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether or not the dependency is cued by an overt referring expression. |
Kristi Hendrickson; Jessica Spinelli; Elizabeth Walker Cognitive processes underlying spoken word recognition during soft speech Journal Article In: Cognition, vol. 198, pp. 1–15, 2020. @article{Hendrickson2020, In two eye-tracking experiments using the Visual World Paradigm, we examined how listeners recognize words when faced with speech at lower intensities (40, 50, and 65 dBA). After hearing the target word, participants (n = 32) clicked the corresponding picture from a display of four images – a target (e.g., money), a cohort competitor (e.g., mother), a rhyme competitor (e.g., honey) and an unrelated item (e.g., whistle) – while their eye-movements were tracked. For slightly soft speech (50 dBA), listeners demonstrated an increase in cohort activation, whereas for rhyme competitors, activation started later and was sustained longer in processing. For very soft speech (40 dBA), listeners waited until later in processing to activate potential words, as illustrated by a decrease in activation for cohorts, and an increase in activation for rhymes. Further, the extent to which words were considered depended on word length (mono- vs. bi-syllabic words), and speech-extrinsic factors such as the surrounding listening environment. These results advance current theories of spoken word recognition by considering a range of speech levels more typical of everyday listening environments. From an applied perspective, these results motivate models of how individuals who are hard of hearing approach the task of recognizing spoken words. |
Annina K. Hessel; Sascha Schroeder Interactions between lower- and higher-level processing when reading in a second language: An eye-tracking study Journal Article In: Discourse Processes, vol. 57, no. 10, pp. 940–964, 2020. @article{Hessel2020a, This experiment investigated interactions between lower- and higher-level processing when reading in a second language (L2). We conducted an eye-tracking experiment with a within-subject manipulation of inconsistency (to tap higher-level coherence-building) crossed with a within-subject manipulation of word-processing difficulty (to alter the ease of lower-level processing), both manipulated on the text level. Sixty-three L2 learners read 48 short expository texts containing inconsistencies created through mismatches between pretargets such as soya and targets such as corn, or consistent controls. Word-processing difficulty was manipulated by inserting either shorter and higher-frequency words such as often or longer and lower-frequency words such as increasingly. We found evidence of interactions between lower-level word-processing difficulty and higher-level coherence building, as revealed by a reduced inconsistency effect in go-past durations and rereading in the difficult condition. This effect did not, however, extend to targeted regressions into inconsistent information. Our findings provide the first experimental evidence for online interactions between lower-level word processing and higher-level coherence building. |
Florian Hintz; Antje S. Meyer; Falk Huettig Visual context constrains language-mediated anticipatory eye movements Journal Article In: Quarterly Journal of Experimental Psychology, vol. 73, no. 3, pp. 458–467, 2020. @article{Hintz2020, Contemporary accounts of anticipatory language processing assume that individuals predict upcoming information at multiple levels of representation. Research investigating language-mediated anticipatory eye gaze typically assumes that linguistic input restricts the domain of subsequent reference (visual target objects). Here, we explored the converse case: Can visual input restrict the dynamics of anticipatory language processing? To this end, we recorded participants' eye movements as they listened to sentences in which an object was predictable based on the verb's selectional restrictions (“The man peels a banana”). While listening, participants looked at different types of displays: the target object (banana) was either present or it was absent. On target-absent trials, the displays featured objects that had a similar visual shape as the target object (canoe) or objects that were semantically related to the concepts invoked by the target (monkey). Each trial was presented in a long preview version, where participants saw the displays for approximately 1.78 s before the verb was heard (pre-verb condition), and a short preview version, where participants saw the display approximately 1 s after the verb had been heard (post-verb condition), 750 ms prior to the spoken target onset. Participants anticipated the target objects in both conditions. Importantly, robust evidence for predictive looks to objects related to the (absent) target objects in visual shape and semantics was found in the post-verb but not in the pre-verb condition. These results suggest that visual information can restrict language-mediated anticipatory gaze and delineate theoretical accounts of predictive processing in the visual world. |
2019 |
Fabio Parente; Kathy Conklin; Josephine Guy; Gareth Carrol; Rebekah Scott Reader expertise and the literary significance of small-scale textual features in prose fiction Journal Article In: Scientific Study of Literature, vol. 9, no. 1, pp. 3–33, 2019. @article{Parente2019, We use eye tracking to investigate the attention readers pay to different textual features to determine their significance in the appreciation of prose fiction. Previous research examined attention allocation to lexical and punctuation variants, and the impact on reading dynamics for the remainder of the text, demonstrating that readers notice both kinds of variants but assign less value to the latter (Carrol, Conklin, Guy, & Scott, 2016). Here, in two experiments, we examine two conditions that may affect attention allocation: We investigate the influence of reader expertise (Experiment 1) and whether performance is influenced by a task-specific “spot-the-difference” effect (Experiment 2). We found that expertise plays little role in readers' greater sensitivity to lexical rather than punctuation changes, and that the advantage for lexical changes persisted when the time interval between exposures is increased. These results confirm earlier findings: that small-scale features may not possess the creative significance predicated of them by critics and text-editors. |
Chun-Ting Hsu; Roy Clariana; Benjamin Schloss; Ping Li Neurocognitive signatures of naturalistic reading of scientific texts: A fixation-related fMRI study Journal Article In: Scientific Reports, vol. 9, pp. 10678, 2019. @article{Hsu2019, How do students gain scientific knowledge while reading expository text? This study examines the underlying neurocognitive basis of textual knowledge structure and individual readers' cognitive differences and reading habits, including the influence of text and reader characteristics, on outcomes of scientific text comprehension. By combining fixation-related fMRI and multiband data acquisition, the study is among the first to consider self-paced naturalistic reading inside the MRI scanner. Our results revealed the underlying neurocognitive patterns associated with information integration of different time scales during text reading, and significant individual differences due to the interaction between text characteristics (e.g., optimality of the textual knowledge structure) and reader characteristics (e.g., electronic device use habits). Individual differences impacted the amount of neural resources deployed for multitasking and information integration for constructing the underlying scientific mental models based on the text being read. Our findings have significant implications for understanding science reading in a population that is increasingly dependent on electronic devices. |
Dexiang Zhang; Jukka Hyönä; Lei Cui; Zhaoxia Zhu; Shouxin Li Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study Journal Article In: Learning and Instruction, vol. 64, pp. 101246, 2019. @article{Zhang2019b, Effects of task instructions and topic signaling on text processing among adult readers with different reading styles were studied by eye-tracking. In Experiment 1, readers read two multiple-topic expository texts guided either by a summary or a verification task. In Experiment 2, readers read a text with or without the topic sentences underlined. Four types of readers emerged: topic structure processors (TSPs), fast linear readers (FLRs), slow linear readers (SLRs), and nonselective reviewers (NSRs). TSPs paid ample fixation time on topic sentences regardless of their signaling. FLRs were characterized by fast first-pass reading, little rereading of previous text, and some signs of structure processing. The common feature of SLRs and NSRs was their slow first-pass reading. They differed from each other in that NSRs were characterized by spending ample time also during second-pass reading. They only showed some signs of topic structure processing when cued by task instructions or topic signaling. |
Jarkko Hautala; Otto Loberg; Najla Azaiez; Sara Taskinen; Simon P. Tiffin-Richards; Paavo H. T. Leppänen What information should I look for again? Attentional difficulties distracts reading of task assignments Journal Article In: Learning and Individual Differences, vol. 75, pp. 101775, 2019. @article{Hautala2019, This large-scale eye-movement study (N=164) investigated how students read short task assignments to complete information search problems and how their cognitive resources are associated with this reading behavior. These cognitive resources include information searching subskills, prior knowledge, verbal memory, reading fluency, and attentional difficulties. In this study, the task assignments consisted of four sentences. The first and last sentences provided context, while the second or third sentence was the relevant or irrelevant sentence under investigation. The results of a linear mixed-model and latent change score analyses showed the ubiquitous influence of reading fluency on first-pass eye movement measures, and the effects of sentence relevancy on making more and longer reinspections and look-backs to the relevant than irrelevant sentence. In addition, the look-backs to the relevant sentence were associated with better information search subskills. Students with attentional difficulties made substantially fewer look-backs specifically to the relevant sentence. These results provide evidence that selective look-backs are used as an important index of comprehension monitoring independent of reading fluency. In this framework, slow reading fluency was found to be associated with laborious decoding but with intact comprehension monitoring, whereas attention difficulty was associated with intact decoding but with deficiency in comprehension monitoring. |
Bob McMurray; Jamie Klein-Packard; J. Bruce Tomblin A real-time mechanism underlying lexical deficits in developmental language disorder: Between-word inhibition Journal Article In: Cognition, vol. 191, pp. 104000, 2019. @article{McMurray2019a, Eight to 11% of children have a clinical disorder in oral language (Developmental Language Disorder, DLD). Language deficits in DLD can affect all levels of language and persist through adulthood. Word-level processing may be critical as words link phonology, orthography, syntax and semantics. Thus, a lexical deficit could cascade throughout language. Cognitively, word recognition is a competition process: as the input (e.g., lizard) unfolds, multiple candidates (liver, wizard) compete for recognition. Children with DLD do not fully resolve this competition, but it is unclear what cognitive mechanisms underlie this. We examined lexical inhibition—the ability of more active words to suppress competitors—in 79 adolescents with and without DLD. Participants heard words (e.g. net) in which the onset was manipulated to briefly favor a competitor (neck). This was predicted to inhibit the target, slowing recognition. Word recognition was measured using a task in which participants heard the stimulus, and clicked on a picture of the item from an array of competitors, while eye-movements were monitored as a measure of how strongly the participant was committed to that interpretation over time. TD listeners showed evidence of inhibition with greater interference for stimuli that briefly activated a competitor word. DLD listeners did not. This suggests deficits in DLD may stem from a failure to engage lexical inhibition. This in turn could have ripple effects throughout the language system. This supports theoretical approaches to DLD that emphasize lexical-level deficits, and deficits in real-time processing. |
Michelle Perdomo; Edith Kaan Prosodic cues in second-language speech processing: A visual world eye-tracking study Journal Article In: Second Language Research, 2019. @article{Perdomo2019, Listeners interpret cues in speech processing immediately rather than waiting until the end of a sentence. In particular, prosodic cues in auditory speech processing can aid listeners in building information structure and contrast sets. Native speakers even use this information in combination with syntactic and semantic information to build mental representations predictively. Research on second-language (L2) learners suggests that learners have difficulty integrating linguistic information across various domains, likely subject to L2 proficiency levels. The current study investigated eye-movement behavior of native speakers of English and Chinese learners of English in their use of contrastive intonational cues to restrict the set of upcoming referents in a visual world paradigm. Both native speakers and learners used contrastive pitch accent to restrict the set of referents. Whereas native speakers anticipated the upcoming set of referents, this was less clear in the L2 learners. This suggests that learners are able to integrate information across multiple domains to build information structure in the L2 but may not do so predictively. Prosodic processing was not affected by proficiency or working memory in the L2 speakers. |
Elizabeth R. Schotter; Anna Marie Fennell Readers can identify the meanings of words without looking at them: Evidence from regressive eye movements Journal Article In: Psychonomic Bulletin & Review, vol. 26, no. 5, pp. 1697–1704, 2019. @article{Schotter2019, Previewing words prior to fixating them leads to faster reading, but does it lead to word identification (i.e., semantic encoding)? We tested this with a gaze-contingent display change study and a subsequent plausibility manipulation. Both the preview and the target words were plausible when encountered, and we manipulated the end of the sentence so that the different preview was rendered implausible (in critical sentences) or remained plausible (in neutral sentences). Regressive saccades from the end of the sentence increased when the preview was rendered implausible compared to when it was plausible, especially when the preview was high frequency. These data add to a growing body of research suggesting that linguistic information can be obtained during preview, to the point where word meaning is accessed. In addition, these findings suggest that the meaning of the fixated target does not always override the semantic information obtained during preview. |
Georgia Zellou; Delphine Dahan Listeners maintain phonological uncertainty over time and across words: The case of vowel nasality in English Journal Article In: Journal of Phonetics, vol. 76, pp. 1–20, 2019. @article{Zellou2019, While the fact that phonetic information is evaluated in a non-discrete, probabilistic fashion is well established, there is less consensus regarding how long such encoding is maintained. Here, we examined whether people maintain in memory the amount of vowel nasality present in a word when processing a subsequent word that holds a semantic dependency with the first one. Vowel nasality in English is an acoustic correlate of the oral vs. nasal status of an adjacent consonant, and sometimes it is the only distinguishing phonetic feature (e.g., bet vs. bent). In Experiment 1, we show that people can perceive differences in nasality between two vowels above and beyond differences in the categorization of those vowels. In Experiment 2, we tracked listeners' eye-movements as they heard a sentence that mentioned one of four displayed images (e.g., ‘money') following a prime word (e.g., ‘bet') that held a semantic relationship with the target word. Recognition of the target was found to be modulated by the degree of nasality in the first word's vowel: Slightly greater uncertainty regarding the oral status of the post-vocalic consonant in the first word translated into a weaker semantic cue for the identification of the second word. Thus, listeners appear to maintain in memory the degree of vowel nasality they perceived on the first word and bring this information to bear onto the interpretation of a subsequent, semantically-dependent word. Probabilistic cue integration across words that hold semantic coherence, we argue, contributes to achieving robust language comprehension despite the inherent ambiguity of the speech signal. |
Jessica E. Goold; Wonil Choi; John M. Henderson Cortical control of eye movements in natural reading: Evidence from MVPA Journal Article In: Experimental Brain Research, vol. 237, no. 12, pp. 3099–3107, 2019. @article{Goold2019, Language comprehension during reading requires fine-grained management of saccadic eye movements. A critical question, therefore, is how the brain controls eye movements in reading. Neural correlates of simple eye movements have been found in multiple cortical regions, but little is known about how this network operates in reading. To investigate this question in the present study, participants were presented with normal text, pseudo-word text, and consonant string text in a magnetic resonance imaging (MRI) scanner with eyetracking. Participants read naturally in the normal text condition and moved their eyes “as if they were reading” in the other conditions. Multi-voxel pattern analysis was used to analyze the fMRI signal in the oculomotor network. We found that activation patterns in a subset of network regions differentiated between stimulus types. These results suggest that the oculomotor network reflects more than simple saccade generation and are consistent with the hypothesis that specific network areas interface with cognitive systems. |
Dato Abashidze; Maria Nella Carminati; Pia Knoeferle Anticipating a future versus integrating a recent event? Evidence from eye-tracking Journal Article In: Acta Psychologica, vol. 200, pp. 102916, 2019. @article{Abashidze2019, When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely on further, experience-based (e.g., frequency or an actor's gaze) cues. How comprehenders reconcile these different cues in real time is an open issue. Extant results suggest that comprehenders preferentially relate their unfolding interpretation to a recent event by inspecting its target object. We investigated to what extent this recent-event preference could be overridden by short-term experiential and situation-specific cues. In Experiments 1–2 participants saw substantially more future than recent events and listened to more sentences about future events (75% in Experiment 1 and 88% in Experiment 2). Experiment 3 cued future target objects and event possibilities via an actor's gaze. The event frequency increase yielded a reduction in the recent event inspection preference early during sentence processing in Experiments 1–2 compared with Experiment 3 (where event frequency and utterance tense were balanced) but did not eliminate the overall recent-event preference. Actor gaze also modulated the recent-event preference, and jointly with future tense led to its reversal in Experiment 3. However, our results showed that people overall preferred to focus on recent (vs. future) events in their interpretation, suggesting that while two cues (actor gaze and short-term event frequency) can partially override the recent-event preference, the latter still plays a key role in shaping participants' interpretation. |
Jesse A. Harris; Katy Carlson Correlate not optional: PP sprouting and parallelism in “much less” ellipsis Journal Article In: Glossa: a journal of general linguistics, vol. 4, no. 1, pp. 1–35, 2019. @article{Harris2019, Clauses that are parallel in form and meaning show processing advantages in ellipsis and coordination structures (Frazier et al. 1984; Kehler 2000; Carlson 2002). However, the constructions that have been used to show a parallelism advantage do not always require a strong semantic relationship between clauses. We present two eye tracking while reading studies on focus-sensitive coordination structures, an understudied form of ellipsis which requires the generation of a contextually salient semantic relation or scale between conjuncts. However, when the remnant of ellipsis lacks an overt correlate in the matrix clause and must be “sprouted” in the ellipsis site, the relation between clauses is simplified to entailment. Instead of facilitation for sentences with an entailment relation between clauses, our online processing results suggest that violating parallelism is costly, even when doing so could ease the semantic relations required for interpretation. |
Benjamin T. Carter; Steven G. Luke In: Data in Brief, vol. 25, pp. 1–21, 2019. @article{Carter2019a, The data presented in this document was created to explore the effect of including or excluding word length, word frequency, the lexical predictability of function words and first pass reading time (or the duration of the first fixation on a word) as either baseline regressors or duration modulators on the final analysis for a fixation-related fMRI investigation of linguistic processing. The effect of these regressors was a central question raised during the review of Linguistic networks associated with lexical, semantic and syntactic predictability in reading: A fixation-related fMRI study [1]. Three datasets were created and compared to the original dataset to determine their effect. The first examines the effect of adding word length and word frequency as baseline regressors. The second examines the effect of removing first pass reading time as a duration modulator. The third examines the inclusion of function word predictability into the baseline hemodynamic response function. Statistical maps were created for each dataset and compared to the primary dataset (published in [1]) across the linguistic conditions of the initial dataset (lexical predictability, semantic predictability or syntax predictability). |
Lijing Chen; Kevin B. Paterson; Xingshan Li; Lin Li; Yufang Yang Pragmatic influences on sentence integration: Evidence from eye movements Journal Article In: Quarterly Journal of Experimental Psychology, vol. 72, no. 12, pp. 2742–2751, 2019. @article{Chen2019b, To understand a discourse, readers must rapidly process semantic and syntactic information and extract the pragmatic information these sources imply. An important question concerns how this pragmatic information influences discourse processing in return. We address this issue in two eye movement experiments that investigate the influence of pragmatic inferences on the processing of inter-sentence integration. In Experiments 1a and 1b, participants read two-sentence discourses in Chinese in which the first sentence introduced an event and the second described its consequence, where the sentences were linked using either the causal connective “suoyi” (meaning “so” or “therefore”) or not. The second sentence included a target word that was unmarked or marked using the focus particle “zhiyou” (meaning “only”) in Experiment 1a or “shi” (equivalent to an it-cleft) in Experiment 1b. These particles have the pragmatic function of implying a contrast between a target element and its alternatives. The results showed that while the causal connective facilitated the processing of unmarked words in causal contexts (a connective facilitation effect), this effect was eliminated by the presence of the focus particle. This implies that contrastive information is inferred sufficiently rapidly during reading that it can influence semantic processes involved in sentence integration. Experiment 2 showed that disruption due to conflict between the processing requirements of focus and inter-sentence integration occurred only in causal and not adversative connective contexts, confirming that processing difficulty occurred when a contrastive relationship was not possible. |
Arielle Borovsky; Ryan E. Peters Vocabulary size and structure affects real-time lexical recognition in 18-month-olds Journal Article In: PLoS ONE, vol. 14, no. 7, pp. e0219290, 2019. @article{Borovsky2019, The mature lexicon encodes semantic relations between words, and these connections can alternately facilitate and interfere with language processing. We explore the emergence of these processing dynamics in 18-month-olds (N = 79) using a novel approach that calculates individualized semantic structure at multiple granularities in participants' productive vocabularies. Participants completed two interleaved eye-tracked word recognition tasks involving semantically unrelated and related picture contexts, which sought to measure the impact of lexical facilitation and interference on processing, respectively. Semantic structure and vocabulary size differentially impacted processing in each task. Category level structure facilitated word recognition in 18-month-olds with smaller productive vocabularies, while overall lexical connectivity interfered with word recognition for toddlers with relatively larger vocabularies. The results suggest that, while semantic structure at multiple granularities is measurable even in small lexicons, mechanisms of semantic interference and facilitation are driven by the development of structure at different granularities. We consider these findings in light of accounts of adult word recognition that posit that different levels of structure index strong and weak activation from nearby and distant semantic neighbors. We also consider further directions for developmental change in these patterns. |
Yipu Wei; Willem M. Mak; Jacqueline Evers-Vermeul; Ted J. M. Sanders Causal connectives as indicators of source information: Evidence from the visual world paradigm Journal Article In: Acta Psychologica, vol. 198, pp. 1–13, 2019. @article{Wei2019, Causal relations can be presented as subjective, involving someone's reasoning, or objective, depicting a real-world cause-consequence relation. Subjective relations require longer processing times than objective relations. We hypothesize that the extra time is due to the involvement of a Subject of Consciousness (SoC) in the mental representation of subjective information. To test this hypothesis, we conducted a Visual World Paradigm eye-tracking experiment on Dutch and Chinese connectives that differ in the degree of subjectivity they encode. In both languages, subjective connectives triggered an immediate increase in attention to the SoC, compared to objective connectives. Only when the subjectivity information was not expressed by the connective did modal verbs presented later in the sentence induce an increase in looks at the SoC. This focus on the SoC due to the linguistic cues can be explained as the tracking of the information source in the situation models, which continues throughout the sentence. |
Shira Klorfeld-Auslender; Nitzan Censor Visual-oculomotor interactions facilitate consolidation of perceptual learning Journal Article In: Journal of Vision, vol. 19, no. 6, pp. 1–10, 2019. @article{KlorfeldAuslender2019, Visual skill learning is commonly considered a manifestation of brain plasticity. Following encoding, consolidation of the skill may result in between-session performance gains. A large body of studies has demonstrated that during the offline consolidation interval, the skill is susceptible to external inputs that modify the preformed representation of the memory, affecting future performance. However, while basic visual perceptual learning is thought to be mediated by sensory brain regions or their higher-order readout pathways, the possibility of visual-oculomotor interactions affecting the consolidation interval and reshaping visual learning remains uncharted. Motivated by findings mapping connections between oculomotor behavior and visual performance, we examined whether visual consolidation can be facilitated by visual-oculomotor interactions. To this aim, we paired reactivation of an oculomotor memory with consolidation of a typical visual texture discrimination task. Importantly, the oculomotor memory was encoded by learning of the pure motor component of the movement, removing visual cues. When brief reactivation of the oculomotor memory preceded the visual task, visual gains were substantially enhanced compared with those achieved by visual practice per se and were strongly related to the magnitude of oculomotor gains, suggesting that the brain utilizes oculomotor memory to enhance basic visual perception. |
Timothy G. Shepard; Fang Hou; Peter J. Bex; Luis A. Lesmes; Zhong-Lin Lu; Deyue Yu Assessing reading performance in the periphery with a Bayesian adaptive approach: The qReading method Journal Article In: Journal of Vision, vol. 19, no. 5, pp. 1–14, 2019. @article{Shepard2019, Reading is a crucial visual activity and a fundamental skill in daily life. Rapid Serial Visual Presentation (RSVP) is a text-presentation paradigm that has been extensively used in the laboratory to study basic characteristics of reading performance. However, measuring reading function (reading speed vs. print size) is time-consuming for RSVP reading using conventional testing procedures. In this study, we develop a novel method, qReading, utilizing the Bayesian adaptive testing framework to measure reading function in the periphery. We perform both a psychophysical experiment and computer simulations to validate the qReading method. In the experiment, words are presented using an RSVP paradigm at 10° in the lower visual field. The reading function obtained from the qReading method with 50 trials exhibits good agreement (i.e., high accuracy) with the reading function obtained from a conventional method (method of constant stimuli [MCS]) with 186 trials (mean root mean square error: 0.12 log10 units). Simulations further confirm that the qReading method provides an unbiased measure. The qReading procedure also demonstrates excellent precision (half width of 68.2% credible interval: 0.02 log10 units with 50 trials) compared to the MCS method (0.03 log10 units with 186 trials). This investigation establishes that the qReading method can adequately measure the reading function in the normal periphery with high accuracy, precision, and efficiency, and is a potentially valuable tool for both research and clinical assessments. |
Lili Yu; Jianping Xiong; Qiaoming Zhang; Denis Drieghe; Erik D. Reichle Eye-movement evidence for the mental representation of strokes in Chinese characters Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 45, no. 3, pp. 544–551, 2019. @article{Yu2019, Although strokes are the smallest identifiable units in Chinese words, the fact that they are often embedded within larger units (i.e., radicals and/or characters that comprise Chinese words) raises questions about how and even if strokes are separately represented in lexical memory. The present experiment examined these questions using a gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal preview of the first character of two-character target words. Relative to a normal preview, the removal of whole strokes was more disruptive (i.e., resulting in longer looking times on targets) than the removal of an equivalent amount of visual information (i.e., number of pixels) from strokes located either in similar locations or throughout the entire character. These findings suggest that strokes are represented as discrete functional units rather than visual features or integral parts of the radicals/characters in which they are embedded. We discuss the theoretical implications of this conclusion for models of Chinese word identification. |
S. Xue; Jana Lüdtke; Teresa Sylvester; Arthur M. Jacobs Reading Shakespeare sonnets: Combining quantitative narrative analysis and predictive modeling — an eye tracking study Journal Article In: Journal of Eye Movement Research, vol. 12, no. 5, pp. 1–16, 2019. @article{Xue2019, As a part of a larger interdisciplinary project on Shakespeare sonnets' reception (Jacobs et al., 2017; Xue et al., 2017), the present study analyzed the eye movement behavior of participants reading three of the 154 sonnets as a function of seven lexical features extracted via Quantitative Narrative Analysis (QNA). Using a machine learning-based predictive modeling approach, five ‘surface' features (word length, orthographic neighborhood density, word frequency, orthographic dissimilarity and sonority score) were detected as important predictors of total reading time and fixation probability in poetry reading. The fact that one phonological feature, i.e., sonority score, also played a role is in line with current theorizing on poetry reading. Our approach opens new ways for future eye movement research on reading poetic texts and other complex literary materials (cf. Jacobs, 2015c). |
Ming Yan; Jinger Pan; Wenshuo Chang; Reinhold Kliegl Read sideways or not: Vertical saccade advantage in sentence reading Journal Article In: Reading and Writing, vol. 32, no. 8, pp. 1911–1926, 2019. @article{Yan2019a, During the reading of alphabetic scripts and scene perception, eye movements are programmed more efficiently in horizontal direction than in vertical direction. We propose that such a directional advantage may be due to the overwhelming reading experience in the horizontal direction. Writing orientation is highly flexible for Traditional Chinese sentences. We compare horizontal and vertical eye movements during reading of such sentences and provide the first evidence of a text-orientation effect on eye-movement control during reading. In addition to equivalent reading speed in both directions, more fine-grained analyses demonstrate a tradeoff between longer fixation durations and better fixation locations in vertical than in horizontal reading. Our results suggest that with extensive reading experience, Traditional Chinese readers can generate saccades more efficiently in vertical than in horizontal direction. |
Ming Yan; Jinger Pan; Reinhold Kliegl Eye movement control in Chinese reading: A cross-sectional study Journal Article In: Developmental Psychology, vol. 55, no. 11, pp. 2275–2285, 2019. @article{Yan2019b, The present study explored the age-related changes of eye movement control in reading, that is, where to send the eyes and when to move them. Different orthographies present readers with somewhat different problems to solve, and this might, in turn, be reflected in different patterns of development of reading skill. Participants of different developmental levels (Grade 3
Ming Yan; Werner Sommer The effects of emotional significance of foveal words on the parafoveal processing of N + 2 words in reading Chinese sentences Journal Article In: Reading and Writing, vol. 32, no. 5, pp. 1243–1256, 2019. @article{Yan2019c, The emotional significance of stimuli has a strong effect on lexical processing across different reading paradigms. In the present study, we investigated whether foveal and parafoveal lexical processing is influenced by foveal emotional words (positive, negative, or neutral) during the reading of Chinese sentences. We tested word N + 2 preview effect by manipulating the visibility of the upcoming word, located two words away from the foveal word. Processing benefits due to valid parafoveal preview were found for all three valence classes of foveal words. Most interestingly, for negative as compared to both neutral and positive foveal target words, the parafoveal preview effect was reduced when preview duration had been long. These findings suggest that negative words are more likely to attract readers' attention, narrowing the attentional spotlight to the fovea as affective information becomes activated during word processing. We discuss implications for the notion of attention attraction due to emotional content. |
Ming Yan; Aiping Wang; Hosu Song; Reinhold Kliegl Parafoveal processing of phonology and semantics during the reading of Korean sentences Journal Article In: Cognition, vol. 193, pp. 104009, 2019. @article{Yan2019, The present study sets out to address two fundamental questions in the reading of continuous texts: whether semantic and whether phonological information from upcoming words can be accessed during natural reading. In the present study we investigated parafoveal processing during the reading of Korean sentences, manipulating semantic and phonological information from parafoveal preview words. In addition to the first evidence for a semantic preview effect in Korean, we found that Korean readers have stronger and more long-lasting phonological than semantic activation from parafoveal words in second-pass reading. The present study provides an example of how the human mind can flexibly adjust processing priority for different types of information based on the linguistic environment. |
Angele Yazbec; Michael P. Kaschak; Arielle Borovsky Developmental timescale of rapid adaptation to conflicting cues in real-time sentence processing Journal Article In: Cognitive Science, vol. 43, no. 1, pp. 1–41, 2019. @article{Yazbec2019, Children and adults use established global knowledge to generate real-time linguistic predictions, but less is known about how listeners generate predictions in circumstances that semantically conflict with long-standing event knowledge. We explore these issues in adults and 5- to 10-year-old children using an eye-tracked sentence comprehension task that tests real-time activation of unexpected events that had been previously encountered in brief stories. Adults generated predictions for these previously unexpected events based on these discourse cues alone, whereas children overall did not override their established global knowledge to generate expectations for semantically conflicting material; however, they did show an increased ability to integrate discourse cues to generate appropriate predictions for sentential endings. These results indicate that the ability to rapidly integrate and deploy semantically conflicting knowledge has a long developmental trajectory, with adult-like patterns not emerging until later in childhood. |
Miao Yu; Brandon Sommers; Yuxia Yin; Guoli Yan Effects of implicit prosody and semantic bias on the resolution of ambiguous Chinese phrases Journal Article In: Frontiers in Psychology, vol. 10, pp. 1308, 2019. @article{Yu2019a, By manipulating the location of prosodic boundary and the semantic bias of the ambiguous "V+N1+de+N2" phrase, which is composed of one verb (V), one noun (N1), one functional word (de), and another noun (N2), this study investigated how prosodic boundary and the semantic bias affect the processing of temporarily ambiguous sentences formed by the ambiguous phrase "V+N1+de+N2" through an eye movement experiment. We found the effect of prosodic boundary in the late processing stage and observed an interaction between prosodic boundary and the semantic bias of ambiguous phrases as well. The participants required more time for fixation and more regressions occurred when the meaning of the ambiguous phrase guided by prosodic boundary was inconsistent with context, especially when the ambiguous phrase was biased to the narrative-object phrase. This result suggests that prosodic boundary affects the processing of temporarily ambiguous sentences and is influenced by the semantic bias of the ambiguous phrase. These findings provide further evidence from Chinese that indicate that implicit prosody plays a general role in language comprehension. |
Katharina Zahner; Sophie Kutscheid; Bettina Braun Alignment of f0 peak in different pitch accent types affects perception of metrical stress Journal Article In: Journal of Phonetics, vol. 74, pp. 75–95, 2019. @article{Zahner2019, In intonation languages, pitch accents are associated with stressed syllables; therefore, accentuation is a sufficient cue to the position of metrical stress in perception. This paper investigates how stress perception in German is affected by different pitch accent types (with different f0 alignments). Experiment 1 showed more errors in stress identification when f0 peaks and stressed syllables were not aligned – despite phonological association of pitch accent and stressed syllable. Erroneous responses revealed a response bias towards the syllable with the f0 peak. In a visual-world eye-tracking study (Experiment 2), listeners fixated a stress competitor with initial stress more when the spoken target, which had penultimate stress, was realized with an early-peak accent (f0 peak preceding stressed syllable), compared to a condition with the f0 peak on the stressed syllable. Hence, high-pitched unstressed syllables are temporarily interpreted as stressed – a process directly affecting lexical activation. To investigate whether this stress competitor activation is guided by the frequent co-occurrence of high f0 and lexical stress, Experiment 3 increased the frequency of low-pitched stressed syllables in the immediate input. The effect of intonation on competitor fixations disappeared. Our findings are discussed with respect to a frequency-based mechanism and their implications for the nature of f0 processing. |
Chuanli Zang; Li Zhang; Manman Zhang; Xuejun Bai; Guoli Yan; Xiaoming Jiang; Zhewen He; Xiaolin Zhou In: Frontiers in Psychology, vol. 10, pp. 2211, 2019. @article{Zang2019, An event-related potential (ERP) study demonstrated that construction-based pragmatic constraints in Chinese (e.g., lian…dou that constrains a low-likelihood event and is similar to even in English) can rapidly influence sentence comprehension and the mismatch of such constraints would lead to increased neural activity on the mismatching word. Here we examine to what extent readers' eye movements can instantly reveal the difficulties of mismatching constraints when participants read sentences with the structure lian + determiner phrase + object noun + subject noun + dou + verb phrase (VP) + final commenting clause. By embedding high-likelihood or neutral events in the construction, we created incongruent and underspecified sentences and compared such sentences with congruent ones describing events of low expectedness. Relative to congruent sentences, the VP region of incongruent sentences showed no significant differences on first-pass reading time measures, but the total fixation duration was reliably longer. Moreover, readers made more regressions from the VP and the sentence-final region to previous regions in the incongruent than the congruent condition. These findings suggest that the effect of pragmatic constraints is observable during naturalistic sentence reading, reflecting the activation of the construction-based pragmatic information for the late integration of linguistic and extra-linguistic information at sentential level. |
Andrea M. Zawoyski; Scott P. Ardoin Using eye-tracking technology to examine the impact of question format on reading behavior in elementary students Journal Article In: School Psychology Review, vol. 48, no. 4, pp. 320–332, 2019. @article{Zawoyski2019, Reading comprehension assessments often include multiple-choice (MC) questions, but some researchers doubt their validity in measuring comprehension. Consequently, new assessments may include more short-answer (SA) questions. The current study contributes to the research comparing MC and SA questions by evaluating the effects of anticipated question format on elementary students' reading behavior. Third- and fourth-grade participants were divided into the MC (n = 43) or SA condition (n = 44) and expected to answer questions consistent with their group assignment. Eye movements (EMs) were analyzed across the passage and on areas significant to its meaning. Correlational analyses between EMs and reading measures were conducted. Findings support modification of question format in reading assessments. Implications for school psychologists, teachers, and EM researchers are addressed. |
Sandra A. Zerkle; Jennifer E. Arnold Does planning explain why predictability affects reference production? Journal Article In: Dialogue and Discourse, vol. 10, no. 2, pp. 34–55, 2019. @article{Zerkle2019, How does thematic role predictability affect reference production? This study tests a planning facilitation hypothesis: that the predictability effect on reference form can be explained in terms of the time course of utterance planning. In a discourse production task, participants viewed two sequential event pictures, listened to a description of the first picture (depicting a transfer event between two characters), and then provided a description of the second picture (continuing with one thematic role character, either goal or source). We replicated previous findings that goal continuations lead to more reduced forms of reference and shorter latency to begin speaking than source continuations. Additionally, we tracked speakers' eye movements in two periods of utterance planning, early vs. late. We found that 1) early planning supports the use of reduced forms but is not affected by thematic role; 2) thematic role only affects late planning; and 3) in contrast with our hypothesis, planning does not account for predictability effects on reduced forms. We then speculate that discourse connectedness drives the thematic role predictability effect on reference form choice. |
Sainan Zhao; Lin Li; Min Chang; Qianqian Xu; Kuo Zhang; Jingxin Wang; Kevin B. Paterson Older adults make greater use of word predictability in Chinese reading Journal Article In: Psychology and Aging, vol. 34, no. 6, pp. 780–790, 2019. @article{Zhao2019, An influential account of normative aging effects on reading holds that older adults make greater use of contextual predictability to facilitate word identification. However, supporting evidence is scarce. Accordingly, we used measures of eye movements to experimentally investigate age differences in word predictability effects in Chinese reading, as this nonalphabetic language has characteristics that may promote such effects. Word-skipping rates were higher and reading times lower for more highly predictable words for both age groups. Effects of word predictability on word skipping did not differ across the 2 adult age groups. However, word predictability effects in reading time measures sensitive to both lexical identification (i.e., gaze duration) and contextual integration (i.e., regression-path reading times) were larger for the older than younger adults. Our findings therefore reveal that older Chinese readers make greater use of a word's predictability to facilitate both its lexical identification and integration with the prior sentence context. |
Manman Zhang; Simon P. Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 45, no. 6, pp. 812–825, 2019. @article{Zhang2019e, Whether increased foveal load causes a reduction of parafoveal processing remains equivocal. The present study examined foveal load effects on parafoveal processing in natural Chinese reading. Parafoveal preview of a single-character parafoveal target word was manipulated by using the boundary paradigm (Rayner, 1975; pseudocharacter or identity previews) under high foveal load (low-frequency pretarget word) compared with low foveal load (high-frequency pretarget word) conditions. Despite an effective manipulation of foveal processing load, we obtained no evidence of any modulatory influence on parafoveal processing in first-pass reading times. However, our results clearly showed that saccadic targeting, in relation to forward saccade length from the pretarget word and in relation to target word skipping, was influenced by foveal load and this influence occurred independent of parafoveal preview. Given the optimal experimental conditions, these results provide very strong evidence that preview benefit is not modulated by foveal lexical load during Chinese reading. |
Wei Zhou; Yadong Gao; Yulin Chang; Mengmeng Su Hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance Journal Article In: Journal of General Psychology, vol. 146, no. 1, pp. 34–49, 2019. @article{Zhou2019b, Hemispheric predominance has been well documented in the visual perception of alphabetic words. However, the hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance are far from clear. In the divided visual field paradigm, participants were required to judge the orthography, phonology, or semantics of Chinese characters, which were presented randomly in the left or right visual field. The results showed a right visual field/left hemispheric superiority in the phonological judgment task, but no hemispheric advantage in the orthographic or semantic task was found. In addition, reaction times in the right visual field for phonological and semantic tasks were significantly correlated with the reading test score. These results suggest that both hemispheres are involved in the orthographic and semantic processing of Chinese characters, and that the left lateralized phonological processing is important for Chinese fluent reading. |