@article{Abashidze2019,
title = {Anticipating a future versus integrating a recent event? Evidence from eye-tracking},
author = {Dato Abashidze and Maria Nella Carminati and Pia Knoeferle},
doi = {10.1016/j.actpsy.2019.102916},
year = {2019},
date = {2019-09-01},
journal = {Acta Psychologica},
volume = {200},
pages = {102916},
publisher = {Elsevier},
abstract = {When comprehending a spoken sentence that refers to a visually-presented event, comprehenders both integrate their current interpretation of language with the recent event and develop expectations about future event possibilities. Tense cues can disambiguate this linking, but temporary ambiguity in these cues may lead comprehenders to also rely on further, experience-based (e.g., frequency or an actor's gaze) cues. How comprehenders reconcile these different cues in real time is an open issue. Extant results suggest that comprehenders preferentially relate their unfolding interpretation to a recent event by inspecting its target object. We investigated to what extent this recent-event preference could be overridden by short-term experiential and situation-specific cues. In Experiments 1–2 participants saw substantially more future than recent events and listened to more sentences about future-events (75% in Experiment 1 and 88% in Experiment 2). Experiment 3 cued future target objects and event possibilities via an actor's gaze. The event frequency increase yielded a reduction in the recent event inspection preference early during sentence processing in Experiments 1–2 compared with Experiment 3 (where event frequency and utterance tense were balanced) but did not eliminate the overall recent-event preference. Actor gaze also modulated the recent-event preference, and jointly with future tense led to its reversal in Experiment 3. However, our results showed that people overall preferred to focus on recent (vs. future) events in their interpretation, suggesting that while two cues (actor gaze and short-term event frequency) can partially override the recent-event preference, the latter still plays a key role in shaping participants' interpretation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Abbott2015a,
title = {Skipping syntactically illegal the previews: The role of predictability.},
author = {Matthew J Abbott and Bernhard Angele and Danbi Y Ahn and Keith Rayner},
doi = {10.1037/xlm0000142},
year = {2015},
date = {2015-11-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {6},
pages = {1703--1714},
abstract = {Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Abbott2015b,
title = {The effect of plausibility on eye movements in reading: Testing E-Z Reader's null predictions},
author = {Matthew J Abbott and Adrian Staub},
doi = {10.1016/j.jml.2015.07.002},
year = {2015},
date = {2015-01-01},
journal = {Journal of Memory and Language},
volume = {85},
pages = {76--87},
publisher = {Elsevier Inc.},
abstract = {The E-Z Reader 10 model of eye movements in reading (Reichle, Warren, & McConnell, 2009) posits that the process of word identification strictly precedes the process of integration of a word into its syntactic and semantic context. The present study reports a single large-scale (N=112) eyetracking experiment in which the frequency and plausibility of a target word in each sentence were factorially manipulated. The results were consistent with E-Z Reader's central predictions: frequency but not plausibility influenced the probability that the word was skipped over by the eyes rather than directly fixated, and the two variables had additive, not interactive, effects on all reading time measures. Evidence in favor of null effects and null interactions was obtained by computing Bayes factors, using the default priors and sampling methods for ANOVA models implemented by Rouder, Morey, Speckman, and Province (2012). The results suggest that though a word's plausibility may have a measurable influence as early as the first fixation duration on the target word, in fact plausibility may be influencing only a post-lexical processing stage, rather than lexical identification itself.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Ablinger2013,
title = {Recovery in a letter-by-letter reader: More efficiency at the expense of normal reading strategy},
author = {Irene Ablinger and Walter Huber and Kerstin I Schattka and Ralph Radach},
doi = {10.1080/13554794.2012.667119},
year = {2013},
date = {2013-01-01},
journal = {Neurocase},
volume = {19},
number = {3},
pages = {236--255},
abstract = {Although changes in reading performance of recovering letter-by-letter readers have been described in some detail, no prior research has provided an in-depth analysis of the underlying adaptive word processing strategies. Our work examined the reading performance of a letter-by-letter reader, FH, over a period of 15 months, using eye movement methodology to delineate the recovery process at two different time points (T1, T2). A central question is whether recovery is characterized either by moving back towards normal word processing or by refinement and possibly automatization of an existing pathological strategy that was developed in response to the impairment. More specifically, we hypothesized that letter-by-letter reading may be executed with at least four different strategies and our work sought to distinguish between these alternatives. During recovery significant improvements in reading performance were achieved. A shift of fixation positions from the far left to the extreme right of target words was combined with many small and very few longer regressive saccades. Apparently, ‘letter-by-letter reading’ took the form of local clustering, most likely corresponding to the formation of sublexical units of analysis. This pattern was more pronounced at T2, suggesting that improvements in reading efficiency may come at the expense of making it harder to eventually return to normal reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Ablinger2014,
title = {Eye movement analyses indicate the underlying reading strategy in the recovery of lexical readers},
author = {Irene Ablinger and Walter Huber and Ralph Radach},
doi = {10.1080/02687038.2014.894960},
year = {2014},
date = {2014-01-01},
journal = {Aphasiology},
volume = {28},
number = {6},
pages = {640--657},
abstract = {Background: Psycholinguistic error analysis of dyslexic responses in various reading tasks provides the primary basis for clinically discriminating subtypes of pathological reading. Within this framework, phonology-related errors are indicative of a sequential word processing strategy, whereas lexical and semantic errors are associated with a lexical reading strategy. Despite the large number of published intervention studies, relatively little is known about changes in error distributions during recovery in dyslexic patients. Aims: The main purpose of the present work was to extend the scope of research on the time course of recovery in readers with acquired dyslexia, using eye tracking methodology to examine word processing in real time. The guiding hypothesis was that in lexical readers a reduction of lexical errors and an emerging predominant production of phonological errors should be associated with a change to a more segmental moment-to-moment reading behaviour. Methods & Procedures: Five patients participated in an eye movement supported reading intervention, where both lexical and segmental reading was facilitated. Reading performance was assessed before (T1) and after (T2) therapy intervention via recording of eye movements. Analyses included a novel way to examine the spatiotemporal dynamics of processing using distributions of fixation positions at different time intervals. These subdistributions reveal the gradual shifting of fixation positions during word processing, providing an adequate metric for objective classification of online reading strategies. Outcome & Results: Therapy intervention led to improved reading accuracy in all subjects. In three of five participants, analyses revealed a restructuring in the underlying reading mechanisms from predominantly lexical to more segmental word processing. In contrast, two subjects maintained their lexical reading procedures. Importantly, the fundamental assumption that a high number of phonologically based reading errors must be associated with segmental word processing routines, while the production of lexical errors is indicative of a holistic reading strategy, could not be verified. Conclusions: Our results indicate that despite general improvements in reading performance, only some patients reorganised their word identification process. Contradictory data raise doubts about the validity of psycholinguistic error analysis as an exclusive indicator of changes in reading strategy. We suggest combining this traditional approach with innovative eye tracking methodology in the interest of more comprehensive diagnostic strategies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Ablinger2014a,
title = {An eye movement based reading intervention in lexical and segmental readers with acquired dyslexia},
author = {Irene Ablinger and Kerstin von Heyden and Christian Vorstius and Katja Halm and Walter Huber and Ralph Radach},
doi = {10.1080/09602011.2014.913530},
year = {2014},
date = {2014-01-01},
journal = {Neuropsychological Rehabilitation},
volume = {24},
number = {6},
pages = {833--867},
abstract = {Due to their brain damage, aphasic patients with acquired dyslexia often rely to a greater extent on lexical or segmental reading procedures. Thus, therapy intervention is mostly targeted on the more impaired reading strategy. In the present work we introduce a novel therapy approach based on real-time measurement of patients' eye movements as they attempt to read words. More specifically, an eye movement contingent technique of stepwise letter de-masking was used to support sequential reading, whereas fixation-dependent initial masking of non-central letters stimulated a lexical (parallel) reading strategy. Four lexical and four segmental readers with acquired central dyslexia received our intensive reading intervention. All participants showed remarkable improvements as evident in reduced total reading time, a reduced number of fixations per word and improved reading accuracy. Both types of intervention led to item-specific training effects in all subjects. A generalisation to untrained items was only found in segmental readers after the lexical training. Eye movement analyses were also used to compare word processing before and after therapy, indicating that all patients, with one exclusion, maintained their preferred reading strategy. However, in several cases the balance between sequential and lexical processing became less extreme, indicating a more effective individual interplay of both word processing routes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Ablinger2016,
title = {Diverging receptive and expressive word processing mechanisms in a deep dyslexic reader},
author = {Irene Ablinger and Ralph Radach},
doi = {10.1016/j.neuropsychologia.2015.11.023},
year = {2016},
date = {2016-01-01},
journal = {Neuropsychologia},
volume = {81},
pages = {12--21},
publisher = {Elsevier},
abstract = {We report on KJ, a patient with acquired dyslexia due to cerebral artery infarction. He represents an unusually clear case of an "output" deep dyslexic reader, with a distinct pattern of pure semantic reading. According to current neuropsychological models of reading, the severity of this condition is directly related to the degree of impairment in semantic and phonological representations and the resulting imbalance in the interaction between the two word processing pathways. The present work sought to examine whether an innovative eye movement supported intervention combining lexical and segmental therapy would strengthen phonological processing and lead to an attenuation of the extreme semantic over-involvement in KJ's word identification process. Reading performance was assessed before (T1), between (T2), and after (T3) therapy using both analyses of linguistic errors and word viewing patterns. Therapy resulted in improved reading aloud accuracy along with a change in error distribution that suggested a return to more sequential reading. Interestingly, this was in contrast to the dynamics of moment-to-moment word processing, as eye movement analyses still suggested a predominantly holistic strategy, even at T3. So, in addition to documenting the success of the therapeutic intervention, our results call for a theoretically important conclusion: Real-time letter and word recognition routines should be considered separately from properties of the verbal output. Combining both perspectives may provide a promising strategy for future assessment and therapy evaluation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Ablinger2019,
title = {A combined lexical and segmental therapy approach in a participant with pure alexia},
author = {Irene Ablinger and Anne Friede and Ralph Radach},
doi = {10.1080/02687038.2018.1485073},
year = {2019},
date = {2019-01-01},
journal = {Aphasiology},
volume = {33},
number = {5},
pages = {579--605},
publisher = {Routledge},
abstract = {Background: Pure alexia is characterized by effortful left-to-right word processing, leading to a pathological length effect during reading aloud. Results of previous therapy outcome research suggest that patients with pure alexia tend to develop and maintain an adaptive sequential reading strategy in an effort to cope with their severe deficit and at least master a slow and laborious reading mode. Aim: We applied a theory-based, strategy-driven and eye-movement-supported therapy approach to HC, a participant with pure alexia. Our intention was to help optimize his very persistent sequential reading strategy, while concurrently facilitating fast parallel word processing. Methods & Procedures: Therapy included a systematic combination of segmental and holistic reading as well as text reading components. Exposure duration and font size were gradually reduced. Following a single case experimental reading design with follow-up testing, we assessed reading performance at four testing points focusing on analyses of linguistic errors and word viewing patterns. Outcomes & Results: With respect to reading accuracy and oculomotor measures, the combined therapy approach resulted in sustained training effects evident in significant improvements for trained and untrained word materials. Text reading intervention only led to therapy specific improvements. Spatio-temporal analyses of eye fixation positions revealed a more and more efficient adaptive strategy to compensate for reading difficulties. However, spatial changes in fixation position were less pronounced at T4, suggesting some diminishing of success at follow-up. Conclusions: Our results underscore the need for a continuous systematic training of underlying reading strategies in pure alexia to develop and sustain more economic reading procedures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Acha2008,
title = {The effect of neighborhood frequency in reading: Evidence with transposed-letter neighbors},
author = {Joana Acha and Manuel Perea},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
pages = {290--300},
abstract = {Transposed-letter effects (e.g., jugde activates judge) pose serious problems for models of visual-word recognition that use position-specific coding schemes. However, even though the evidence of transposed-letter effects with nonword stimuli is strong, the evidence for word stimuli is scarce and inconclusive. The present experiment examined the effect of neighborhood frequency during normal silent reading using transposed-letter neighbors (e.g., silver, sliver). Two sets of low-frequency words were created (equated in the number of substitution neighbors, word frequency, and number of letters), which were embedded in sentences. In one set, the target word had a higher frequency transposed-letter neighbor, and in the other set, the target word had no transposed-letter neighbors. An inhibitory effect of neighborhood frequency was observed in measures that reflect late processing in words (number of regressions back to the target word, and total time). We examine the implications of these findings for models of visual-word recognition and reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zaeinab Afsari; José P Ossandón; Peter König The dynamic effect of reading direction habit on spatial asymmetry of image perception Journal Article In: Journal of Vision, vol. 16, no. 11, pp. 1–21, 2016. @article{Afsari2016,
title = {The dynamic effect of reading direction habit on spatial asymmetry of image perception},
author = {Zaeinab Afsari and José P Ossandón and Peter König},
doi = {10.1167/16.11.8},
year = {2016},
date = {2016-01-01},
journal = {Journal of Vision},
volume = {16},
number = {11},
pages = {1--21},
abstract = {Exploration of images after stimulus onset is initially biased to the left. Here, we studied the causes of such an asymmetry and investigated effects of reading habits, text primes, and priming by systematically biased eye movements on this spatial bias in visual exploration. Bilinguals first read text primes with right-to-left (RTL) or left-to-right (LTR) reading directions and subsequently explored natural images. In Experiment 1, native RTL speakers showed a leftward free-viewing shift after reading LTR primes but a weaker rightward bias after reading RTL primes. This demonstrates that reading direction dynamically influences the spatial bias. However, native LTR speakers who learned an RTL language late in life showed a leftward bias after reading either LTR or RTL primes, which suggests the role of habit formation in the production of the spatial bias. In Experiment 2, LTR bilinguals showed a slightly enhanced leftward bias after reading LTR text primes in their second language. This might contribute to the differences of native RTL and LTR speakers observed in Experiment 1. In Experiment 3, LTR bilinguals read normal (LTR, habitual reading) and mirrored left-to-right (mLTR, nonhabitual reading) texts. We observed a strong leftward bias in both cases, indicating that the bias direction is influenced by habitual reading direction and is not secondary to the actual reading direction. This is confirmed in Experiment 4, in which LTR participants were asked to follow RTL and LTR moving dots prior to image presentation and showed no change in the normal spatial bias. In conclusion, the horizontal bias is a dynamic property and is modulated by habitual reading direction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zaeinab Afsari; Ashima Keshava; José P Ossandón; Peter König Interindividual differences among native right-to-left readers and native left-to-right readers during free viewing task Journal Article In: Visual Cognition, vol. 26, no. 6, pp. 430–441, 2018. @article{Afsari2018,
title = {Interindividual differences among native right-to-left readers and native left-to-right readers during free viewing task},
author = {Zaeinab Afsari and Ashima Keshava and José P Ossandón and Peter König},
doi = {10.1080/13506285.2018.1473542},
year = {2018},
date = {2018-01-01},
journal = {Visual Cognition},
volume = {26},
number = {6},
pages = {430--441},
abstract = {Human visual exploration is not homogeneous but displays spatial biases. Specifically, early after the onset of a visual stimulus, the majority of eye movements target the left visual space. This horizontal asymmetry of image exploration is rather robust with respect to multiple image manipulations, yet can be dynamically modulated by preceding text primes. This characteristic points to an involvement of reading habits in the deployment of visual attention. Here, we report data of native right-to-left (RTL) readers with a larger variation and stronger modulation of horizontal spatial bias in comparison to native left-to-right (LTR) readers after preceding text primes. To investigate the influences of biological and cultural factors, we measure the correlation of the modulation of the horizontal spatial bias for native RTL readers and native LTR readers with multiple factors: age, gender, second language proficiency, and age at which the second language was acquired. The results demonstrate only weak or no correlations between the magnitude of the horizontal bias and the previously mentioned factors. We conclude that the spatial bias of viewing behaviour for native RTL readers is more variable than for native LTR readers, and this variance could not be demonstrated to be associated with interindividual differences. We speculate that the strength of habit and/or interindividual differences in structural and functional brain regions underlie the spatial bias among native RTL readers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Luis Aguado; Karisa B Parkington; Teresa Dieguez-Risco; José A Hinojosa; Roxane J Itier Joint modulation of facial expression processing by contextual congruency and task demands Journal Article In: Brain Sciences, vol. 9, pp. 1–20, 2019. @article{Aguado2019,
title = {Joint modulation of facial expression processing by contextual congruency and task demands},
author = {Luis Aguado and Karisa B Parkington and Teresa Dieguez-Risco and José A Hinojosa and Roxane J Itier},
doi = {10.3390/brainsci9050116},
year = {2019},
date = {2019-01-01},
journal = {Brain Sciences},
volume = {9},
pages = {1--20},
abstract = {Faces showing expressions of happiness or anger were presented together with sentences that described happiness-inducing or anger-inducing situations. Two main variables were manipulated: (i) congruency between contexts and expressions (congruent/incongruent) and (ii) the task assigned to the participant, discriminating the emotion shown by the target face (emotion task) or judging whether the expression shown by the face was congruent or not with the context (congruency task). Behavioral and electrophysiological results (event-related potentials (ERP)) showed that processing facial expressions was jointly influenced by congruency and task demands. ERP results revealed task effects at frontal sites, with larger positive amplitudes between 250–450 ms in the congruency task, reflecting the higher cognitive effort required by this task. Effects of congruency appeared at latencies and locations corresponding to the early posterior negativity (EPN) and late positive potential (LPP) components that have previously been found to be sensitive to emotion and affective congruency. The magnitude and spatial distribution of the congruency effects varied depending on the task and the target expression. These results are discussed in terms of the modulatory role of context on facial expression processing and the different mechanisms underlying the processing of expressions of positive and negative emotions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Carlos Aguilar; Eric Castet Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people Journal Article In: PLoS ONE, vol. 12, no. 4, pp. e0174910, 2017. @article{Aguilar2017,
title = {Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people},
author = {Carlos Aguilar and Eric Castet},
doi = {10.1371/journal.pone.0174910},
year = {2017},
date = {2017-01-01},
journal = {PLoS ONE},
volume = {12},
number = {4},
pages = {e0174910},
abstract = {People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems): conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Danbi Ahn; Matthew J Abbott; Keith Rayner; Victor S Ferreira; Tamar H Gollan Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks Journal Article In: Journal of Neurolinguistics, vol. 54, pp. 1–22, 2020. @article{Ahn2020a,
title = {Minimal overlap in language control across production and comprehension: Evidence from read-aloud versus eye-tracking tasks},
author = {Danbi Ahn and Matthew J Abbott and Keith Rayner and Victor S Ferreira and Tamar H Gollan},
doi = {10.1016/j.jneuroling.2019.100885},
year = {2020},
date = {2020-01-01},
journal = {Journal of Neurolinguistics},
volume = {54},
pages = {1--22},
publisher = {Elsevier},
abstract = {Bilinguals are remarkable at language control—switching between languages only when they want. However, language control in production can involve switch costs. That is, switching to another language takes longer than staying in the same language. Moreover, bilinguals sometimes produce language intrusion errors, mistakenly producing words in an unintended language (e.g., Spanish-English bilinguals saying “pero” instead of “but”). Switch costs are also found in comprehension. For example, reading times are longer when bilinguals read sentences with language switches compared to sentences with no language switches. Given that both production and comprehension involve switch costs, some language-control mechanisms might be shared across modalities. To test this, we compared language switch costs found in eye-movement measures during silent sentence reading (comprehension) and intrusion errors produced when reading aloud switched words in mixed-language paragraphs (production). Bilinguals who made more intrusion errors during the read-aloud task did not show different switch cost patterns in most measures in the silent-reading task, except on skipping rates. We suggest that language switching is mostly controlled by separate, modality-specific processes in production and comprehension, although some points of overlap might indicate the role of domain-general control and how it can influence individual differences in bilingual language control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Denis Alamargot; Sylvie Plane; Eric Lambert; David Chesnet Using eye and pen movements to trace the development of writing expertise: Case studies of a 7th, 9th and 12th grader, graduate student, and professional writer Journal Article In: Reading and Writing, vol. 23, no. 7, pp. 853–888, 2010. @article{Alamargot2010,
title = {Using eye and pen movements to trace the development of writing expertise: Case studies of a 7th, 9th and 12th grader, graduate student, and professional writer},
author = {Denis Alamargot and Sylvie Plane and Eric Lambert and David Chesnet},
doi = {10.1007/s11145-009-9191-9},
year = {2010},
date = {2010-01-01},
journal = {Reading and Writing},
volume = {23},
number = {7},
pages = {853--888},
abstract = {This study was designed to enhance our understanding of the changing relationship between low- and high-level writing processes in the course of development. A dual description of writing processes was undertaken, based on (a) the respective time courses of these processes, as assessed by an analysis of eye and pen movements, and (b) the semantic characteristics of the writers' scripts. To conduct a more fine-grained description of processing strategies, a “case study” approach was adopted, whereby a comprehensive range of measures was used to assess processes within five writers with different levels of expertise. The task was to continue writing a story based on an excerpt from a source document (incipit). The main results showed two developmental patterns linked to expertise: (a) a gradual acceleration in low- and high-level processing (pauses, flow), associated with (b) changes in the way the previous text was (re)read.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Denis Alamargot; Lisa Flouret; Denis Larocque; Gilles Caporossi; Virginie Pontart; Carmen Paduraru; Pauline Morisset; Michel Fayol Successful written subject–verb agreement: An online analysis of the procedure used by students in Grades 3, 5 and 12 Journal Article In: Reading and Writing, vol. 28, no. 3, pp. 291–312, 2015. @article{Alamargot2015,
title = {Successful written subject–verb agreement: An online analysis of the procedure used by students in Grades 3, 5 and 12},
author = {Denis Alamargot and Lisa Flouret and Denis Larocque and Gilles Caporossi and Virginie Pontart and Carmen Paduraru and Pauline Morisset and Michel Fayol},
doi = {10.1007/s11145-014-9525-0},
year = {2015},
date = {2015-01-01},
journal = {Reading and Writing},
volume = {28},
number = {3},
pages = {291--312},
abstract = {This study was designed to (1) investigate the procedure responsible for successful written subject–verb agreement, and (2) describe how it develops across grades. Students in Grades 3, 5 and 12 were asked to read noun–noun–verb sentences aloud (e.g., Le chien des voisins mange [The dog of the neighbors eats]) and write out the verb inflections. Some of the nouns differed in number, thus inducing attraction errors. Results showed that third graders were successful because they implemented a declarative procedure requiring regressive fixations on the subject noun while writing out the inflection. A dual-step procedure (Hupet, Schelstraete, Demaeght, & Fayol, 1996) emerged in Grade 5, and was fully efficient by Grade 12. This procedure, which couples an automatized agreement rule with a monitoring process operated within working memory (without the need for regressive fixations), was found to trigger a mismatch asymmetry (singular–plural > plural–singular) in Grade 5. The time course of written subject–verb agreement, the origin of agreement errors and differences between the spoken and written modalities are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Noor Z Al Dahhan; George K Georgiou; Rickie Hung; Douglas P Munoz; Rauno Parrila; John R Kirby Eye movements of university students with and without reading difficulties during naming speed tasks Journal Article In: Annals of Dyslexia, vol. 64, no. 2, pp. 137–150, 2014. @article{AlDahhan2014,
title = {Eye movements of university students with and without reading difficulties during naming speed tasks},
author = {Noor Z {Al Dahhan} and George K Georgiou and Rickie Hung and Douglas P Munoz and Rauno Parrila and John R Kirby},
doi = {10.1007/s11881-013-0090-z},
year = {2014},
date = {2014-01-01},
journal = {Annals of Dyslexia},
volume = {64},
number = {2},
pages = {137--150},
abstract = {Although naming speed (NS) has been shown to predict reading into adulthood and differentiate between adult dyslexics and controls, the question remains why NS is related to reading. To address this question, eye movement methodology was combined with three letter NS tasks (the original letter NS task by Denckla & Rudel, Cortex 10:186-202, 1974, and two more developed by Compton, The Journal of Special Education 37:81-94, 2003, with increased phonological or visual similarity of the letters). Twenty undergraduate students with reading difficulties (RD) and 27 without (NRD) were tested on letter NS tasks (eye movements were recorded during the NS tasks), phonological processing, and reading fluency. The results indicated first that the RD group was slower than the NRD group on all NS tasks with no differences between the NS tasks. In addition, the NRD group had shorter fixation durations, longer saccades, and fewer saccades and fixations than the RD group. Fixation duration and fixation count were significant predictors of reading fluency even after controlling for phonological processing measures. Taken together, these findings suggest that the NS-reading relationship is due to two factors: less able readers require more time to acquire stimulus information during fixation and they make more saccades.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Noor Z Al Dahhan; John R Kirby; Donald C Brien; Douglas P Munoz Eye movements and articulations during a letter naming speed task: Children with and without Dyslexia Journal Article In: Journal of Learning Disabilities, vol. 50, no. 3, pp. 275–285, 2017. @article{AlDahhan2017,
title = {Eye movements and articulations during a letter naming speed task: Children with and without Dyslexia},
author = {Noor Z {Al Dahhan} and John R Kirby and Donald C Brien and Douglas P Munoz},
doi = {10.1177/0022219415618502},
year = {2017},
date = {2017-01-01},
journal = {Journal of Learning Disabilities},
volume = {50},
number = {3},
pages = {275--285},
abstract = {Naming speed (NS) refers to how quickly and accurately participants name a set of familiar stimuli (e.g., letters). NS is an established predictor of reading ability, but controversy remains over why it is related to reading. We used three techniques (stimulus manipulations to emphasize phonological and/or visual aspects, decomposition of NS times into pause and articulation components, and analysis of eye movements during task performance) with three groups of participants (children with dyslexia, ages 9–10; chronological-age [CA] controls, ages 9–10; reading-level [RL] controls, ages 6–7) to examine NS and the NS–reading relationship. Results indicated (a) for all groups, increasing visual similarity of the letters decreased letter naming efficiency and increased naming errors, saccades, regressions (rapid eye movements back to letters already fixated), pause times, and fixation durations; (b) children with dyslexia performed like RL controls and were less efficient, had longer articulation times, pause times, fixation durations, and made more errors and regressions than CA controls; and (c) pause time and fixation duration were the most powerful predictors of reading. We conclude that NS is related to reading via fixation durations and pause times: Longer fixation durations and pause times reflect the greater amount of time needed to acquire visual/orthographic information from stimuli and prepare the correct response.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Noor Z Al Dahhan; John R Kirby; Ying Chen; Donald C Brien; Douglas P Munoz Examining the neural and cognitive processes that underlie reading through naming speed tasks Journal Article In: European Journal of Neuroscience, vol. 51, no. 11, pp. 2277–2298, 2020. @article{AlDahhan2020,
title = {Examining the neural and cognitive processes that underlie reading through naming speed tasks},
author = {Noor Z {Al Dahhan} and John R Kirby and Ying Chen and Donald C Brien and Douglas P Munoz},
doi = {10.1111/ejn.14673},
year = {2020},
date = {2020-01-01},
journal = {European Journal of Neuroscience},
volume = {51},
number = {11},
pages = {2277--2298},
abstract = {We combined fMRI with eye tracking and speech recording to examine the neural and cognitive mechanisms that underlie reading. To simplify the study of the complex processes involved during reading, we used naming speed (NS) tasks (also known as rapid automatized naming or RAN) as a focus for this study, in which average reading right-handed adults named sets of stimuli (letters or objects) as quickly and accurately as possible. Due to the possibility of spoken output during fMRI studies creating motion artifacts, we employed both an overt session and a covert session. When comparing the two sessions, there were no significant differences in behavioral performance, sensorimotor activation (except for regions involved in the motor aspects of speech production) or activation in regions within the left-hemisphere-dominant neural reading network. This established that differences found between the tasks within the reading network were not attributed to speech production motion artifacts or sensorimotor processes. Both behavioral and neuroimaging measures showed that letter naming was a more automatic and efficient task than object naming. Furthermore, specific manipulations to the NS tasks to make the stimuli more visually and/or phonologically similar differentially activated the reading network in the left hemisphere associated with phonological, orthographic and orthographic-to-phonological processing, but not articulatory/motor processing related to speech production. These findings further our understanding of the underlying neural processes that support reading by examining how activation within the reading network differs with both task performance and task characteristics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Agnès Alsius; Rachel V Wayne; Martin Paré; Kevin G Munhall High visual resolution matters in audiovisual speech perception, but only for some Journal Article In: Attention, Perception, and Psychophysics, vol. 78, no. 5, pp. 1472–1487, 2016. @article{Alsius2016,
title = {High visual resolution matters in audiovisual speech perception, but only for some},
author = {Agn{è}s Alsius and Rachel V Wayne and Martin Paré and Kevin G Munhall},
doi = {10.3758/s13414-016-1109-4},
year = {2016},
date = {2016-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {78},
number = {5},
pages = {1472--1487},
publisher = {Springer},
abstract = {The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gerry T M Altmann; Yuki Kamide Incremental interpretation at verbs: Restricting the domain of subsequent reference Journal Article In: Cognition, vol. 73, no. 3, pp. 247–264, 1999. @article{Altmann1999,
title = {Incremental interpretation at verbs: Restricting the domain of subsequent reference},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/S0010-0277(99)00059-1},
year = {1999},
date = {1999-01-01},
journal = {Cognition},
volume = {73},
number = {3},
pages = {247--264},
abstract = {Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. In each of two experiments, the onset of saccadic eye movements to the target object (the cake) was significantly later in the move condition than in the eat condition; saccades to the target were launched after the onset of the spoken word cake in the move condition, but before its onset in the eat condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the (as yet unencountered) post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gerry T M Altmann Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm' Journal Article In: Cognition, vol. 93, no. 2, pp. B79–87, 2004. @article{Altmann2004a,
title = {Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm'},
author = {Gerry T M Altmann},
doi = {10.1016/j.cognition.2004.02.005},
year = {2004},
date = {2004-01-01},
journal = {Cognition},
volume = {93},
number = {2},
pages = {B79--87},
abstract = {The 'visual world paradigm' typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding visual scene had been displayed and then removed. For a scene depicting a man, a woman, a cake, and a newspaper, the eyes were subsequently directed, during 'eat' in 'the man will eat the cake', towards where the cake had previously been located even though the screen had been blank for over 2 s. The rapidity of these movements mirrored the anticipatory eye movements observed in previous studies [Cognition 73 (1999) 247; J. Mem. Lang. 49 (2003) 133]. Thus, anticipatory eye movements are not dependent on a concurrent visual scene, but are dependent on a mental record of the scene that is independent of whether the visual scene is still present.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gerry T M Altmann; Yuki Kamide The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing Journal Article In: Journal of Memory and Language, vol. 57, no. 4, pp. 502–518, 2007. @article{Altmann2007,
title = {The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/j.jml.2006.12.004},
year = {2007},
date = {2007-01-01},
journal = {Journal of Memory and Language},
volume = {57},
number = {4},
pages = {502--518},
abstract = {Two experiments explored the representational basis for anticipatory eye movements. Participants heard 'the man will drink ...' or 'the man has drunk ...' (Experiment 1) or 'the man will drink all of ...' or 'the man has drunk all of ...' (Experiment 2). They viewed a concurrent scene depicting a full glass of beer and an empty wine glass (amongst other things). There were more saccades towards the empty wine glass in the past tensed conditions than in the future tense conditions; the converse pattern obtained for looks towards the full glass of beer. We argue that these anticipatory eye movements reflect sensitivity to objects' affordances, and develop an account of the linkage between language processing and visual attention that can account not only for looks towards named objects, but also for those cases (including anticipatory eye movements) where attention is directed towards objects that are not being named.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gerry T M Altmann; Yuki Kamide Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation Journal Article In: Cognition, vol. 111, no. 1, pp. 55–71, 2009. @article{Altmann2009,
title = {Discourse-mediation of the mapping between language and the visual world: Eye movements and mental representation},
author = {Gerry T M Altmann and Yuki Kamide},
doi = {10.1016/j.cognition.2008.12.005},
year = {2009},
date = {2009-01-01},
journal = {Cognition},
volume = {111},
number = {1},
pages = {55--71},
publisher = {Elsevier B.V.},
abstract = {Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either 'The woman will put the glass on the table' or 'The woman is too lazy to put the glass on the table'. Subsequently, with the scene unchanged, participants heard that the woman 'will pick up the bottle, and pour the wine carefully into the glass.' Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after 'pour' (anticipating the glass) and at 'glass' reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gerry T M Altmann Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 190–200, 2011. @article{Altmann2011,
title = {Language can mediate eye movement control within 100 milliseconds, regardless of whether there is anything to move the eyes to},
author = {Gerry T M Altmann},
doi = {10.1016/j.actpsy.2010.09.009},
year = {2011},
date = {2011-01-01},
journal = {Acta Psychologica},
volume = {137},
number = {2},
pages = {190--200},
publisher = {Elsevier B.V.},
abstract = {The delay between the signal to move the eyes, and the execution of the corresponding eye movement, is variable, and skewed; with an early peak followed by a considerable tail. This skewed distribution renders the answer to the question "What is the delay between language input and saccade execution?" problematic; for a given task, there is no single number, only a distribution of numbers. Here, two previously published studies are reanalysed, whose designs enable us to answer, instead, the question: How long does it take, as the language unfolds, for the oculomotor system to demonstrate sensitivity to the distinction between "signal" (eye movements due to the unfolding language) and "noise" (eye movements due to extraneous factors)? In two studies, participants heard either 'the man ...' or 'the girl ...', and the distribution of launch times towards the concurrently, or previously, depicted man in response to these two inputs was calculated. In both cases, the earliest discrimination between signal and noise occurred at around 100 ms. This rapid interplay between language and oculomotor control is most likely due to cancellation of about-to-be executed saccades towards objects (or their episodic trace) that mismatch the earliest phonological moments of the unfolding word.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Noor Al-Zanoon; Michael Dambacher; Victor Kuperman Evidence for a global oculomotor program in reading Journal Article In: Psychological Research, vol. 81, no. 4, pp. 863–877, 2017. @article{AlZanoon2017,
title = {Evidence for a global oculomotor program in reading},
author = {Noor Al-Zanoon and Michael Dambacher and Victor Kuperman},
doi = {10.1007/s00426-016-0786-x},
year = {2017},
date = {2017-01-01},
journal = {Psychological Research},
volume = {81},
number = {4},
pages = {863--877},
publisher = {Springer Berlin Heidelberg},
abstract = {Recent corpus studies of eye movements in reading revealed a substantial increase in saccade amplitudes and fixation durations as the eyes move over the first words of a sentence. This start-up effect suggests a global oculomotor program, which operates on the level of an entire line, in addition to the well-established local programs operating within the visual span. The present study investigates the nature of this global program experimentally and examines whether the start-up effect is predicated on generic visual or specific linguistic characteristics and whether it is mainly reflected in saccade amplitudes, fixation durations or both measures. Eye movements were recorded while 38 participants read (a) normal sentences, (b) sequences of randomly shuffled words and (c) sequences of z-strings. The stimuli were, therefore, similar in their visual features, but varied in the amount of syntactic and lexical information. Further, the stimuli were composed of words or strings that either varied naturally in length (Nonequal condition) or were all restricted to a specific length within a sentence (Equal). The latter condition constrained the variability of saccades and served to dissociate effects of word position in line on saccade amplitudes and fixation durations. A robust start-up effect emerged in saccade amplitudes in all Nonequal stimuli, and, in an attenuated form, in Equal sentences. A start-up effect in single fixation durations was observed in Nonequal and Equal normal sentences, but not in z-strings. These findings support the notion of a global oculomotor program in reading, particularly for the spatial characteristics of motor planning, which rely on visual rather than linguistic information.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Simona Amenta; Marco Marelli; Davide Crepaldi The fruitless effort of growing a fruitless tree: Early morpho-orthographic and morpho-semantic effects in sentence reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 5, pp. 1587–1596, 2015. @article{Amenta2015,
title = {The fruitless effort of growing a fruitless tree: Early morpho-orthographic and morpho-semantic effects in sentence reading},
author = {Simona Amenta and Marco Marelli and Davide Crepaldi},
doi = {10.1037/xlm0000104},
year = {2015},
date = {2015-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {5},
pages = {1587--1596},
abstract = {In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way according to the sentence context to which they belong. This way, each target word was embedded in a sentence eliciting either its transparent or opaque interpretation. We analyzed whether the effect of stem frequency changes according to whether the (very same) word is read as a genuine derivation (transparent context) versus as a pseudoderived word (opaque context). Analysis of the first fixation durations revealed a stem-word frequency effect in both opaque and transparent contexts, thus showing that stems were accessed whether or not they contributed to word meaning, that is, word decomposition is indeed blind to semantics. However, while the stem-word frequency effect was facilitatory in the transparent context, it was inhibitory in the opaque context, thus showing an early involvement of semantic representations. This pattern of data is revealed by words with short suffixes. These results indicate that derived and pseudoderived words are segmented into their constituent morphemes also in natural reading; however, this blind-to-semantics process activates morpheme representations that are semantically connoted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Richard Andersson; Fernanda Ferreira; John M Henderson I see what you're saying: The integration of complex speech and scenes during language comprehension Journal Article In: Acta Psychologica, vol. 137, no. 2, pp. 208–216, 2011. @article{Andersson2011,
title = {I see what you're saying: The integration of complex speech and scenes during language comprehension},
author = {Richard Andersson and Fernanda Ferreira and John M Henderson},
doi = {10.1016/j.actpsy.2011.01.007},
year = {2011},
date = {2011-01-01},
journal = {Acta Psychologica},
volume = {137},
number = {2},
pages = {208--216},
publisher = {Elsevier B.V.},
abstract = {The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research by using complex photographic scenes, three-sentence utterances and mentioning four target objects. The main finding was that objects that are more slowly mentioned, more evenly placed and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye-movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sally Andrews; Brett Miller; Keith Rayner Eye movements and morphological segmentation of compound words: There is a mouse in mousetrap Journal Article In: European Journal of Cognitive Psychology, vol. 16, no. 1-2, pp. 285–311, 2004. @article{Andrews2004,
title = {Eye movements and morphological segmentation of compound words: There is a mouse in mousetrap},
author = {Sally Andrews and Brett Miller and Keith Rayner},
doi = {10.1080/09541440340000123},
year = {2004},
date = {2004-01-01},
journal = {European Journal of Cognitive Psychology},
volume = {16},
number = {1-2},
pages = {285--311},
abstract = {In two experiments, readers' eye movements were monitored as they read sentences containing compound words. In Experiment 1, the frequency of the first and second morpheme was manipulated in compound words of low whole word frequency. Experiment 2 compared pairs of low frequency compounds with high and low frequency first morphemes but identical second morphemes that were embedded in the same sentence frames. The results showed significant effects of the frequency of both morphemes on gaze duration and total fixation time on the compound words. Regression analyses revealed an influence of whole word frequency on the same measures. The results suggest that morphemic constituents of compound words are activated in the course of retrieving the representation of the whole compound word. The fact that the frequency effects were not confined to fixations on the morphemic constituents themselves implies that saccadic eye movements are implemented before morphemic retrieval has been completed. The results highlight the importance of developing more precise models of the perceptual processes underlying reading and how they interact with the processes involved in lexical retrieval and comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sally Andrews; Aaron Veldre Wrapping up sentence comprehension: The role of task demands and individual differences Journal Article In: Scientific Studies of Reading, pp. 1–18, 2020. @article{Andrews2020,
title = {Wrapping up sentence comprehension: The role of task demands and individual differences},
author = {Sally Andrews and Aaron Veldre},
doi = {10.1080/10888438.2020.1817028},
year = {2020},
date = {2020-01-01},
journal = {Scientific Studies of Reading},
pages = {1--18},
publisher = {Routledge},
abstract = {This study used wrap-up effects on eye movements to assess the relationship between online reading behavior and comprehension. Participants, assessed on measures of reading, vocabulary, and spelling, read short passages that manipulated whether a syntactic boundary was unmarked by punctuation, weakly marked by a comma, or strongly marked by a period. Comprehension demands were manipulated by presenting questions after either 25% or 100% of passages. Wrap-up effects at punctuation boundaries manifested principally in rereading of earlier text and were more marked in lower proficiency readers. High comprehension load was associated with longer total reading time but had little impact on wrap-up effects. The relationship between eye movements and comprehension accuracy suggested that poor comprehension was associated with a shallower reading strategy under low comprehension demands. The implications of these findings for understanding how the processes involved in self-regulating comprehension are modulated by reading proficiency and comprehension goals are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Timothy J Slattery; Jinmian Yang; Reinhold Kliegl; Keith Rayner Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously Journal Article In: Visual Cognition, vol. 16, no. 6, pp. 697–707, 2008. @article{Angele2008,
title = {Parafoveal processing in reading: Manipulating n+1 and n+2 previews simultaneously},
author = {Bernhard Angele and Timothy J Slattery and Jinmian Yang and Reinhold Kliegl and Keith Rayner},
doi = {10.1080/13506280802009704},
year = {2008},
date = {2008-01-01},
journal = {Visual Cognition},
volume = {16},
number = {6},
pages = {697--707},
abstract = {The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n + 1 and n + 2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n + 1 location on the assumption that it would be more likely that n + 2 preview effects could be obtained when word n + 1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n + 2 and no evidence for parafoveal-on-foveal effects when word n + 1 is at least four letters long. We discuss implications for models of eye-movement control in reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Keith Rayner Parafoveal processing of word n + 2 during reading: Do the preceding words matter? Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 37, no. 4, pp. 1210–1220, 2011. @article{Angele2011,
title = {Parafoveal processing of word n + 2 during reading: Do the preceding words matter?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1037/a0023096},
year = {2011},
date = {2011-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {37},
number = {4},
pages = {1210--1220},
abstract = {We used the boundary paradigm (Rayner, 1975) to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n + 2 preprocessing effects. In Experiment 1, we tested whether parafoveal processing of the second word to the right of fixation (n + 2) takes place only when the preceding word (n + 1) is very short (Angele, Slattery, Yang, Kliegl, & Rayner, 2008); word n + 1 was always a three-letter word. Before crossing the boundary, preview for both words n + 1 and n + 2 was either incorrect or correct. In a third condition, only the preview for word n + 1 was incorrect. In Experiment 2, we tested whether word frequency of the preboundary word (n) had an influence on the presence of preview benefit and parafoveal-on-foveal effects. Additionally, Experiment 2 contained a condition in which only preview of n + 2 was incorrect. Our findings suggest that effects of parafoveal n + 2 preprocessing are not modulated by either n + 1 word length or n frequency. Furthermore, we did not observe any evidence of parafoveal lexical preprocessing of word n + 2 in either experiment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Keith Rayner Processing the in the parafovea: Are articles skipped automatically? Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 39, no. 2, pp. 649–662, 2013. @article{Angele2013,
title = {Processing the in the parafovea: Are articles skipped automatically?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1037/a0029294},
year = {2013},
date = {2013-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {39},
number = {2},
pages = {649--662},
abstract = {One of the words that readers of English skip most often is the definite article the. Most accounts of reading assume that in order for a reader to skip a word, it must have received some lexical processing. The definite article is skipped so regularly, however, that the oculomotor system might have learned to skip the letter string t-h-e automatically. We tested whether skipping of articles in English is sensitive to context information or whether it is truly automatic in the sense that any occurrence of the letter string the will trigger a skip. This was done using the gaze-contingent boundary paradigm (Rayner, 1975) to provide readers with false parafoveal previews of the article the. All experimental sentences contained a short target verb, the preview of which could be correct (i.e., identical to the actual subsequent word in the sentence; e.g., ace), a nonword (tda), or an infelicitous article preview (the). Our results indicated that readers tended to skip the infelicitous the previews frequently, suggesting that, in many cases, they seemed to be unable to detect the syntactic anomaly in the preview and based their skipping decision solely on the orthographic properties of the article. However, there was some evidence that readers sometimes detected the anomaly, as they also showed increased skipping of the pretarget word in the the preview condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Keith Rayner Eye movements and parafoveal preview of compound words: Does morpheme order matter? Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 3, pp. 505–526, 2013. @article{Angele2013a,
title = {Eye movements and parafoveal preview of compound words: Does morpheme order matter?},
author = {Bernhard Angele and Keith Rayner},
doi = {10.1080/17470218.2011.644572},
year = {2013},
date = {2013-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {66},
number = {3},
pages = {505--526},
abstract = {Recently, there has been considerable debate about whether readers can identify multiple words in parallel or whether they are limited to a serial mode of word identification, processing one word at a time (see, e.g., Reichle, Liversedge, Pollatsek, & Rayner, 2009). Similar questions can be applied to bimorphemic compound words: Do readers identify all the constituents of a compound word in parallel, and does it matter which of the morphemes is identified first? We asked subjects to read compound words embedded in sentences while monitoring their eye movements. Using the boundary paradigm (Rayner, 1975), we manipulated the preview that subjects received of the compound word before they fixated it. In particular, the morpheme order of the preview was either normal (cowboy) or reversed (boycow). Additionally, we manipulated the preview availability for each of the morphemes separately. Preview was thus available for the first morpheme only (cowtxg), for the second morpheme only (enzboy), or for neither of the morphemes (enztxg). We report three major findings: First, there was an effect of morpheme order on gaze durations measured on the compound word, indicating that, as expected, readers obtained a greater preview benefit when the preview presented the morphemes in the correct order than when their order was reversed. Second, gaze durations on the compound word were influenced not only by preview availability for the first, but also by that for the second morpheme. Finally, and most importantly, the results show that readers are able to extract some morpheme information even from a reverse order preview. In summary, readers obtain preview benefit from both constituents of a short compound word, even when the preview does not reflect the correct morpheme order.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Abby E Laishley; Keith Rayner; Simon P Liversedge The effect of high- and low-frequency previews and sentential fit on word skipping during reading Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 40, no. 4, pp. 1181–1203, 2014. @article{Angele2014,
title = {The effect of high- and low-frequency previews and sentential fit on word skipping during reading},
author = {Bernhard Angele and Abby E Laishley and Keith Rayner and Simon P Liversedge},
doi = {10.1037/a0036396},
year = {2014},
date = {2014-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {40},
number = {4},
pages = {1181--1203},
abstract = {In a previous gaze-contingent boundary experiment, Angele and Rayner (2013) found that readers are likely to skip a word that appears to be the definite article the even when syntactic constraints do not allow for articles to occur in that position. In the present study, we investigated whether the word frequency of the preview of a 3-letter target word influences a reader's decision to fixate or skip that word. We found that the word frequency rather than the felicitousness (syntactic fit) of the preview affected how often the upcoming word was skipped. These results indicate that visual information about the upcoming word trumps information from the sentence context when it comes to making a skipping decision. Skipping parafoveal instances of the therefore may simply be an extreme case of skipping high-frequency words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Elizabeth R Schotter; Timothy J Slattery; Tara L Tenenbaum; Klinton Bicknell; Keith Rayner Do successor effects in reading reflect lexical parafoveal processing? Evidence from corpus-based and experimental eye movement data Journal Article In: Journal of Memory and Language, vol. 79-80, pp. 76–96, 2015. @article{Angele2015,
title = {Do successor effects in reading reflect lexical parafoveal processing? Evidence from corpus-based and experimental eye movement data},
author = {Bernhard Angele and Elizabeth R Schotter and Timothy J Slattery and Tara L Tenenbaum and Klinton Bicknell and Keith Rayner},
doi = {10.1016/j.jml.2014.11.003},
year = {2015},
date = {2015-01-01},
journal = {Journal of Memory and Language},
volume = {79-80},
pages = {76--96},
publisher = {Elsevier Inc.},
abstract = {In the past, most research on eye movements during reading involved a limited number of subjects reading sentences with specific experimental manipulations on target words. Such experiments usually only analyzed eye-movement measures on and around the target word. Recently, some researchers have started collecting larger data sets involving large and diverse groups of subjects reading large numbers of sentences, enabling them to consider a larger number of influences and study larger and more representative subject groups. In such corpus studies, most of the words in a sentence are analyzed. The complexity of the design of corpus studies and the many potentially uncontrolled influences in such studies pose new issues concerning the analysis methods and interpretability of the data. In particular, several corpus studies of reading have found an effect of successor word (n+1) frequency on current word (n) fixation times, while studies employing experimental manipulations tend not to. The general interpretation of corpus studies suggests that readers obtain parafoveal lexical information from the upcoming word before they have finished identifying the current word, while the experimental manipulations shed doubt on this claim. In the present study, we combined a corpus analysis approach with an experimental manipulation (i.e., a parafoveal modification of the moving mask technique; Rayner & Bertera, 1979), so that either (a) word n+1, (b) word n+2, (c) both words, or (d) neither word was masked. We found that denying preview for either or both parafoveal words increased average fixation times. Furthermore, we found successor effects similar to those reported in the corpus studies. Importantly, these successor effects were found even when the parafoveal word was masked, suggesting that apparent successor frequency effects may be due to causes that are unrelated to lexical parafoveal preprocessing. We discuss the implications of this finding both for parallel and serial accounts of word identification and for the interpretability of large correlational studies of word identification in reading in general.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bernhard Angele; Timothy J Slattery; Keith Rayner Two stages of parafoveal processing during reading: Evidence from a display change detection task Journal Article In: Psychonomic Bulletin & Review, vol. 23, no. 4, pp. 1241–1249, 2016. @article{Angele2016,
title = {Two stages of parafoveal processing during reading: Evidence from a display change detection task},
author = {Bernhard Angele and Timothy J Slattery and Keith Rayner},
doi = {10.3758/s13423-015-0995-0},
year = {2016},
date = {2016-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {23},
number = {4},
pages = {1241--1249},
publisher = {Psychonomic Bulletin & Review},
abstract = {We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Human Perception and Performance, 37, 1924–1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Benjamin Anible; Paul Twitchell; Gabriel S Waters; Paola E Dussias; Pilar Piñar; Jill P Morford Sensitivity to verb bias in American Sign Language-English bilinguals Journal Article In: Journal of Deaf Studies and Deaf Education, vol. 20, no. 3, pp. 215–228, 2015. @article{Anible2015,
title = {Sensitivity to verb bias in American Sign Language-English bilinguals},
author = {Benjamin Anible and Paul Twitchell and Gabriel S Waters and Paola E Dussias and Pilar Pi{ñ}ar and Jill P Morford},
doi = {10.1093/deafed/env007},
year = {2015},
date = {2015-01-01},
journal = {Journal of Deaf Studies and Deaf Education},
volume = {20},
number = {3},
pages = {215--228},
abstract = {Native speakers of English are sensitive to the likelihood that a verb will appear in a specific subcategorization frame, known as verb bias. Readers rely on verb bias to help them resolve temporary ambiguity in sentence comprehension. We investigate whether deaf sign–print bilinguals who have acquired English syntactic knowledge primarily through print exposure show sensitivity to English verb biases in both production and comprehension. We first elicited sentence continuations for 100 English verbs as an offline production measure of sensitivity to verb bias. We then collected eye movement records to examine whether deaf bilinguals' online parsing decisions are influenced by English verb bias. The results indicate that exposure to a second language primarily via print is sufficient to influence use of implicit frequency-based characteristics of a language in production and also to inform parsing decisions in comprehension for some, but not all, verbs.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jens K Apel; John M Henderson; Fernanda Ferreira Targeting regressions: Do readers pay attention to the left? Journal Article In: Psychonomic Bulletin & Review, vol. 19, no. 6, pp. 1108–1113, 2012. @article{Apel2012a,
title = {Targeting regressions: Do readers pay attention to the left?},
author = {Jens K Apel and John M Henderson and Fernanda Ferreira},
doi = {10.3758/s13423-012-0291-1},
year = {2012},
date = {2012-01-01},
journal = {Psychonomic Bulletin \& Review},
volume = {19},
number = {6},
pages = {1108--1113},
abstract = {The perceptual span during normal reading extends approximately 14 to 15 characters to the right and three to four characters to the left of a current fixation. In the present study, we investigated whether the perceptual span extends farther than three to four characters to the left immediately before readers execute a regression. We used a display-change paradigm in which we masked words beyond the three-to-four-character range to the left of a fixation. We hypothesized that if reading behavior was affected by this manipulation before regressions but not before progressions, we would have evidence that the perceptual span extends farther left before leftward eye movements. We observed significantly shorter regressive saccades and longer fixation and gaze durations in the masked condition when a regression was executed. Forward saccades were entirely unaffected by the manipulations. We concluded that the perceptual span during reading changes, depending on the direction of a following saccade.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Keith S Apfelbaum; Sheila E Blumstein; Bob McMurray Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems Journal Article In: Psychonomic Bulletin & Review, vol. 18, no. 1, pp. 141–149, 2011. @article{Apfelbaum2011,
title = {Semantic priming is affected by real-time phonological competition: Evidence for continuous cascading systems},
author = {Keith S Apfelbaum and Sheila E Blumstein and Bob McMurray},
doi = {10.3758/s13423-010-0039-8},
year = {2011},
date = {2011-01-01},
journal = {Psychonomic Bulletin \& Review},
volume = {18},
number = {1},
pages = {141--149},
abstract = {Lexical-semantic access is affected by the phonological structure of the lexicon. What is less clear is whether such effects are the result of continuous activation between lexical form and semantic processing or whether they arise from a more modular system in which the timing of accessing lexical form determines the timing of semantic activation. This study examined this issue using the visual world paradigm by investigating the time course of semantic priming as a function of the number of phonological competitors. Critical trials consisted of high or low density auditory targets (e.g., horse) and a visual display containing a target, a semantically related object (e.g., saddle), and two phonologically and semantically unrelated objects (e.g., chimney, bikini). Results showed greater magnitude of priming for semantically related objects of low than of high density words, and no differences for high and low density word targets in the time course of looks to the word semantically related to the target. This pattern of results is consistent with models of cascading activation, which predict that lexical activation has continuous effects on the level of semantic activation, with no delays in the onset of semantic activation for phonologically competing words.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Manabu Arai; Roger P G van Gompel; Christoph Scheepers Priming ditransitive structures in comprehension Journal Article In: Cognitive Psychology, vol. 54, no. 3, pp. 218–250, 2007. @article{Arai2007,
title = {Priming ditransitive structures in comprehension},
author = {Manabu Arai and Roger P G van Gompel and Christoph Scheepers},
doi = {10.1016/j.cogpsych.2006.07.001},
year = {2007},
date = {2007-01-01},
journal = {Cognitive Psychology},
volume = {54},
number = {3},
pages = {218--250},
abstract = {Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Manabu Arai; Frank Keller The use of verb-specific information for prediction in sentence processing Journal Article In: Language and Cognitive Processes, vol. 28, no. 4, pp. 525–560, 2013. @article{Arai2013,
title = {The use of verb-specific information for prediction in sentence processing},
author = {Manabu Arai and Frank Keller},
doi = {10.1080/01690965.2012.658072},
year = {2013},
date = {2013-01-01},
journal = {Language and Cognitive Processes},
volume = {28},
number = {4},
pages = {525--560},
abstract = {Recent research has shown that language comprehenders make predictions about upcoming linguistic information. These studies demonstrate that the processor not only analyses the input that it received but also predicts upcoming unseen elements. Two visual world experiments were conducted to examine the type of syntactic information this prediction process has access to. Experiment 1 examined whether the verb's subcategorization information is used for predicting a direct object, by comparing transitive verbs (e.g., punish) to intransitive verbs (e.g., disagree). Experiment 2 examined whether verb frequency information is used for predicting a reduced relative clause by contrasting verbs that are infrequent in the past participle form (e.g., watch) with ones that are frequent in that form (e.g., record). Both experiments showed that comprehenders used lexically specific syntactic information to predict upcoming syntactic structure; this information can be used to avoid garden paths in certain cases, as Experiment 2 demonstrated.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Manabu Arai; Reiko Mazuka The development of Japanese passive syntax as indexed by structural priming in comprehension Journal Article In: Quarterly Journal of Experimental Psychology, vol. 67, no. 1, pp. 60–78, 2014. @article{Arai2014,
title = {The development of Japanese passive syntax as indexed by structural priming in comprehension},
author = {Manabu Arai and Reiko Mazuka},
doi = {10.1080/17470218.2013.790454},
year = {2014},
date = {2014-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {67},
number = {1},
pages = {60--78},
abstract = {A number of previous studies reported a phenomenon of syntactic priming with young children as evidence for cognitive representations required for processing syntactic structures. However, it remains unclear how syntactic priming reflects children's grammatical competence. The current study investigated structural priming of the Japanese passive structure with 5- and 6-year-old children in a visual-world setting. Our results showed a priming effect as anticipatory eye movements to an upcoming referent in these children but the effect was significantly stronger in magnitude in 6-year-olds than in 5-year-olds. Consistently, the responses to comprehension questions revealed that 6-year-olds produced a greater number of correct answers and more answers using the passive structure than 5-year-olds. We also tested adult participants who showed even stronger priming than the children. The results together revealed that language users with the greater linguistic competence with the passives exhibited stronger priming, demonstrating a tight relationship between the effect of priming and the development of grammatical competence. Furthermore, we found that the magnitude of the priming effect decreased over time. We interpret these results in the light of an error-based learning account. Our results also provided evidence for prehead as well as head-independent priming.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Manabu Arai; Chie Nakamura; Reiko Mazuka Predicting the unbeaten path through syntactic priming Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 41, no. 2, pp. 482–500, 2015. @article{Arai2015,
title = {Predicting the unbeaten path through syntactic priming},
author = {Manabu Arai and Chie Nakamura and Reiko Mazuka},
doi = {10.1037/a0038389},
year = {2015},
date = {2015-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {41},
number = {2},
pages = {482--500},
abstract = {A number of previous studies showed that comprehenders make use of lexically based constraints such as subcategorization frequency in processing structurally ambiguous sentences. One piece of such evidence is lexically specific syntactic priming in comprehension; following the costly processing of a temporarily ambiguous sentence, comprehenders experience less processing difficulty with the same structure with the same verb in subsequent processing. In previous studies using a reading paradigm, however, the effect was observed at or following disambiguating information and it is not known whether a priming effect affects only the process of resolving structural ambiguity following disambiguating input or it also affects the process before ambiguity is resolved. Using a visual world paradigm, the current study addressed this issue with Japanese relative clause sentences. Our results demonstrated that after experiencing the relative clause structure, comprehenders were more likely to predict the usually dispreferred structure immediately upon hearing the same verb. No compatible effect, in contrast, was observed on hearing a different verb. Our results are consistent with the constraint-based lexicalist view, which assumes the parallel activation of possible structural analyses at the verb. Our study demonstrated that an experience of a dispreferred structure activates the structural information in a lexically specific manner, leading comprehenders to predict another instance of the same structure on encountering the same verb.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Manabu Arai; Chie Nakamura It's harder to break a relationship when you commit long Journal Article In: PLoS ONE, vol. 11, no. 6, pp. 1–13, 2016. @article{Arai2016,
title = {It's harder to break a relationship when you commit long},
author = {Manabu Arai and Chie Nakamura},
doi = {10.1371/journal.pone.0156482},
year = {2016},
date = {2016-01-01},
journal = {PLoS ONE},
volume = {11},
number = {6},
pages = {1--13},
abstract = {Past research has produced evidence that parsing commitments strengthen over the processing of additional linguistic elements that are consistent with the commitments and undoing strong commitments takes more time than undoing weak commitments. It remains unclear, however, whether this so-called digging-in effect is exclusively due to the length of an ambiguous region or at least partly to the extra cost of processing these additional phrases. The current study addressed this issue by testing Japanese relative clause structure, where lexical content and sentence meaning were controlled for. The results showed evidence for a digging-in effect reflecting the strengthened commitment to an incorrect analysis caused by the processing of additional adjuncts. Our study provides strong support for the dynamical, self-organizing models of sentence processing but poses a problem for other models including serial two-stage models as well as frequency-based probabilistic models such as the surprisal theory.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Susana Araújo; Falk Huettig; Antje Meyer What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements Journal Article In: Scientific Studies of Reading, pp. 1–16, 2020. @article{Araujo2020,
title = {What underlies the deficit in rapid automatized naming (RAN) in adults with dyslexia? Evidence from eye movements},
author = {Susana Ara{ú}jo and Falk Huettig and Antje Meyer},
doi = {10.1080/10888438.2020.1867863},
year = {2020},
date = {2020-01-01},
journal = {Scientific Studies of Reading},
pages = {1--16},
publisher = {Routledge},
abstract = {This eye-tracking study explored how phonological encoding and speech production planning for successive words are coordinated in adult readers with dyslexia (N = 22) and control readers (N = 25) during rapid automatized naming (RAN). Using an object-RAN task, we orthogonally manipulated the word-form frequency and phonological neighborhood density of the object names and assessed the effects on speech and eye movements and their temporal coordination. In both groups, there was a significant interaction between word frequency and neighborhood density: shorter fixations for dense than for sparse neighborhoods were observed for low- but not for high-frequency words. This finding does not suggest a specific difficulty in lexical phonological access in dyslexia. However, in readers with dyslexia only, these lexical effects percolated to the late processing stages, indicated by longer offset eye-speech lags. We close by discussing potential reasons for this finding, including suboptimal specification of phonological representations and deficits in attention control or in multi-item coordination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
John Archibald Second language phonology as redeployment of L1 phonological knowledge Journal Article In: Canadian Journal of Linguistics / La revue canadienne de linguistique, vol. 50, no. 1, pp. 285–314, 2005. @article{Archibald2005,
title = {Second language phonology as redeployment of L1 phonological knowledge},
author = {John Archibald},
doi = {10.1353/cjl.2007.0000},
year = {2005},
date = {2005-01-01},
journal = {Canadian Journal of Linguistics / La revue canadienne de linguistique},
volume = {50},
number = {1},
pages = {285--314},
abstract = {This article presents research showing that second language (L2) learners do not have deficient representations and they are capable of acquiring structures that are absent from their first language (L1). The Redeployment Hypothesis—which claims that L2 phonologies include novel representations created via redeployment of L1 phonological components—is consistent with data from several domains, including acquisition of phonological features, syllable structure, moraic structure, and metrical structure. Moreover, it is shown that input prominence plays a role in L2 acquisition, and that language learners are sensitive to robust phonetic cues. Finally, studies done on interlingual homographs and homophones argue for non-selective access to the bilingual lexicon, suggesting that the language processing capacity is always engaged.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Scott P Ardoin; Katherine S Binder; Andrea M Zawoyski; Tori E Foster; Leslie A Blevins Using eye-tracking procedures to evaluate generalization effects: Practicing target words during repeated readings within versus across texts Journal Article In: School Psychology Review, vol. 42, no. 4, pp. 477–495, 2013. @article{Ardoin2013,
title = {Using eye-tracking procedures to evaluate generalization effects: Practicing target words during repeated readings within versus across texts},
author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Tori E Foster and Leslie A Blevins},
year = {2013},
date = {2013-01-01},
journal = {School Psychology Review},
volume = {42},
number = {4},
pages = {477--495},
abstract = {Repeated readings is a frequently studied and recommended intervention for improving reading fluency. Typically, researchers investigate generalization of repeated readings interventions by assessing students' accuracy and rate on researcher-developed high word overlap passages. Unfortunately, this methodology may mask intervention effects given that the dependent measure is reflective of time spent by students reading both practiced and unpracticed words. Eye-tracking procedures have the potential to overcome this limitation. The current study examined the eye movements of participants who were (a) not provided with any intervention (n = 28), (b) provided with repeated readings on a single passage containing a set of target words (n = 28), or (c) provided the opportunity to read four different passages each containing the same set of target words (n = 28). Students' reading of a novel passage containing the target words provides evidence to support recommendations that schools use repeated readings. Copyright 2013 by the National Association of School Psychologists.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Scott P Ardoin; Katherine S Binder; Tori E Foster; Andrea M Zawoyski Repeated versus wide reading: A randomized control design study examining the impact of fluency interventions on underlying reading behavior Journal Article In: Journal of School Psychology, vol. 59, pp. 13–38, 2016. @article{Ardoin2016,
title = {Repeated versus wide reading: A randomized control design study examining the impact of fluency interventions on underlying reading behavior},
author = {Scott P Ardoin and Katherine S Binder and Tori E Foster and Andrea M Zawoyski},
doi = {10.1016/j.jsp.2016.09.002},
year = {2016},
date = {2016-01-01},
journal = {Journal of School Psychology},
volume = {59},
pages = {13--38},
publisher = {Society for the Study of School Psychology},
abstract = {Repeated readings (RR) has garnered much attention as an evidence-based intervention designed to improve all components of reading fluency (rate, accuracy, prosody, and comprehension). Despite this attention, there is not an abundance of research comparing its effectiveness to other potential interventions. The current study presents the findings from a randomized control trial study involving the assignment of 168 second grade students to a RR, wide reading (WR), or business as usual condition. Intervention students were provided with 9–10 weeks of intervention with sessions occurring four times per week. Pre- and post-testing were conducted using Woodcock-Johnson III reading achievement measures (Woodcock, McGrew, & Mather, 2001), curriculum-based measurement (CBM) probes, measures of prosody, and measures of students' eye movements when reading. Changes in fluency were also monitored using weekly CBM progress monitoring procedures. Data were collected on the amount of time students spent reading and the number of words read by students during each intervention session. Results indicate substantial gains made by students across conditions, with some measures indicating greater gains by students in the two intervention conditions. Analyses do not indicate that RR was superior to WR. In addition to expanding the RR literature, this study greatly expands research evaluating changes in reading behaviors that occur with improvements in reading fluency. Implications regarding whether schools should provide more opportunities to repeatedly practice the same text (i.e., RR) or practice a wide range of text (i.e., WR) are provided.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Scott P Ardoin; Katherine S Binder; Andrea M Zawoyski; Tori E Foster Examining the maintenance and generalization effects of repeated practice: A comparison of three interventions Journal Article In: Journal of School Psychology, vol. 68, pp. 1–18, 2018. @article{Ardoin2018,
title = {Examining the maintenance and generalization effects of repeated practice: A comparison of three interventions},
author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Tori E Foster},
doi = {10.1016/j.jsp.2017.12.002},
year = {2018},
date = {2018-01-01},
journal = {Journal of School Psychology},
volume = {68},
pages = {1--18},
abstract = {Repeated reading (RR) procedures are consistent with the procedures recommended by Haring and Eaton's (1978) Instructional Hierarchy (IH) for promoting students' fluent responding to newly learned stimuli. It is therefore not surprising that an extensive body of literature exists, which supports RR as an effective practice for promoting students' reading fluency of practiced passages. Less clear, however, is the extent to which RR helps students read the words practiced in an intervention passage when those same words are presented in a new passage. The current study employed randomized control design procedures to examine the maintenance and generalization effects of three interventions that were designed based upon Haring and Eaton's (1978) IH. Across four days, students either practiced reading (a) the same passage seven times (RR+RR), (b) one passage four times and three passages each once (RR+Guided Wide Reading [GWR]), or (c) seven passages each once (GWR+GWR). Students participated in the study across 2 weeks, with intervention being provided on a different passage set each week. All passages practiced within a week, regardless of condition, contained four target low frequency and four high frequency words. Across the 130 students for whom data were analyzed, results indicated that increased opportunities to practice words led to greater maintenance effects when passages were read seven days later but revealed minimal differences across conditions in students' reading of target words presented within a generalization passage.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Scott P Ardoin; Katherine S Binder; Andrea M Zawoyski; Eloise Nimocks; Tori E Foster Measuring the behavior of reading comprehension test takers: What do they do, and should they do it? Journal Article In: Reading Research Quarterly, vol. 54, no. 4, pp. 507–529, 2019. @article{Ardoin2019,
title = {Measuring the behavior of reading comprehension test takers: What do they do, and should they do it?},
author = {Scott P Ardoin and Katherine S Binder and Andrea M Zawoyski and Eloise Nimocks and Tori E Foster},
doi = {10.1002/rrq.246},
year = {2019},
date = {2019-01-01},
journal = {Reading Research Quarterly},
volume = {54},
number = {4},
pages = {507--529},
abstract = {The authors sought to further the understanding of reading processes and their links to comprehension using two reading tasks for elementary-grade students. One hundred sixty-six students in grades 2–5 were randomly assigned to one of two conditions: reading with questions presented concurrently with text or reading with questions presented after reading the text (with the text unavailable when answering questions). Eye movement data suggested different processes for each task: Rereading occurred and more time was spent on higher level processing measures in the with-text condition, and in particular, those who did not reread had more accurate answers than those who engaged in rereading. Measurement of students' precision in returning directly to the portion of the passage with information corresponding to a question also predicted students' response accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nathan Arnett; Matthew Wagers Subject encodings and retrieval interference Journal Article In: Journal of Memory and Language, vol. 93, pp. 22–54, 2017. @article{Arnett2017,
title = {Subject encodings and retrieval interference},
author = {Nathan Arnett and Matthew Wagers},
doi = {10.1016/j.jml.2016.07.005},
year = {2017},
date = {2017-01-01},
journal = {Journal of Memory and Language},
volume = {93},
pages = {22--54},
abstract = {Interference has been identified as a cause of processing difficulty in linguistic dependencies, such as the subject-verb relation (Van Dyke and Lewis, 2003). However, while mounting evidence implicates retrieval interference in sentence processing, the nature of the retrieval cues involved - and thus the source of difficulty - remains largely unexplored. Three experiments used self-paced reading and eyetracking to examine the ways in which the retrieval cues provided at a verb characterize subjects. Syntactic theory has identified a number of properties correlated with subjecthood, both phrase-structural and thematic. Findings replicate and extend previous findings of interference at a verb from additional subjects, but indicate that retrieval outcomes are relativized to the syntactic domain in which the retrieval occurs. One, the cues distinguish between thematic subjects in verbal and nominal domains. Two, within the verbal domain, retrieval is sensitive to abstract syntactic properties associated with subjects and their clauses. We argue that the processing at a verb requires cue-driven retrieval, and that the retrieval cues utilize abstract grammatical properties which may reflect parser expectations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Anja Arnhold; Vincent Porretta; Aoju Chen; Saskia A J M Verstegen; Ivy Mok; Juhani Järvikivi (Mis)understanding your native language: Regional accent impedes processing of information status Journal Article In: Psychonomic Bulletin & Review, vol. 27, no. 4, pp. 801–808, 2020. @article{Arnhold2020,
title = {(Mis)understanding your native language: Regional accent impedes processing of information status},
author = {Anja Arnhold and Vincent Porretta and Aoju Chen and Saskia A J M Verstegen and Ivy Mok and Juhani Järvikivi},
doi = {10.3758/s13423-020-01731-w},
year = {2020},
date = {2020-01-01},
journal = {Psychonomic Bulletin & Review},
volume = {27},
number = {4},
pages = {801--808},
publisher = {Springer},
abstract = {Native-speaker listeners constantly predict upcoming units of speech as part of language processing, using various cues. However, this process is impeded in second-language listeners, as well as when the speaker has an unfamiliar accent. Whereas previous research has largely concentrated on the pronunciation of individual segments in foreign-accented speech, we show that regional accent impedes higher levels of language processing, making native listeners' processing resemble that of second-language listeners. In Experiment 1, 42 native speakers of Canadian English followed instructions spoken in British English to move objects on a screen while their eye movements were tracked. Native listeners use prosodic cues to information status to disambiguate between two possible referents, a new and a previously mentioned one, before they have heard the complete word. By contrast, the Canadian participants, similarly to second-language speakers, were not able to make full use of prosodic cues in the way native British listeners do. In Experiment 2, 19 native speakers of Canadian English rated the British English instructions used in Experiment 1, as well as the same instructions spoken by a Canadian imitating the British English prosody. While information status had no effect for the Canadian imitations, the original stimuli received higher ratings when prosodic realization and information status of the referent matched than for mismatches, suggesting a native-like competence in these offline ratings. These findings underline the importance of expanding psycholinguistic models of second language/dialect processing and representation to include both prosody and regional variation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jennifer E Arnold; Carla L. Hudson Kam; Michael K Tanenhaus If you say thee uh you are describing something hard: The on-line attribution of disfluency during reference comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 33, no. 5, pp. 914–930, 2007. @article{Arnold2007,
title = {If you say thee uh you are describing something hard: The on-line attribution of disfluency during reference comprehension},
author = {Jennifer E Arnold and Carla L. Hudson Kam and Michael K Tanenhaus},
doi = {10.1037/0278-7393.33.5.914},
year = {2007},
date = {2007-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {33},
number = {5},
pages = {914--930},
abstract = {Eye-tracking and gating experiments examined reference comprehension with fluent (Click on the red. . .) and disfluent (Click on [pause] thee uh red . . .) instructions while listeners viewed displays with 2 familiar (e.g., ice cream cones) and 2 unfamiliar objects (e.g., squiggly shapes). Disfluent instructions made unfamiliar objects more expected, which influenced listeners' on-line hypotheses from the onset of the color word. The unfamiliarity bias was sharply reduced by instructions that the speaker had object agnosia, and thus difficulty naming familiar objects (Experiment 2), but was not affected by intermittent sources of speaker distraction (beeps and construction noises; Experiment 3). The authors conclude that listeners can make situation-specific inferences about likely sources of disfluency, but there are some limitations to these attributions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jennifer E Arnold THE BACON not the bacon: How children and adults understand accented and unaccented noun phrases Journal Article In: Cognition, vol. 108, no. 1, pp. 69–99, 2008. @article{Arnold2008,
title = {THE BACON not the bacon: How children and adults understand accented and unaccented noun phrases},
author = {Jennifer E Arnold},
doi = {10.1016/j.cognition.2008.01.001},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {108},
number = {1},
pages = {69--99},
abstract = {Two eye-tracking experiments examine whether adults and 4- and 5-year-old children use the presence or absence of accenting to guide their interpretation of noun phrases (e.g., the bacon) with respect to the discourse context. Unaccented nouns tend to refer to contextually accessible referents, while accented variants tend to be used for less accessible entities. Experiment 1 confirms that accenting is informative for adults, who show a bias toward previously-mentioned objects beginning 300 ms after the onset of unaccented nouns and pronouns. But contrary to findings in the literature, accented words produced no observable bias. In Experiment 2, 4- and 5-year-olds were also biased toward previously-mentioned objects with unaccented nouns and pronouns. This builds on findings of limits on children's on-line reference comprehension [Arnold, J. E., Brown-Schmidt, S., & Trueswell, J. C. (2007). Children's use of gender and order-of-mention during pronoun comprehension. Language and Cognitive Processes], showing that children's interpretation of unaccented nouns and pronouns is constrained in contexts with one single highly accessible object.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jennifer E Arnold; Shin Yi C Lao Put in last position something previously unmentioned: Word order effects on referential expectancy and reference comprehension Journal Article In: Language and Cognitive Processes, vol. 23, no. 2, pp. 282–295, 2008. @article{Arnold2008a,
title = {Put in last position something previously unmentioned: Word order effects on referential expectancy and reference comprehension},
author = {Jennifer E Arnold and Shin Yi C Lao},
doi = {10.1080/01690960701536805},
year = {2008},
date = {2008-01-01},
journal = {Language and Cognitive Processes},
volume = {23},
number = {2},
pages = {282--295},
abstract = {Research has shown that the comprehension of definite referring expressions (e.g., "the triangle") tends to be faster for "given" (previously mentioned) referents, compared with new referents. This has been attributed to the presence of given information in the consciousness of discourse participants (e.g., Chafe, 1994) suggesting that given is always more accessible. By contrast, we find a bias toward new referents during the on-line comprehension of the direct object in heavy-NP-shifted word orders, e.g., "Put on the star the...." This order tends to be used for new direct objects; canonical unshifted orders are more common with given direct objects. Thus, word order provides probabilistic information about the givenness or newness of the direct object. Results from eyetracking and gating experiments show that the traditional given bias only occurs with unshifted orders; with heavy-NP-shifted orders, comprehenders expect the object to be new, and comprehension for new referents is facilitated.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jennifer E Arnold Women and men have different discourse biases for pronoun interpretation Journal Article In: Discourse Processes, vol. 52, no. 2, pp. 77–110, 2015. @article{Arnold2015a,
title = {Women and men have different discourse biases for pronoun interpretation},
author = {Jennifer E Arnold},
doi = {10.1080/0163853X.2014.946847},
year = {2015},
date = {2015-01-01},
journal = {Discourse Processes},
volume = {52},
number = {2},
pages = {77--110},
publisher = {Taylor & Francis},
abstract = {Two experiments examine how men and women interpret pronouns in discourse. Adults are known to show a strong “first-mention bias”: When two characters are mentioned (Michael played with William . . . ), comprehenders tend to interpret subsequent pronouns as coreferential with the first of the two characters and to find pronouns more natural than names for reference to the first character. However, this bias is not absolute. Experiment 1 demonstrates a stronger first-mention bias for women than men in their naturalness ratings for short stories. Experiment 2 monitors eye movements during story comprehension and finds that women are more likely than men to consider the first-mentioned character as the pronoun referent. These findings reveal the first known gender difference in reference processing and reinforce the view that reference processing is driven by more than the discourse context alone.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Vahid Aryadoust; Bee Hoon Ang Exploring the frontiers of eye tracking research in language studies: A novel co-citation scientometric review Journal Article In: Computer Assisted Language Learning, 2019. @article{Aryadoust2019,
title = {Exploring the frontiers of eye tracking research in language studies: A novel co-citation scientometric review},
author = {Vahid Aryadoust and Bee Hoon Ang},
doi = {10.1080/09588221.2019.1647251},
year = {2019},
date = {2019-01-01},
journal = {Computer Assisted Language Learning},
publisher = {Routledge},
abstract = {Eye tracking technology has become an increasingly popular methodology in language studies. Using data from 27 journals in language sciences indexed in the Social Science Citation Index and/or Scopus, we conducted an in-depth scientometric analysis of 341 research publications together with their 14,866 references between 1994 and 2018. We identified a number of countries, researchers, universities, and institutes with large numbers of publications in eye tracking research in language studies. We further discovered a mixed multitude of connected research trends that have shaped the nature and development of eye tracking research. Specifically, a document co-citation analysis revealed a number of major research clusters, their key topics, connections, and bursts (sudden citation surges). For example, the foci of clusters #0 through #5 were found to be perceptual learning, regressive eye movement(s), attributive adjective(s), stereotypical gender, discourse processing, and bilingual adult(s). The content of all the major clusters was closely examined and synthesized in the form of an in-depth review. Finally, we grounded the findings within a data-driven theory of scientific revolution and discussed how the observed patterns have contributed to the emergence of new trends. As the first scientometric investigation of eye tracking research in language studies, the present study offers several implications for future research that are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jane Ashby; Jinmian Yang; Kris H C Evans; Keith Rayner Eye movements and the perceptual span in silent and oral reading Journal Article In: Attention, Perception, and Psychophysics, vol. 74, no. 4, pp. 634–640, 2012. @article{Ashby2012,
title = {Eye movements and the perceptual span in silent and oral reading},
author = {Jane Ashby and Jinmian Yang and Kris H C Evans and Keith Rayner},
doi = {10.3758/s13414-012-0277-0},
year = {2012},
date = {2012-01-01},
journal = {Attention, Perception, and Psychophysics},
volume = {74},
number = {4},
pages = {634--640},
abstract = {Previous research has examined parafoveal processing during silent reading, but little is known about the role of these processes in oral reading. Given that masking parafoveal information slows down silent reading, we asked whether a similar effect also occurs in oral reading. To investigate the role of parafoveal processing in silent and oral reading, we manipulated the parafoveal information available to readers by changing the size of a gaze-contingent moving window. Participants read silently and orally in a one-word window and a three-word window condition as we monitored their eye movements. The lack of parafoveal information slowed reading speed in both oral and silent reading. However, the effects of parafoveal information were larger in silent reading than in oral reading, because preview information had different effects on both when and how often the eyes moved. Parafoveal information benefitted silent reading for faster readers more than for slower readers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jane Ashby; Heather Dix; Morgan Bontrager Phonemic awareness contributes to text reading fluency: Evidence from eye movements. Journal Article In: School Psychology Review, vol. 42, no. 2, pp. 157–170, 2013. @article{Ashby2013,
title = {Phonemic awareness contributes to text reading fluency: Evidence from eye movements.},
author = {Jane Ashby and Heather Dix and Morgan Bontrager},
year = {2013},
date = {2013-01-01},
journal = {School Psychology Review},
volume = {42},
number = {2},
pages = {157--170},
abstract = {Although phonemic awareness is a known predictor of early decoding and word recognition, less is known about relationships between phonemic awareness and text reading fluency. This longitudinal study is the first to investigate this relationship by measuring eye movements during picture matching tasks and during silent sentence reading. Time spent looking at the correct target during phonemic awareness and receptive spelling tasks gauged the efficiency of phonological and orthographic processes. Children's eye movements during sentence reading provided a direct measure of silent reading fluency for comprehended text. Results indicate that children who processed the phonemic awareness targets more slowly in Grade 2 tended to be slower readers in Grade 3. Processing difficulty during a receptive spelling task was related to reading fluency within Grade 2. Findings suggest that inefficient phonemic processing contributes to poor silent reading fluency after second grade.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Emily Atkinson; Matthew W Wagers; Jeffrey Lidz; Colin Phillips; Akira Omaki Developing incrementality in filler-gap dependency processing Journal Article In: Cognition, vol. 179, pp. 132–149, 2018. @article{Atkinson2018,
title = {Developing incrementality in filler-gap dependency processing},
author = {Emily Atkinson and Matthew W Wagers and Jeffrey Lidz and Colin Phillips and Akira Omaki},
doi = {10.1016/j.cognition.2018.05.022},
year = {2018},
date = {2018-01-01},
journal = {Cognition},
volume = {179},
pages = {132--149},
abstract = {Much work has demonstrated that children are able to use bottom-up linguistic cues to incrementally interpret sentences, but there is little understanding of the extent to which children's comprehension mechanisms are guided by top-down linguistic information that can be learned from distributional regularities in the input. Using a visual world eye tracking experiment and a corpus analysis, the current study investigates whether 5- and 6-year-old children incrementally assign interpretations to temporarily ambiguous wh-questions like What was Emily eating the cake with __? In the visual world eye-tracking experiment, adults demonstrated evidence for active dependency formation at the earliest region (i.e., the verb region), while 6-year-old children demonstrated a spill-over effect of this bias in the subsequent NP region. No evidence for this bias was found in 5-year-olds, although the speed of arrival at the ultimately correct instrument interpretation appears to be modulated by the vocabulary size. These results suggest that adult-like active formation of filler-gap dependencies begins to emerge around age 6. The corpus analysis of filler-gap dependency structures in adult corpora and child corpora demonstrate that the distributional regularities in either corpora are equally in favor of early, incremental completion of filler-gap dependencies, suggesting that the distributional information in the input is either not relevant to this incremental bias, or that 5-year-old children are somehow unable to recruit this information in real-time comprehension. Taken together, these findings shed light on the origin of the incremental processing bias in filler-gap dependency processing, as well as on the role of language experience and cognitive constraints in the development of incremental sentence processing mechanisms.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sheena K Au-Yeung; Johanna K Kaakinen; Simon P Liversedge; Valerie Benson Processing of written irony in autism spectrum disorder: An eye-movement study Journal Article In: Autism Research, vol. 8, no. 6, pp. 749–760, 2015. @article{AuYeung2015,
title = {Processing of written irony in autism spectrum disorder: An eye-movement study},
author = {Sheena K Au-Yeung and Johanna K Kaakinen and Simon P Liversedge and Valerie Benson},
doi = {10.1002/aur.1490},
year = {2015},
date = {2015-01-01},
journal = {Autism Research},
volume = {8},
number = {6},
pages = {749--760},
abstract = {Previous research has suggested that individuals with Autism Spectrum Disorders (ASD) have difficulties understanding others' communicative intent and with using contextual information to correctly interpret irony. We recorded the eye movements of typically developing (TD) adults and ASD adults when they read statements that could either be interpreted as ironic or non-ironic depending on the context of the passage. Participants with ASD performed as well as TD controls in their comprehension accuracy for speaker's statements in both ironic and non-ironic conditions. Eye movement data showed that for both participant groups, total reading times were longer for the critical region containing the speaker's statement and a subsequent sentence restating the context in the ironic condition compared to the non-ironic condition. The results suggest that more effortful processing is required in both ASD and TD participants for ironic compared with literal non-ironic statements, and that individuals with ASD were able to use contextual information to infer a non-literal interpretation of ironic text. Individuals with ASD, however, spent more time overall than TD controls rereading the passages, to a similar degree across both ironic and non-ironic conditions, suggesting that they either take longer to construct a coherent discourse representation of the text, or that they take longer to make the decision that their representation of the text is reasonable based on their knowledge of the world.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Emma L Axelsson; Rachelle L Dawson; Sharon Y Yim; Tashfia Quddus Mine, mine, mine: Self-reference and children's retention of novel words Journal Article In: Frontiers in Psychology, vol. 9, pp. 1–9, 2018. @article{Axelsson2018,
title = {Mine, mine, mine: Self-reference and children's retention of novel words},
author = {Emma L Axelsson and Rachelle L Dawson and Sharon Y Yim and Tashfia Quddus},
doi = {10.3389/fpsyg.2018.00958},
year = {2018},
date = {2018-01-01},
journal = {Frontiers in Psychology},
volume = {9},
pages = {1--9},
abstract = {Adults demonstrate enhanced memory for words encoded as belonging to themselves compared to those belonging to another. Known as the self-reference effect, there is evidence for the effect in children as young as three. Toddlers are efficient in linking novel words to novel objects, but have difficulties retaining multiple word-object associations. The aim here was to investigate the self-reference ownership paradigm on 3-year-old children's retention of novel words. Following exposure to each of four novel word-object pairings, children were told that objects either belonged to them or another character. Children demonstrated significantly higher immediate retention of self-referenced compared to other-referenced items. Retention was also tested 4 h later and the following morning. Retention for self- and other-referenced words was significantly higher than chance at both delayed time points, but the difference between the self- and other-referenced words was no longer significant. The findings suggest that when it comes to toddlers' retention of multiple novel words there is an initial memory enhancing effect for self- compared to other-referenced items, but the difference diminishes over time. Children's looking times during the self-reference presentations were positively associated with retention of self-referenced words 4 h later. Looking times during the other-reference presentations were positively associated with proportional looking at other-referenced items during immediate retention testing. The findings have implications for children's memory for novel words and future studies could test children's explicit memories for the ownership manipulation itself and whether the effect is superior to other forms of memory supports such as ostensive naming.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nicole D Ayasse; Arthur Wingfield A tipping point in listening effort: Effects of linguistic complexity and age-related hearing loss on sentence comprehension Journal Article In: Trends in Hearing, vol. 22, 2018. @article{Ayasse2018,
title = {A tipping point in listening effort: Effects of linguistic complexity and age-related hearing loss on sentence comprehension},
author = {Nicole D Ayasse and Arthur Wingfield},
doi = {10.1177/2331216518790907},
year = {2018},
date = {2018-01-01},
journal = {Trends in Hearing},
volume = {22},
abstract = {In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nicole D Ayasse; Arthur Wingfield The two sides of linguistic context: Eye-tracking as a measure of semantic competition in spoken word recognition among younger and older adults Journal Article In: Frontiers in Human Neuroscience, vol. 14, pp. 1–11, 2020. @article{Ayasse2020a,
title = {The two sides of linguistic context: Eye-tracking as a measure of semantic competition in spoken word recognition among younger and older adults},
author = {Nicole D Ayasse and Arthur Wingfield},
doi = {10.3389/fnhum.2020.00132},
year = {2020},
date = {2020-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {14},
pages = {1--11},
abstract = {Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects; whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mireille Babineau; Alex de Carvalho; John Trueswell; Anne Christophe Familiar words can serve as a semantic seed for syntactic bootstrapping Journal Article In: Developmental Science, vol. 24, pp. 1–12, 2020. @article{Babineau2020,
title = {Familiar words can serve as a semantic seed for syntactic bootstrapping},
author = {Mireille Babineau and Alex de Carvalho and John Trueswell and Anne Christophe},
doi = {10.1111/desc.13010},
year = {2020},
date = {2020-01-01},
journal = {Developmental Science},
volume = {24},
pages = {1--12},
abstract = {Young children can exploit the syntactic context of a novel word to narrow down its probable meaning. But how do they learn which contexts are linked to which semantic features in the first place? We investigate if 3- to 4-year-old children (n = 60) can learn about a syntactic context from tracking its use with only a few familiar words. After watching a 5-min training video in which a novel function word (i.e., ‘ko') replaced either personal pronouns or articles, children were able to infer semantic properties for novel words co-occurring with the newly learned function word (i.e., objects vs. actions). These findings implicate a mechanism by which a distributional analysis, associated with a small vocabulary of known words, could be sufficient to identify some properties associated with specific syntactic contexts.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Julia Bahnmueller; Stefan Huber; Hans-Christoph Nuerk; Silke M Göbel; Korbinian Moeller Processing multi-digit numbers: a translingual eye-tracking study Journal Article In: Psychological Research, vol. 80, no. 3, pp. 422–433, 2016. @article{Bahnmueller2016,
title = {Processing multi-digit numbers: a translingual eye-tracking study},
author = {Julia Bahnmueller and Stefan Huber and Hans-Christoph Nuerk and Silke M Göbel and Korbinian Moeller},
doi = {10.1007/s00426-015-0729-y},
year = {2016},
date = {2016-01-01},
journal = {Psychological Research},
volume = {80},
number = {3},
pages = {422--433},
abstract = {The present study aimed at investigating the underlying cognitive processes and language specificities of three-digit number processing. More specifically, it was intended to clarify whether the single digits of three-digit numbers are processed in parallel and/or sequentially and whether processing strategies are influenced by the inversion of number words with respect to the Arabic digits [e.g., 43: dreiundvierzig (“three and forty”)] and/or by differences in reading behavior of the respective first language. Therefore, English- and German-speaking adults had to complete a three-digit number comparison task while their eye-fixation behavior was recorded. Replicating previous results, reliable hundred-decade-compatibility effects (e.g., 742_896: hundred-decade compatible because 7 < 8 and 4 < 9; 362_517: hundred-decade incompatible because 3 < 5 but 6 > 1) for English- as well as hundred-unit-compatibility effects for English- and German-speaking participants were observed, indicating parallel processing strategies. While no indices of partial sequential processing were found for the English-speaking group, about half of the German-speaking participants showed an inverse hundred-decade-compatibility effect accompanied by longer inspection time on the hundred digit indicating additional sequential processes. Thereby, the present data revealed that in transition from two- to higher multi-digit numbers, the homogeneity of underlying processing strategies varies between language groups. The regular German orthography (allowing for letter-by-letter reading) and its associated more sequential reading behavior may have promoted sequential processing strategies in multi-digit number processing. Furthermore, these results indicated that the inversion of number words alone is not sufficient to explain all observed language differences in three-digit number processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xuejun Bai; Guoli Yan; Simon P Liversedge; Chuanli Zang; Keith Rayner Reading spaced and unspaced Chinese text: Evidence from eye movements Journal Article In: Journal of Experimental Psychology: Human Perception and Performance, vol. 34, no. 5, pp. 1277–1287, 2008. @article{Bai2008,
title = {Reading spaced and unspaced Chinese text: Evidence from eye movements},
author = {Xuejun Bai and Guoli Yan and Simon P Liversedge and Chuanli Zang and Keith Rayner},
doi = {10.1037/0096-1523.34.5.1277},
year = {2008},
date = {2008-01-01},
journal = {Journal of Experimental Psychology: Human Perception and Performance},
volume = {34},
number = {5},
pages = {1277--1287},
abstract = {Native Chinese readers' eye movements were monitored as they read text that did or did not demark word boundary information. In Experiment 1, sentences had 4 types of spacing: normal unspaced text, text with spaces between words, text with spaces between characters that yielded nonwords, and finally text with spaces between every character. The authors investigated whether the introduction of spaces into unspaced Chinese text facilitates reading and whether the word or, alternatively, the character is a unit of information that is of primary importance in Chinese reading. Global and local measures indicated that sentences with unfamiliar word spaced format were as easy to read as visually familiar unspaced text. Nonword spacing and a space between every character produced longer reading times. In Experiment 2, highlighting was used to create analogous conditions: normal Chinese text, highlighting that marked words, highlighting that yielded nonwords, and highlighting that marked each character. The data from both experiments clearly indicated that words, and not individual characters, are the unit of primary importance in Chinese reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xuejun Bai; Feifei Liang; Hazel I Blythe; Chuanli Zang; Guoli Yan; Simon P Liversedge Interword spacing effects on the acquisition of new vocabulary for readers of Chinese as a second language Journal Article In: Journal of Research in Reading, vol. 36, pp. S4–S17, 2013. @article{Bai2013,
title = {Interword spacing effects on the acquisition of new vocabulary for readers of Chinese as a second language},
author = {Xuejun Bai and Feifei Liang and Hazel I Blythe and Chuanli Zang and Guoli Yan and Simon P Liversedge},
doi = {10.1111/j.1467-9817.2013.01554.x},
year = {2013},
date = {2013-01-01},
journal = {Journal of Research in Reading},
volume = {36},
pages = {S4--S17},
abstract = {We examined whether interword spacing would facilitate acquisition of new vocabulary for second language learners of Chinese. Participants' eye movements were measured as they read new vocabulary embedded in sentences during a learning session and a test session. In the learning session, participants read sentences in traditional unspaced format and half-read sentences with interword spacing. In the test session, all participants read unspaced sentences. Participants in the spaced learning group read the target words more quickly than those in the unspaced learning group. This benefit was maintained at test, indicating that the manipulation enhanced learning of the novel words and was not a transient effect limited to occasions when interword spacing was present in the printed text. The insertion of interword spaces may allow readers to form a more fully specified representation of the novel word, or to strengthen connections between representations of the constituent characters and the multi-character word.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Iske Bakker-Marshall; Atsuko Takashima; Jan-Mathijs Schoffelen; Janet G van Hell; Gabriele Janzen; James M McQueen Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation Journal Article In: Journal of Cognitive Neuroscience, vol. 30, no. 5, pp. 621–633, 2018. @article{BakkerMarshall2018,
title = {Theta-band oscillations in the middle temporal gyrus reflect novel word consolidation},
author = {Iske Bakker-Marshall and Atsuko Takashima and Jan-Mathijs Schoffelen and Janet G van Hell and Gabriele Janzen and James M McQueen},
doi = {10.1162/jocn},
year = {2018},
date = {2018-01-01},
journal = {Journal of Cognitive Neuroscience},
volume = {30},
number = {5},
pages = {621--633},
abstract = {Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Laura Winther Balling; Johannes Kizach Effects of surprisal and locality on Danish sentence processing: An eye-tracking investigation Journal Article In: Journal of Psycholinguistic Research, vol. 46, no. 5, pp. 1119–1136, 2017. @article{Balling2017,
title = {Effects of surprisal and locality on Danish sentence processing: An eye-tracking investigation},
author = {Laura Winther Balling and Johannes Kizach},
doi = {10.1007/s10936-017-9482-2},
year = {2017},
date = {2017-01-01},
journal = {Journal of Psycholinguistic Research},
volume = {46},
number = {5},
pages = {1119--1136},
publisher = {Springer US},
abstract = {An eye-tracking experiment in Danish investigates two dominant accounts of sentence processing: locality-based theories that predict a processing advantage for sentences where the distance between the major syntactic heads is minimized, and the surprisal theory which predicts that processing time increases with big changes in the relative entropy of possible parses, sometimes leading to anti-locality effects. We consider both lexicalised surprisal, expressed in conditional trigram probabilities, and syntactic surprisal expressed in the manipulation of the expectedness of the second NP in Danish constructions with two postverbal NP-objects. An eye-tracking experiment showed a clear advantage for local syntactic relations, with only a marginal effect of lexicalised surprisal and no effect of syntactic surprisal. We conclude that surprisal has a relatively marginal effect, which may be clearest for verbs in verb-final languages, while locality is a robust predictor of sentence processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chiara Banfi; Ferenc Kemény; Melanie Gangl; Gerd Schulte-Körne; Kristina Moll; Karin Landerl Visuo-spatial cueing in children with differential reading and spelling profiles Journal Article In: PLoS ONE, vol. 12, no. 7, pp. e0180358, 2017. @article{Banfi2017,
title = {Visuo-spatial cueing in children with differential reading and spelling profiles},
author = {Chiara Banfi and Ferenc Kemény and Melanie Gangl and Gerd Schulte-Körne and Kristina Moll and Karin Landerl},
doi = {10.1371/journal.pone.0180358},
year = {2017},
date = {2017-01-01},
journal = {PLoS ONE},
volume = {12},
number = {7},
pages = {e0180358},
abstract = {Dyslexia has been claimed to be causally related to deficits in visuo-spatial attention. In particular, inefficient shifting of visual attention during spatial cueing paradigms is assumed to be associated with problems in graphemic parsing during sublexical reading. The current study investigated visuo-spatial attention performance in an exogenous cueing paradigm in a large sample (N = 191) of third and fourth graders with different reading and spelling profiles (controls, isolated reading deficit, isolated spelling deficit, combined deficit in reading and spelling). Once individual variability in reaction times was taken into account by means of z-transformation, a cueing deficit (i.e. no significant difference between valid and invalid trials) was found for children with combined deficits in reading and spelling. However, poor readers without spelling problems showed a cueing effect comparable to controls, but exhibited a particularly strong right-over-left advantage (position effect). Isolated poor spellers showed a significant cueing effect, but no position effect. While we replicated earlier findings of a reduced cueing effect among poor nonword readers (indicating deficits in sublexical processing), we also found a reduced cueing effect among children with particularly poor orthographic spelling (indicating deficits in lexical processing). Thus, earlier claims of a specific association with nonword reading could not be confirmed. Controlling for ADHD-symptoms reported in a parental questionnaire did not impact on the statistical analysis, indicating that cueing deficits are not caused by more general attentional limitations. Between 31 and 48% of participants in the three reading and/or spelling deficit groups as well as 32% of the control group showed reduced spatial cueing. These findings indicate a significant, but moderate association between certain aspects of visuo-spatial attention and subcomponents of written language processing, the causal status of which is yet unclear.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chiara Banfi; Ferenc Kemény; Melanie Gangl; Gerd Schulte-Körne; Kristina Moll; Karin Landerl Visual attention span performance in German-speaking children with differential reading and spelling profiles: No evidence of group differences Journal Article In: PLoS ONE, vol. 13, no. 6, pp. e0198903, 2018. @article{Banfi2018,
title = {Visual attention span performance in German-speaking children with differential reading and spelling profiles: No evidence of group differences},
author = {Chiara Banfi and Ferenc Kemény and Melanie Gangl and Gerd Schulte-Körne and Kristina Moll and Karin Landerl},
doi = {10.1371/journal.pone.0198903},
year = {2018},
date = {2018-01-01},
journal = {PLoS ONE},
volume = {13},
number = {6},
pages = {e0198903},
abstract = {An impairment in the visual attention span (VAS) has been suggested to hamper reading performance of individuals with dyslexia. It is not clear, however, if the very nature of the deficit is visual or verbal and, importantly, if it affects spelling skills as well. The current study investigated VAS by means of forced choice tasks with letters and symbols in a sample of third and fourth graders with age-adequate reading and spelling skills (n = 43), a typical dyslexia profile with combined reading and spelling deficits (n = 26) and isolated spelling deficits (n = 32). The task was devised to contain low phonological short-term memory load and to overcome the limitations of oral reports. Notably, eye movements were monitored to control that children fixated the center of the display when stimuli were presented. Results yielded no main effect of group as well as no group-related interactions, thus showing that children with dyslexia and isolated spelling deficits did not manifest a VAS deficit for letters or symbols once certain methodological aspects were controlled for. The present results could not replicate previous evidence for the involvement of VAS in reading and dyslexia.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Briony Banks; Emma Gowen; Kevin J Munro; Patti Adank Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation Journal Article In: Frontiers in Human Neuroscience, vol. 9, pp. 1–13, 2015. @article{Banks2015,
title = {Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation},
author = {Briony Banks and Emma Gowen and Kevin J Munro and Patti Adank},
doi = {10.3389/fnhum.2015.00422},
year = {2015},
date = {2015-01-01},
journal = {Frontiers in Human Neuroscience},
volume = {9},
pages = {1--13},
abstract = {Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Carol J Y Bao; Cristina Rubino; Alisdair J G Taylor; Jason J S Barton The effects of homonymous hemianopia in experimental studies of alexia Journal Article In: Neuropsychologia, vol. 70, pp. 156–164, 2015. @article{Bao2015a,
title = {The effects of homonymous hemianopia in experimental studies of alexia},
author = {Carol J Y Bao and Cristina Rubino and Alisdair J G Taylor and Jason J S Barton},
doi = {10.1016/j.neuropsychologia.2015.02.026},
year = {2015},
date = {2015-01-01},
journal = {Neuropsychologia},
volume = {70},
pages = {156--164},
publisher = {Elsevier},
abstract = {Pure alexia is characterized by an increased word-length effect in reading. However, this disorder is usually accompanied by right homonymous hemianopia, which itself can cause a mildly increased word-length effect. Some alexic studies have used hemianopic patients with modest word-length effects: it is not clear (a) whether they had pure alexia and (b) if not, whether their results could be explained by the field defect. Our goal was to determine if impairments in visual processing claimed to be related to alexia could be replicated in homonymous hemianopia alone. Twelve healthy subjects performed five experiments used in two prior studies of alexia, under both normal and simulated hemianopic conditions, using a gaze-contingent display generated by an eye-tracker. We replicated the increased word-length effect for reading time with right homonymous hemianopia, and showed a similar effect for a lexical decision task. Simulated hemianopia impaired scanning accuracy for letter or number strings, and slowed object part processing, though the effect of configuration was not greater under hemianopic viewing. Hemianopia impaired the identification of words whose letters appeared and disappeared sequentially on the screen, with better performance on a cumulative presentation in which the letters remained on the screen. The reporting of trigrams was less accurate with hemianopia, though syllabic structure did not influence the results. We conclude that some impairments that have been attributed to the processing defects underlying alexia may actually be due to right homonymous hemianopia. Our results underline the importance of considering the contribution of accompanying low-level visual impairments when studying high-level processes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
M S Baptista; C Bohn; Reinhold Kliegl; Ralf Engbert; Jürgen Kurths Reconstruction of eye movements during blinks Journal Article In: Chaos, vol. 18, no. 1, pp. 1–15, 2008. @article{Baptista2008,
title = {Reconstruction of eye movements during blinks},
author = {M S Baptista and C Bohn and Reinhold Kliegl and Ralf Engbert and Jürgen Kurths},
doi = {10.1063/1.2890843},
year = {2008},
date = {2008-01-01},
journal = {Chaos},
volume = {18},
number = {1},
pages = {1--15},
abstract = {In eye movement research in reading, the amount of data plays a crucial role for the validation of results. A methodological problem for the analysis of eye movements in reading is blinks, when readers close their eyes. Blinking rate increases with increasing reading time, resulting in high data losses, especially for older adults or reading-impaired subjects. We present a method, based on the symbolic sequence dynamics of the eye movements, that reconstructs the horizontal position of the eyes while the reader blinks. The method makes use of the observed fact that the movements of the eyes before closing or after opening contain information about the eye movements during blinks. Test results indicate that our reconstruction method is superior to methods that use simpler interpolation approaches. In addition, analyses of the reconstructed data show no significant deviation from the usual behavior observed in readers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ellen Gurman Bard; Robin L Hill; Mary Ellen Foster; Manabu Arai Tuning accessibility of referring expressions in situated dialogue Journal Article In: Language, Cognition and Neuroscience, vol. 29, no. 8, pp. 928–949, 2014. @article{Bard2014,
title = {Tuning accessibility of referring expressions in situated dialogue},
author = {Ellen Gurman Bard and Robin L Hill and Mary Ellen Foster and Manabu Arai},
doi = {10.1080/23273798.2014.895845},
year = {2014},
date = {2014-01-01},
journal = {Language, Cognition and Neuroscience},
volume = {29},
number = {8},
pages = {928--949},
abstract = {Accessibility theory associates more complex referring expressions with less accessible referents. Felicitous referring expressions should reflect accessibility from the addressee's perspective, which may be difficult for speakers to assess incrementally. If mechanisms shared by perception and production help interlocutors align internal representations, then dyads with different roles and different things to say should profit less from alignment. We examined introductory mentions of on-screen shapes within a joint task for effects of access to the addressee's attention, of players' actions and of speakers' roles. Only speakers' actions affected the form of referring expression and only different role dyads made egocentric use of actions hidden from listeners. Analysis of players' gaze around referring expressions confirmed this pattern; only same role dyads coordinated attention as the accessibility theory predicts. The results are discussed within a model distributing collaborative effort under the cons...},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Adrienne E Barnes; Young Suk Kim Low-skilled adult readers look like typically developing child readers: A comparison of reading skills and eye movement behavior Journal Article In: Reading and Writing, vol. 29, no. 9, pp. 1889–1914, 2016. @article{Barnes2016,
title = {Low-skilled adult readers look like typically developing child readers: A comparison of reading skills and eye movement behavior},
author = {Adrienne E Barnes and Young Suk Kim},
doi = {10.1007/s11145-016-9657-5},
year = {2016},
date = {2016-01-01},
journal = {Reading and Writing},
volume = {29},
number = {9},
pages = {1889--1914},
publisher = {Springer Netherlands},
abstract = {The paper documents 41 European case histories that describe the seismogenic response of crystalline and sedimentary rocks to fluid injection. It is part of an on-going study to identify factors that have a bearing on the seismic hazard associated with fluid injection. The data generally support the view that injection in sedimentary rocks tends to be less seismogenic than in crystalline rocks. In both cases, the presence of faults near the wells that allow pressures to penetrate significant distances vertically and laterally can be expected to increase the risk of producing felt events. All cases of injection into crystalline rocks produce seismic events, albeit usually of non-damaging magnitudes, and all crystalline rock masses were found to be critically stressed, regardless of the strength of their seismogenic responses to injection. Thus, these data suggest that criticality of stress, whilst a necessary condition for producing earthquakes that would disturb (or be felt by) the local population, is not a sufficient condition. The data considered here are not fully consistent with the concept that injection into deeper crystalline formations tends to produce larger magnitude events. The data are too few to evaluate the combined effect of depth and injected fluid volume on the size of the largest events. Injection at sites with low natural seismicity, defined by the expectation that the local peak ground acceleration has less than a 10% chance of exceeding 0.07 g in 50 years, has not produced felt events. Although the database is limited, this suggests that low natural seismicity, corresponding to hazard levels at or below 0.07 g, may be a useful indicator of a low propensity for fluid injection to produce felt or damaging events. However, higher values do not necessarily imply a high propensity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Adrienne E Barnes; Young Suk Kim; Elizabeth L Tighe; Christian Vorstius Readers in adult basic education: Component skills, eye movements, and fluency Journal Article In: Journal of Learning Disabilities, vol. 50, no. 2, pp. 180–194, 2017. @article{Barnes2017,
title = {Readers in adult basic education: Component skills, eye movements, and fluency},
author = {Adrienne E Barnes and Young Suk Kim and Elizabeth L Tighe and Christian Vorstius},
doi = {10.1177/0022219415609187},
year = {2017},
date = {2017-01-01},
journal = {Journal of Learning Disabilities},
volume = {50},
number = {2},
pages = {180--194},
abstract = {The present study explored the reading skills of a sample of 48 adults enrolled in a basic education program in northern Florida, United States. Previous research has reported on reading component skills for students in adult education settings, but little is known about eye movement patterns or their relation to reading skills for this population. In this study, reading component skills including decoding, language comprehension, and reading fluency are reported, as are eye movement variables for connected-text oral reading. Eye movement comparisons between individuals with higher and lower oral reading fluency revealed within- and between-subject effects for word frequency and word length as well as group and word frequency interactions. Bivariate correlations indicated strong relations between component skills of reading, eye movement measures, and both the Test of Adult Basic Education (Reading subtest) and the Woodcock-Johnson III Diagnostic Reading Battery Passage Comprehension assessments. Regression analyses revealed the utility of decoding, language comprehension, and lexical activation time for predicting achievement on both the Woodcock-Johnson III Passage Comprehension and the Test of Adult Basic Education Reading Comprehension.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wesley R Barnhart; Samuel Rivera; Christopher W Robinson Effects of linguistic labels on visual attention in children and young adults Journal Article In: Frontiers in Psychology, vol. 9, pp. 1–11, 2018. @article{Barnhart2018,
title = {Effects of linguistic labels on visual attention in children and young adults},
author = {Wesley R Barnhart and Samuel Rivera and Christopher W Robinson},
doi = {10.3389/fpsyg.2018.00358},
year = {2018},
date = {2018-01-01},
journal = {Frontiers in Psychology},
volume = {9},
pages = {1--11},
abstract = {Effects of linguistic labels on learning outcomes are well-established; however, developmental research examining possible mechanisms underlying these effects has provided mixed results. We used a novel paradigm where 8-year-olds and adults were simultaneously trained on three sparse categories (categories with many irrelevant or unique features and a single rule defining feature). Category members were either associated with the same label, different labels, or no labels (silent baseline). Similar to infant paradigms, participants passively viewed individual exemplars and we examined fixations to category relevant features across training. While it is well established that adults can optimize their attention in forced-choice categorization tasks without linguistic input, the present findings provide support for label induced attention optimization: simply hearing the same label associated with different exemplars was associated with increased attention to category relevant features over time, and participants continued to focus on these features on a subsequent recognition task. Participants also viewed images longer and made more fixations when images were paired with unique labels. These findings provide support for the claim that labels may facilitate categorization by directing attention to category relevant features.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wesley R Barnhart; Samuel Rivera; Christopher W Robinson Different patterns of modality dominance across development Journal Article In: Acta Psychologica, vol. 182, pp. 154–165, 2018. @article{Barnhart2018a,
title = {Different patterns of modality dominance across development},
author = {Wesley R Barnhart and Samuel Rivera and Christopher W Robinson},
doi = {10.1016/j.actpsy.2017.11.017},
year = {2018},
date = {2018-01-01},
journal = {Acta Psychologica},
volume = {182},
pages = {154--165},
publisher = {Elsevier},
abstract = {The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dale J Barr; Boaz Keysar Anchoring comprehension in linguistic precedents Journal Article In: Journal of Memory and Language, vol. 46, no. 2, pp. 391–418, 2002. @article{Barr2002,
title = {Anchoring comprehension in linguistic precedents},
author = {Dale J Barr and Boaz Keysar},
doi = {10.1006/jmla.2001.2815},
year = {2002},
date = {2002-01-01},
journal = {Journal of Memory and Language},
volume = {46},
number = {2},
pages = {391--418},
abstract = {Past research has shown that when speakers refer to the same referent multiple times, they tend to standardize their descriptions by establishing linguistic precedents. In three experiments, we show that listeners reduce uncertainty in comprehension by taking advantage of these precedents. We tracked listeners' eye movements in a referential communication task and found that listeners identified referents more quickly when specific precedents existed than when there were none. Furthermore, we found that listeners expected speakers to adhere to precedents even in contexts where it would lead to referential overspecification. Finally, we provide evidence that the benefits of linguistic precedents are independent of mutual knowledge - listeners were not more likely to benefit from precedents when they were mutually known than when they were not. We conclude that listeners use precedents simply because they are available, not because they are mutually known.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dale J Barr Pragmatic expectations and linguistic evidence: Listeners anticipate but do not integrate common ground Journal Article In: Cognition, vol. 109, no. 1, pp. 18–40, 2008. @article{Barr2008,
title = {Pragmatic expectations and linguistic evidence: Listeners anticipate but do not integrate common ground},
author = {Dale J Barr},
doi = {10.1016/j.cognition.2008.07.005},
year = {2008},
date = {2008-01-01},
journal = {Cognition},
volume = {109},
number = {1},
pages = {18--40},
publisher = {Elsevier B.V.},
abstract = {When listeners search for the referent of a speaker's expression, they experience interference from privileged knowledge, knowledge outside of their 'common ground' with the speaker. Evidence is presented that this interference reflects limitations in lexical processing. In three experiments, listeners' eye movements were monitored as they searched for the target of a speaker's referring expression in a display that also contained a phonological competitor (e.g., bucket/buckle). Listeners anticipated that the speaker would refer to something in common ground, but they did not experience less interference from a competitor in privileged ground than from a matched competitor in common ground. In contrast, interference from the competitor was eliminated when it was ruled out by a semantic constraint. These findings support a view of comprehension as relying on multiple systems with distinct access to information and present a challenge for constraint-based views of common ground.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dale J Barr; Laura Jackson; Isobel Phillips Using a voice to put a name to a face: The psycholinguistics of proper name comprehension Journal Article In: Journal of Experimental Psychology: General, vol. 143, no. 1, pp. 404–413, 2014. @article{Barr2014,
title = {Using a voice to put a name to a face: The psycholinguistics of proper name comprehension},
author = {Dale J Barr and Laura Jackson and Isobel Phillips},
doi = {10.1037/a0031813},
year = {2014},
date = {2014-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {143},
number = {1},
pages = {404--413},
abstract = {We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Brian Bartek; Richard L Lewis; Shravan Vasishth; Mason R Smith In search of on-line locality effects in sentence comprehension Journal Article In: Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 37, no. 5, pp. 1178–1198, 2011. @article{Bartek2011,
title = {In search of on-line locality effects in sentence comprehension},
author = {Brian Bartek and Richard L Lewis and Shravan Vasishth and Mason R Smith},
doi = {10.1037/a0024194},
year = {2011},
date = {2011-01-01},
journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition},
volume = {37},
number = {5},
pages = {1178--1198},
abstract = {Many comprehension theories assert that increasing the distance between elements participating in a linguistic relation (e.g., a verb and a noun phrase argument) increases the difficulty of establishing that relation during on-line comprehension. Such locality effects are expected to increase reading times and are thought to reveal properties and limitations of the short-term memory system that supports comprehension. Despite their theoretical importance and putative ubiquity, however, evidence for on-line locality effects is quite narrow linguistically and methodologically: It is restricted almost exclusively to self-paced reading of complex structures involving a particular class of syntactic relation. We present 4 experiments (2 self-paced reading and 2 eyetracking experiments) that demonstrate locality effects in the course of establishing subject-verb dependencies; locality effects are seen even in materials that can be read quickly and easily. These locality effects are observable in the earliest possible eye-movement measures and are of much shorter duration than previously reported effects. To account for the observed empirical patterns, we outline a processing model of the adaptive control of button pressing and eye movements. This model makes progress toward the goal of eliminating linking assumptions between memory constructs and empirical measures in favor of explicit theories of the coordinated control of motor responses and parsing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
James Bartolotti; Viorica Marian Learning and processing of orthography-to-phonology mappings in a third language Journal Article In: International Journal of Multilingualism, pp. 1–21, 2018. @article{Bartolotti2018,
title = {Learning and processing of orthography-to-phonology mappings in a third language},
author = {James Bartolotti and Viorica Marian},
doi = {10.1080/14790718.2017.1423073},
year = {2018},
date = {2018-01-01},
journal = {International Journal of Multilingualism},
pages = {1--21},
publisher = {Taylor \& Francis},
abstract = {Bilinguals' two languages are both active in parallel, and controlling co-activation is one of bilinguals' principal challenges. Trilingualism multiplies this challenge. To investigate how third language (L3) learners manage interference between languages, Spanish-English bilinguals were taught an artificial language that conflicted with English and Spanish letter-sound mappings. Interference from existing languages was higher for L3 words that were similar to L1 or L2 words, but this interference decreased over time. After mastering the L3, learners continued to experience competition from their other languages. Notably, spoken L3 words activated orthography in all three languages, causing participants to experience cross-linguistic orthographic competition in the absence of phonological overlap. Results indicate that L3 learners are able to control between-language interference from the L1 and L2. We conclude that while the transition from two languages to three presents additional challenges, bilinguals are able to successfully manage competition between languages in this new context.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
James Bartolotti; Scott R Schroeder; Sayuri Hayakawa; Sirada Rochanavibhata; Peiyao Chen; Viorica Marian Listening to speech and non-speech sounds activates phonological and semantic knowledge differently Journal Article In: Quarterly Journal of Experimental Psychology, vol. 73, no. 8, pp. 1135–1149, 2020. @article{Bartolotti2020,
title = {Listening to speech and non-speech sounds activates phonological and semantic knowledge differently},
author = {James Bartolotti and Scott R Schroeder and Sayuri Hayakawa and Sirada Rochanavibhata and Peiyao Chen and Viorica Marian},
doi = {10.1177/1747021820923944},
year = {2020},
date = {2020-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {73},
number = {8},
pages = {1135--1149},
abstract = {How does the mind process linguistic and non-linguistic sounds? The current study assessed the different ways that spoken words (e.g., “dog”) and characteristic sounds (e.g., <barking>) provide access to phonological information (e.g., word-form of “dog”) and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mahsa Barzy; Jo Black; David Williams; Heather J Ferguson Autistic adults anticipate and integrate meaning based on the speaker's voice: Evidence from eye-tracking and event-related potentials Journal Article In: Journal of Experimental Psychology: General, vol. 149, no. 6, pp. 1097–1115, 2019. @article{Barzy2019,
title = {Autistic adults anticipate and integrate meaning based on the speaker's voice: Evidence from eye-tracking and event-related potentials},
author = {Mahsa Barzy and Jo Black and David Williams and Heather J Ferguson},
doi = {10.1037/xge0000705},
year = {2019},
date = {2019-01-01},
journal = {Journal of Experimental Psychology: General},
volume = {149},
number = {6},
pages = {1097--1115},
abstract = {Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults and tested their timecourse in 2 preregistered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences where the speaker's voice and message were either consistent or inconsistent (e.g., "When we go shopping, I usually look for my favorite wine," spoken by an adult or a child), and concurrently viewed visual scenes including consistent and inconsistent objects (e.g., wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias toward the voice-consistent object, well before hearing the disambiguating word, showing that autistic adults rapidly use the speaker's voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group compared to the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded ERPs to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word. A control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable to that elicited by anomalous sentences, in both groups. Overall, contrary to research that has characterized autism in terms of a local processing bias and pragmatic dysfunction, autistic people were unimpaired at integrating multiple modalities of linguistic information and were comparably sensitive to speaker-meaning inconsistency effects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mahsa Barzy; Ruth Filik; David Williams; Heather J Ferguson Emotional processing of ironic versus literal criticism in autistic and nonautistic adults: Evidence from eye-tracking Journal Article In: Autism Research, vol. 13, no. 4, pp. 563–578, 2020. @article{Barzy2020,
title = {Emotional processing of ironic versus literal criticism in autistic and nonautistic adults: Evidence from eye-tracking},
author = {Mahsa Barzy and Ruth Filik and David Williams and Heather J Ferguson},
doi = {10.1002/aur.2272},
year = {2020},
date = {2020-01-01},
journal = {Autism Research},
volume = {13},
number = {4},
pages = {563--578},
abstract = {Typically developing adults are able to keep track of story characters' emotional states online while reading. Filik et al. showed that initially, participants expected the victim to be more hurt by ironic comments than literal ones, but later considered them less hurtful; ironic comments were regarded as more amusing. We examined these processes in autistic adults, since previous research has demonstrated socio-emotional difficulties among autistic people, which may lead to problems processing irony and its related emotional processes despite an intact ability to integrate language in context. We recorded eye movements from autistic and nonautistic adults while they read narratives in which a character (the victim) was either criticized in an ironic or a literal manner by another character (the protagonist). A target sentence then either described the victim as feeling hurt/amused by the comment, or the protagonist as having intended to hurt/amuse the victim by making the comment. Results from the nonautistic adults broadly replicated the key findings from Filik et al., supporting the two-stage account. Importantly, the autistic adults did not show comparable two-stage processing of ironic language; they did not differentiate between the emotional responses for victims or protagonists following ironic versus literal criticism. These findings suggest that autistic people experience a specific difficulty taking into account other people's communicative intentions (i.e., inferring their mental state) to appropriately anticipate emotional responses to an ironic comment. We discuss how these difficulties might link to atypical socio-emotional processing in autism, and the ability to maintain successful real-life social interactions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Vanessa Baudiffier; David Caplan; Daniel Gaonac'h; David Chesnet The effect of noun animacy on the processing of unambiguous sentences: Evidence from French relative clauses Journal Article In: Quarterly Journal of Experimental Psychology, vol. 64, no. 10, pp. 1896–1905, 2011. @article{Baudiffier2011,
title = {The effect of noun animacy on the processing of unambiguous sentences: Evidence from French relative clauses},
author = {Vanessa Baudiffier and David Caplan and Daniel Gaonac'h and David Chesnet},
doi = {10.1080/17470218.2011.608851},
year = {2011},
date = {2011-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {64},
number = {10},
pages = {1896--1905},
abstract = {Two experiments, one using self-paced reading and one using eye tracking, investigated the influence of noun animacy on the processing of subject relative (SR) clauses, object relative (OR) clauses, and object relative clauses with stylistic inversion (OR-SI) in French. Each sentence type was presented in two versions: either with an animate relative clause (RC) subject and an inanimate object (AS/IO), or with an inanimate RC subject and an animate object (IS/AO). There was an interaction between the RC structure and noun animacy. The advantage of SR sentences over OR and OR-SI sentences disappeared in AS/IO sentences. The interaction between animacy and structure occurred in self-paced reading times and in total fixation times on the RCs, but not in first-pass reading times. The results are consistent with a late interaction between animacy and structural processing during parsing and provide data relevant to several models of parsing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mareike Bayer; Katja Ruthmann; Annekathrin Schacht The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses Journal Article In: Social Cognitive and Affective Neuroscience, vol. 12, no. 9, pp. 1470–1479, 2017. @article{Bayer2017b,
title = {The impact of personal relevance on emotion processing: Evidence from event-related potentials and pupillary responses},
author = {Mareike Bayer and Katja Ruthmann and Annekathrin Schacht},
doi = {10.1093/scan/nsx075},
year = {2017},
date = {2017-01-01},
journal = {Social Cognitive and Affective Neuroscience},
volume = {12},
number = {9},
pages = {1470--1479},
abstract = {Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants' significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patrice Speeter Beddor; Kevin B McGowan; Julie E Boland; Andries W Coetzee; Anthony Brasher The time course of perception of coarticulation Journal Article In: Journal of the Acoustical Society of America, vol. 133, no. 4, pp. 2350–2366, 2013. @article{Beddor2013,
title = {The time course of perception of coarticulation},
author = {Patrice Speeter Beddor and Kevin B McGowan and Julie E Boland and Andries W Coetzee and Anthony Brasher},
doi = {10.1121/1.4794366},
year = {2013},
date = {2013-01-01},
journal = {Journal of the Acoustical Society of America},
volume = {133},
number = {4},
pages = {2350--2366},
abstract = {The perception of coarticulated speech as it unfolds over time was investigated by monitoring eye movements of participants as they listened to words with oral vowels or with late or early onset of anticipatory vowel nasalization. When listeners heard [CVNC] and had visual choices of images of CVNC (e.g., send) and CVC (said) words, they fixated more quickly and more often on the CVNC image when onset of nasalization began early in the vowel compared to when the coarticulatory information occurred later. Moreover, when a standard eye movement programming delay is factored in, fixations on the CVNC image began to occur before listeners heard the nasal consonant. Listeners' attention to coarticulatory cues for velum lowering was selective in two respects: (a) listeners assigned greater perceptual weight to coarticulatory information in phonetic contexts in which [V] but not N is an especially robust property, and (b) individual listeners differed in their perceptual weights. Overall, the time course of perception of velum lowering in American English indicates that the dynamics of perception parallel the dynamics of the gestural information encoded in the acoustic signal. In real-time processing, listeners closely track unfolding coarticulatory information in ways that speed lexical activation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patrice Speeter Beddor; Kevin B McGowan; Andries W Coetzee; Will Styler; Julie E Boland The time course of individuals' perception of coarticulatory information is linked to their production: Implications for sound change Journal Article In: Language, vol. 94, no. 4, pp. 931–968, 2018. @article{Beddor2018,
title = {The time course of individuals' perception of coarticulatory information is linked to their production: Implications for sound change},
author = {Patrice Speeter Beddor and Kevin B McGowan and Andries W Coetzee and Will Styler and Julie E Boland},
doi = {10.1353/lan.2018.0051},
year = {2018},
date = {2018-01-01},
journal = {Language},
volume = {94},
number = {4},
pages = {931--968},
abstract = {Understanding the relation between speech production and perception is foundational to phonetic theory, and is similarly central to theories of the phonetics of sound change. For sound changes that are arguably perceptually motivated, it is particularly important to establish that an individual listener's selective attention (for example, to the redundant information afforded by coarticulation) is reflected in that individual's own productions. This study reports the results of a pair of experiments designed to test the hypothesis that individuals who produce more consistent and extensive coarticulation will attend to that information especially closely in perception. The production experiment used nasal airflow to measure the time course of participants' coarticulatory vowel nasalization; the perception experiment used an eye-tracking paradigm to measure the time course of those same participants' attention to coarticulated nasality. Results showed that a speaker's coarticulatory patterns predicted, to some degree, that individual's perception, thereby supporting the hypothesis: participants who produced earlier onset of coarticulatory nasalization were, as listeners, more efficient users of nasality as that information unfolded over time. Thus, an individual's perception of coarticulated speech is made public through their productions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nathalie N Bélanger; Timothy J Slattery; Rachel I Mayberry; Keith Rayner Skilled deaf readers have an enhanced perceptual span in reading Journal Article In: Psychological Science, vol. 23, no. 7, pp. 816–823, 2012. @article{Belanger2012,
title = {Skilled deaf readers have an enhanced perceptual span in reading},
author = {Nathalie N Bélanger and Timothy J Slattery and Rachel I Mayberry and Keith Rayner},
doi = {10.1177/0956797611435130},
year = {2012},
date = {2012-01-01},
journal = {Psychological Science},
volume = {23},
number = {7},
pages = {816--823},
abstract = {Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nathalie N Bélanger; Rachel I Mayberry; Keith Rayner Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers Journal Article In: Quarterly Journal of Experimental Psychology, vol. 66, no. 11, pp. 2237–2252, 2013. @article{Belanger2013a,
title = {Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers},
author = {Nathalie N Bélanger and Rachel I Mayberry and Keith Rayner},
doi = {10.1080/17470218.2013.780085},
year = {2013},
date = {2013-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {66},
number = {11},
pages = {2237--2252},
abstract = {Many deaf individuals do not develop the high-level reading skills that will allow them to fully take part into society. To attempt to explain this widespread difficulty in the deaf population, much research has honed in on the use of phonological codes during reading. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers, though not well supported, still lingers in the literature. We investigated skilled and less-skilled adult deaf readers' processing of orthographic and phonological codes in parafoveal vision during reading by monitoring their eye movements and using the boundary paradigm. Orthographic preview benefits were found in early measures of reading for skilled hearing, skilled deaf, and less-skilled deaf readers, but only skilled hearing readers processed phonological codes in parafoveal vision. Crucially, skilled and less-skilled deaf readers showed a very similar pattern of preview benefits during reading. These results support the notion that reading difficulties in deaf adults are not linked to their failure to activate phonological codes during reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nathalie N Bélanger; Keith Rayner Frequency and predictability effects in eye fixations for skilled and less-skilled deaf readers Journal Article In: Visual Cognition, vol. 21, no. 4, pp. 477–497, 2013. @article{Belanger2013b,
title = {Frequency and predictability effects in eye fixations for skilled and less-skilled deaf readers},
author = {Nathalie N Bélanger and Keith Rayner},
doi = {10.1080/13506285.2013.804016},
year = {2013},
date = {2013-01-01},
journal = {Visual Cognition},
volume = {21},
number = {4},
pages = {477--497},
abstract = {The illiteracy rate in the deaf population has been alarmingly high for several decades, despite the fact that deaf children go through the standard stages of schooling. Much research addressing this issue has focused on word-level processes, but until recently little research has focused on sentence-level processes. Previous research (Fischler, 1985) investigated word integration within context in college-level deaf and hearing readers in a lexical decision task following incomplete sentences with targets that were congruous or incongruous relative to the preceding context; it was found that deaf readers, as a group, were more dependent on contextual information than their hearing counterparts. The present experiment extended Fischler's results and investigated the relationship between frequency, predictability, and reading skill in skilled hearing, skilled deaf, and less-skilled deaf readers. Results suggest that only less-skilled deaf readers, and not all deaf readers, rely more on contextual cues to boost word processing. Additionally, early effects of frequency and predictability were found for all three groups of readers, without any evidence for an interaction between frequency and predictability.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Nathalie N Bélanger; Michelle Lee; Elizabeth R Schotter Young skilled deaf readers have an enhanced perceptual span in reading Journal Article In: Quarterly Journal of Experimental Psychology, vol. 71, no. 1, pp. 291–301, 2018. @article{Belanger2018,
title = {Young skilled deaf readers have an enhanced perceptual span in reading},
author = {Nathalie N Bélanger and Michelle Lee and Elizabeth R Schotter},
doi = {10.1080/17470218.2017.1324498},
year = {2018},
date = {2018-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {71},
number = {1},
pages = {291--301},
abstract = {Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Eva Belke; Antje S Meyer; Markus F Damian Refractory effects in picture naming as assessed in a semantic blocking paradigm Journal Article In: Quarterly Journal of Experimental Psychology, vol. 58, no. 4, pp. 667–692, 2005. @article{Belke2005,
title = {Refractory effects in picture naming as assessed in a semantic blocking paradigm},
author = {Eva Belke and Antje S Meyer and Markus F Damian},
doi = {10.1080/02724980443000142},
year = {2005},
date = {2005-01-01},
journal = {Quarterly Journal of Experimental Psychology},
volume = {58},
number = {4},
pages = {667--692},
abstract = {In the cyclic semantic blocking paradigm participants repeatedly name sets of objects with semantically related names (homogeneous sets) or unrelated names (heterogeneous sets). The naming latencies are typically longer in related than in unrelated sets. In Experiment 1 we replicated this semantic blocking effect and demonstrated that the effect only arose after all objects of a set had been shown and named once. In Experiment 2, the objects of a set were presented simultaneously (instead of on successive trials). Evidence for semantic blocking was found in the naming latencies and in the gaze durations for the objects, which were longer in homogeneous than in heterogeneous sets. For the gaze-to-speech lag between the offset of gaze on an object and the onset of the articulation of its name, a repetition priming effect was obtained but no blocking effect. Experiment 3 showed that the blocking effect for speech onset latencies generalized to new, previously unnamed lexical items. We propose that the blocking effect is due to refractory behaviour in the semantic system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Eva Belke Visual determinants of preferred adjective order Journal Article In: Visual Cognition, vol. 14, no. 3, pp. 261–294, 2006. @article{Belke2006,
title = {Visual determinants of preferred adjective order},
author = {Eva Belke},
doi = {10.1080/13506280500260484},
year = {2006},
date = {2006-01-01},
journal = {Visual Cognition},
volume = {14},
number = {3},
pages = {261--294},
abstract = {In referential communication, speakers refer to a target object among a set of context objects. The NPs they produce are characterized by a canonical order of prenominal adjectives: The dimensions that are easiest to detect (e.g., absolute dimensions) are commonly placed closer to the noun than other dimensions (e.g., relative dimensions). This stands in stark contrast to the assumption that language production is an incremental process. According to this incremental-procedural view, the dimensions that are easiest to detect should be named first. In the present paper, an alternative account of the canonical order effect is presented, suggesting that the prenominal adjective ordering rules are a result of the perceptual analysis processes underlying the evaluation of distinctive target features. Analyses of speakers' eye movements during referential communication (Experiment 1) and analyses of utterance formats produced under time pressure (Experiment 2) provide evidence for the suggested perceptual classification account.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Stéphanie Bellocchi; Delphine Massendari; Jonathan Grainger; Stéphanie Ducrot Effects of inter-character spacing on saccade programming in beginning readers and dyslexics Journal Article In: Child Neuropsychology, vol. 25, no. 4, pp. 482–506, 2019. @article{Bellocchi2019,
title = {Effects of inter-character spacing on saccade programming in beginning readers and dyslexics},
author = {Stéphanie Bellocchi and Delphine Massendari and Jonathan Grainger and Stéphanie Ducrot},
doi = {10.1080/09297049.2018.1504907},
year = {2019},
date = {2019-01-01},
journal = {Child Neuropsychology},
volume = {25},
number = {4},
pages = {482--506},
publisher = {Routledge},
abstract = {The present study investigated the impact of inter-character spacing on saccade programming in beginning readers and dyslexic children. In two experiments, eye movements were recorded while dyslexic children, reading-age controls, and chronological-age controls performed an oculomotor lateralized bisection task on words and strings of hashes presented either with default inter-character spacing or with extra spacing between the characters. The results of Experiment 1 showed that (1) only proficient readers had already developed highly automatized procedures for programming both left- and rightward saccades, depending on the discreteness of the stimuli, and (2) children of all groups were disrupted (i.e., had trouble landing close to the beginning of the stimuli) by extra spacing between the characters of the stimuli, particularly for stimuli presented in the left visual field. Experiment 2 was designed to disentangle the roles of inter-character spacing and spatial width. Stimuli were made the same physical length in the default and extra-spacing conditions by having more characters in the default spacing condition. Our results showed that inter-letter spacing still influenced saccade programming when controlling for spatial width, thus confirming the detrimental effect of extra spacing on saccade programming. We conclude that the beneficial effect of increased inter-letter spacing on reading is better explained in terms of decreased visual crowding than improved saccade targeting.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}