Li Zhang; Jie Ren; Liang Xu; Xue Jun Qiu; Jost B Jonas. Visual comfort and fatigue when watching three-dimensional displays as measured by eye movement analysis. Journal Article. British Journal of Ophthalmology, 97(7), pp. 941–942, 2013. doi:10.1136/bjophthalmol-2012-303001.
With the growth in three-dimensional viewing of movies, we assessed whether visual fatigue or alertness differed between three-dimensional (3D) and two-dimensional (2D) viewing of movies. We used a camera-based analysis of eye movements to measure blinking, fixation and saccades as surrogates of visual fatigue.
Li Zhang; Ya Qin Zhang; Jing Shang Zhang; Liang Xu; Jost B Jonas. Visual fatigue and discomfort after stereoscopic display viewing. Journal Article. Acta Ophthalmologica, 91(2), pp. 149–153, 2013. doi:10.1111/aos.12006.
Purpose: Different types of stereoscopic video displays have recently been introduced. We measured and compared visual fatigue and visual discomfort induced by viewing two different stereoscopic displays that used either the pattern retarder-spatial domain technology with linearly polarized three-dimensional technology or the circularly polarized three-dimensional technology with shutter glasses. Methods: In this observational cross-over study performed on two consecutive days, a video was watched by 30 subjects (age: 20–30 years). Half of the participants watched the screen with a pattern retarder three-dimensional display on the first day and a shutter glasses three-dimensional display on the second day, and vice versa. The study participants underwent a standardized interview on visual discomfort and fatigue, and a series of functional examinations prior to, and shortly after, viewing the movie. Additionally, a subjective score for visual fatigue was given. Results: Accommodative magnitude (right eye: p < 0.001; left eye: p = 0.01), accommodative facility (p = 0.008), near-point convergence break-up point (p = 0.007), near-point convergence recovery point (p = 0.001), and negative (p = 0.03) and positive (p = 0.001) relative accommodation were significantly smaller, and the visual fatigue score was significantly higher (1.65 ± 1.18 versus 1.20 ± 1.03; p = 0.02), after viewing the shutter glasses three-dimensional display than after viewing the pattern retarder three-dimensional display. Conclusions: Stereoscopic viewing using pattern retarder (polarized) three-dimensional displays, as compared with stereoscopic viewing using shutter glasses three-dimensional displays, resulted in significantly less visual fatigue as assessed subjectively, parallel to significantly better values of accommodation and convergence as measured objectively.
Ruyuan Zhang; Oh-Sang Kwon; Duje Tadin. Illusory movement of stationary stimuli in the visual periphery: Evidence for a strong centrifugal prior in motion processing. Journal Article. Journal of Neuroscience, 33(10), pp. 4415–4423, 2013. doi:10.1523/JNEUROSCI.4744-12.2013.
Visual input is remarkably diverse. Certain sensory inputs are more probable than others, mirroring statistical regularities of the visual environment. The visual system exploits many of these regularities, resulting, on average, in better inferences about visual stimuli. However, by incorporating prior knowledge into perceptual decisions, visual processing can also result in perceptions that do not match sensory inputs. Such perceptual biases can often reveal unique insights into underlying mechanisms and computations. For example, a prior assumption that objects move slowly can explain a wide range of motion phenomena. The prior on slow speed is usually rationalized by its match with visual input, which typically includes stationary or slow-moving objects. However, this only holds for foveal and parafoveal stimulation. The visual periphery tends to be exposed to faster motions, which are biased toward centrifugal directions. Thus, if prior assumptions derive from experience, peripheral motion processing should be biased toward centrifugal speeds. Here, in experiments with human participants, we support this hypothesis and report a novel visual illusion in which stationary objects in the visual periphery are perceived as moving centrifugally, while objects moving as fast as 7°/s toward the fovea are perceived as stationary. These behavioral results were quantitatively explained by a Bayesian observer with a strong centrifugal prior. This prior is consistent with both the prevalence of centrifugal motions in the visual periphery and a centrifugal bias of direction tuning in cortical area MT, supporting the notion that visual processing mirrors its input statistics.
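A minimal sketch of the Bayesian-observer idea this abstract describes, assuming a Gaussian sensory likelihood and a Gaussian prior over signed radial speed (positive = centrifugal); the parameter values are illustrative, not taken from the paper.

```python
# Bayesian observer with a centrifugal prior: the percept is the
# posterior-mean combination of the measurement and the prior.
import numpy as np

def perceived_speed(v_true, sigma_like=3.0, prior_mean=7.0, prior_sd=4.0):
    """Posterior-mean speed for a Gaussian likelihood centred on the
    true speed and a Gaussian prior centred on a centrifugal speed."""
    w_like = 1.0 / sigma_like**2      # precision of the sensory measurement
    w_prior = 1.0 / prior_sd**2       # precision of the prior
    return (w_like * v_true + w_prior * prior_mean) / (w_like + w_prior)

# A stationary peripheral stimulus (0 deg/s) is perceived as drifting
# centrifugally, while a fast foveopetal stimulus (-7 deg/s) is perceived
# as nearly stationary, qualitatively matching the reported illusion.
print(perceived_speed(0.0))    # > 0: illusory centrifugal motion
print(perceived_speed(-7.0))   # near 0: foveopetal motion looks static
```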
Luming Zhang; Yue Gao; Rongrong Ji; Yingjie Xia; Qionghai Dai; Xuelong Li. Actively learning human gaze shifting paths for semantics-aware photo cropping. Journal Article. IEEE Transactions on Image Processing, 23(5), pp. 2235–2245, 2014. doi:10.1109/TIP.2014.2311658.
Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from three challenges. First, the deemphasized role of semantic content, which is many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in existing models; in contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging input from multiple users; experience from multiple users is particularly critical in cropping, as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving its semantically important regions. We first project the local features (graphlets in this paper) onto a semantic space constructed from the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selection process can be interpreted as a path that simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos marked as aesthetically pleasing by multiple users. The learned prior enforces the active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and is thus more indicative of photo aesthetics than conventional saliency maps; and 2) the cropped photos produced by our approach outperform those of its competitors in both qualitative and quantitative comparisons.
Jiedong Zhang; Jia Liu; Yaoda Xu. Neural decoding reveals impaired face configural processing in the right fusiform face area of individuals with developmental prosopagnosia. Journal Article. Journal of Neuroscience, 35(4), pp. 1539–1548, 2015. doi:10.1523/JNEUROSCI.2646-14.2015.
Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that the right FFA is perhaps largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its decoding of the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for intact versus scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence of impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP.
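A schematic of the kind of multivoxel pattern analysis named in this abstract, assuming one voxel pattern per trial for two conditions (intact vs. scrambled configurations); the data are synthetic and the paper's preprocessing and classifier details may differ.

```python
# Cross-validated decoding of condition labels from voxel patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 120
X = rng.normal(size=(n_trials, n_voxels))   # trial-by-voxel patterns
y = np.repeat([0, 1], n_trials // 2)        # 0 = intact, 1 = scrambled
X[y == 1, :10] += 0.5                       # weak condition signal

# Above-chance accuracy indicates the region carries distinct response
# patterns for the two configurations; chance-level accuracy mirrors
# the impairment reported for the right FFA in DP.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```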
Luming Zhang; Meng Wang; Liqiang Nie; Liang Hong; Yong Rui; Qi Tian. Retargeting semantically-rich photos. Journal Article. IEEE Transactions on Multimedia, 17(9), pp. 1538–1549, 2015.
Semantically-rich photos contain a rich variety of semantic objects (e.g., pedestrians and bicycles). Retargeting these photos is a challenging task since each semantic object has fixed geometric characteristics. Shrinking these objects simultaneously during retargeting is prone to distortion. In this paper, we propose to retarget semantically-rich photos by detecting photo semantics from image tags, which are predicted by a multi-label SVM. The key technique is a generative model termed latent stability discovery (LSD). It can robustly localize various semantic objects in a photo by making use of the predicted noisy image tags. Based on LSD, a feature fusion algorithm is proposed to detect salient regions at both the low level and the high level. These salient regions are linked into a path sequentially to simulate human visual perception. Finally, we learn the prior distribution of such paths from aesthetically pleasing training photos. The prior enforces the path of a retargeted photo to be maximally similar to those from the training photos. In the experiment, we collect 217 1600x1200 photos, each containing over seven salient objects. Comprehensive user studies demonstrate the competitiveness of our method.
Wenjia Zhang; Nan Li; Xiaoyue Wang; Suiping Wang. Integration of sentence-level semantic information in parafovea: Evidence from the RSVP-flanker paradigm. Journal Article. PLoS ONE, 10(9), pp. e0139016, 2015. doi:10.1371/journal.pone.0139016.
During text reading, the parafoveal word is usually presented between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that an incongruent parafoveal word, presented as the right flanker, elicits a more negative N400 compared with a congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and foveal words presented in the critical triad, it is still unclear whether such integration happens at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or a semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading.
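A minimal sketch of the N400 measure implied here: the mean ERP amplitude in a post-stimulus window, compared between congruent and incongruent trials. The epochs are synthetic, and the sampling rate and 300–500 ms window are assumptions; the study's exact parameters may differ.

```python
# Mean amplitude in an N400 window, one value per trial.
import numpy as np

def n400_amplitude(epochs, sfreq=500, t0=-0.2, window=(0.3, 0.5)):
    """epochs: (n_trials, n_samples) voltage at one centro-parietal
    channel, time-locked to the critical triad; t0 = epoch start (s)."""
    start = int((window[0] - t0) * sfreq)
    stop = int((window[1] - t0) * sfreq)
    return epochs[:, start:stop].mean(axis=1)

rng = np.random.default_rng(1)
congruent = rng.normal(0.0, 2.0, size=(40, 600))       # 1.2 s epochs
incongruent = rng.normal(-1.0, 2.0, size=(40, 600))    # more negative N400
effect = n400_amplitude(incongruent).mean() - n400_amplitude(congruent).mean()
print(f"N400 effect (incongruent - congruent): {effect:.2f} uV")
```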
Youming Zhang; Jorma Laurikkala; Martti Juhola. Biometric verification with eye movements: Results from a long-term recording series. Journal Article. IET Biometrics, 4(3), pp. 162–168, 2015. doi:10.1049/iet-bmt.2014.0044.
The authors present their results on using saccadic eye movements for biometric user verification. The method can be applied to computers or other devices in which an eye movement camera system can be included. Thus far, this idea has been little researched. Having extensively studied eye movement signals for medical applications, the authors saw an opportunity for the biometric use of saccades. Saccades are the fastest of all eye movements, and are easy to stimulate and detect from signals. As signals of physiological origin, the properties of eye movements (e.g. latency and maximum angular velocity) may vary considerably between different times of day, between days or weeks, and so on. Since such variability might impair biometric verification based on saccades, the authors attempted to tackle this issue. In contrast to their earlier work, which did not include intervals between eye movement recording sessions as long as those in the present research, the results showed that, notwithstanding some variability in the saccadic variables, this variability was not considerable enough to essentially disturb or impair verification results. The only exception was a test series with very long intervals of ∼16 or 32 months. For the best results obtained with various classification methods, false rejection and false acceptance rates were below 5%. The authors therefore conclude that saccadic eye movements can provide a realistic basis for biometric user verification.
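A small sketch of how the false acceptance and false rejection rates quoted above are computed from verification scores at a decision threshold; the scores and threshold here are synthetic, not the paper's classifiers.

```python
# FAR/FRR from genuine and impostor score distributions.
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    far = np.mean(impostor_scores >= threshold)  # impostors wrongly accepted
    frr = np.mean(genuine_scores < threshold)    # genuine users wrongly rejected
    return far, frr

rng = np.random.default_rng(2)
genuine = rng.normal(2.0, 1.0, 1000)    # scores for the enrolled user
impostor = rng.normal(0.0, 1.0, 1000)   # scores for other users
far, frr = far_frr(genuine, impostor, threshold=1.0)
print(f"FAR={far:.3f}, FRR={frr:.3f}")
```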
Yan Zhang; Xiaochuan Pan; Rubin Wang; Masamichi Sakagami. Functional connectivity between prefrontal cortex and striatum estimated by phase locking value. Journal Article. Cognitive Neurodynamics, 10(3), pp. 245–254, 2016. doi:10.1007/s11571-016-9376-2.
The interplay between the prefrontal cortex (PFC) and striatum has an important role in cognitive processes. To investigate interactive functions between the two areas in reward processing, we recorded local field potentials (LFPs) simultaneously from the two areas of two monkeys performing a reward prediction task (large reward vs. small reward). The power of the LFPs was calculated in three frequency bands: the beta band (15–29 Hz), the low gamma band (30–49 Hz), and the high gamma band (50–100 Hz). We found that both the PFC and striatum encoded the reward information in the beta band. The reward information was also found in the high gamma band in the PFC, but not in the striatum. We further calculated the phase-locking value (PLV) between two LFP signals to measure the phase synchrony between the PFC and striatum. Significant differences occurred between PLVs in different task periods and in different frequency bands. The PLVs in the small reward condition were significantly higher than those in the large reward condition in the beta band. In contrast, the PLVs in the high gamma band were stronger in large reward trials than in small reward trials. These results suggest that the functional connectivity between the PFC and striatum depends on the task periods and reward conditions. The beta synchrony between the PFC and striatum may regulate behavioral outputs of the monkeys in the small reward condition.
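A compact sketch of the phase-locking value between two LFP signals, using the Hilbert transform to extract instantaneous phase after band-pass filtering; the filter design and simulated signals are assumptions for illustration.

```python
# PLV = |mean(exp(i * (phi_x - phi_y)))| in a given frequency band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, band, fs):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phi_x = np.angle(hilbert(filtfilt(b, a, x)))   # instantaneous phase of x
    phi_y = np.angle(hilbert(filtfilt(b, a, y)))   # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
pfc = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)        # beta-band
striatum = np.sin(2 * np.pi * 20 * t + 0.5) + rng.normal(0, 1, t.size)
print(f"beta-band PLV: {plv(pfc, striatum, (15, 29), fs):.2f}")
```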
Xuemeng Zhang; Shuaiyu Chen; Hong Chen; Yan Gu; Wenjian Xu. General and food-specific inhibitory control as moderators of the effects of the impulsive systems on food choices. Journal Article. Frontiers in Psychology, 8, pp. 1–8, 2017. doi:10.3389/fpsyg.2017.00802.
The present study aimed to extend the application of the reflective-impulsive model to restrained eating and explore the effect of automatic attention (the impulsive system) on food choices. Furthermore, we examined the moderating effects of general inhibitory control (G-IC) and food-specific inhibitory control (F-IC) on successful and unsuccessful restrained eaters (S-REs and US-REs). Automatic attention was measured using the EyeLink 1000, which tracked eye movements during the process of making food choices, and G-IC and F-IC were measured using the Stop-Signal Task. The results showed that food choices were related to automatic attention and that G-IC and F-IC moderated the predictive relationship between automatic attention and food choices. Furthermore, among S-REs, automatic attention to high-caloric foods did not predict food choices, regardless of whether G-IC or F-IC was high or low. Whereas food choice was positively correlated with automatic attention among US-REs with poor F-IC, this pattern was not observed in those with poor G-IC. In conclusion, S-REs have more effective self-management skills, and their food choices are less affected by automatic attention and inhibitory control. Unsuccessful restrained eating was associated with poor F-IC (not G-IC) and greater automatic attention to high-caloric foods. Thus, clinical interventions should focus on enhancing F-IC, not G-IC, and on reducing automatic attention to high-caloric foods.
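The abstract does not say how Stop-Signal Task performance was scored; a common summary measure is the stop-signal reaction time (SSRT) estimated with the integration method, sketched below under that assumption with invented numbers.

```python
# SSRT via the integration method: the go-RT percentile matching the
# probability of responding on stop trials, minus the mean stop-signal delay.
import numpy as np

def ssrt_integration(go_rts, p_respond_given_stop, mean_ssd):
    nth_rt = np.percentile(np.asarray(go_rts), 100 * p_respond_given_stop)
    return nth_rt - mean_ssd

rng = np.random.default_rng(4)
go_rts = rng.normal(500, 80, 200)   # go-trial RTs in ms (synthetic)
print(f"SSRT ~ {ssrt_integration(go_rts, 0.45, 230):.0f} ms")
```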
Yan Zhang; Xiaoying Wang; Juan Wang; Lili Zhang; Yu Xiang. Patterns of eye movements when observers judge female facial attractiveness. Journal Article. Frontiers in Psychology, 8, pp. 1909, 2017. doi:10.3389/fpsyg.2017.01909.
The purpose of the present study is to explore the fixation patterns underlying explicit judgments of attractiveness and to infer which features are used for those judgments. Facial attractiveness is of high importance for human interaction and social behavior. Behavioral studies on the perceptual cues for female facial attractiveness suggested three potentially important features: averageness, symmetry, and sexual dimorphism. However, none of these studies explained which regions of stimulus images influence observers' judgments. Therefore, the present research recorded the eye movements of 24 male and 19 female observers as they rated a set of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixations were longer and more frequent on the noses of female faces than on their eyes and mouths (with no difference between the eyes and the mouth); (2) the average pupil diameter at the nose region was bigger than at the eyes and mouth (with no difference between the eyes and the mouth); (3) male participants made significantly more fixations than female participants; and (4) observers first fixated on the eyes and mouth (with no difference between the eyes and the mouth) before fixating on the nose area. In general, participants attended predominantly to the nose to form attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the area of the nose is vital in the judgment of facial attractiveness. This finding establishes a contribution of partial processing to female facial attractiveness judgments during eye tracking.
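A minimal sketch of the area-of-interest (AOI) analysis implied by these results: assigning fixations to face regions and summing durations and counts per region. The AOI rectangles and fixation data below are invented for illustration.

```python
# Per-AOI fixation duration and count from a fixation list.
from collections import defaultdict

aois = {  # (x_min, y_min, x_max, y_max) in screen pixels, hypothetical
    "eyes": (300, 200, 500, 260),
    "nose": (360, 260, 440, 340),
    "mouth": (340, 340, 460, 400),
}
fixations = [(410, 300, 250), (320, 220, 180), (400, 370, 150)]  # x, y, ms

durations, counts = defaultdict(int), defaultdict(int)
for x, y, dur in fixations:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            durations[name] += dur
            counts[name] += 1

print(dict(durations), dict(counts))
```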
Youming Zhang; Martti Juhola. On biometrics with eye movements. Journal Article. IEEE Journal of Biomedical and Health Informatics, 21(5), pp. 1360–1366, 2017. doi:10.1109/JBHI.2016.2551862.
Eye movements are a relatively novel data source for biometric identification. As the video cameras applied to eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In the present article, we study primarily biometric identification, seen as a classification task with multiple classes, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement signal measurements from 109 young subjects. To test the measured data, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply another eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80–90% at their best.
Yan Zhang; Yu Xiang; Ying Guo; Lili Zhang. Beauty-related perceptual bias: Who captures the mind of the beholder? Journal Article. Brain and Behavior, 8(5), pp. 1–7, 2018. doi:10.1002/brb3.945.
Introduction: To explore beauty-related perceptual bias and answer the question: who captures the mind of the beholder? Many studies have explored the specificity of human faces through ERPs or other methods, using general human faces and other objects as materials. Therefore, we wanted to further explore the difference between attractive faces and beautiful objects such as flowers. Methods: We recorded the eye movements of 22 male and 23 female observers using a standard two-alternative forced choice task. Results: (1) Attractive faces were looked at longer and more often than beautiful flowers; (2) female participants made more fixations than male participants; and (3) participants looked at the beautiful flowers first, followed by the attractive faces, but there was no significant difference in first fixation duration between the beautiful flowers and the attractive faces. Conclusions: The data in this study suggest that people prefer attractive faces to beautiful flowers.
Jun-Yun Zhang; Cong Yu. Vernier learning with short- and long-staircase training and its transfer to a new location with double training. Journal Article. Journal of Vision, 18(13), pp. 1–8, 2018. doi:10.1167/18.13.8.
We previously demonstrated that perceptual learning of Vernier discrimination, when paired with orientation learning at the same retinal location, can transfer completely to untrained locations (Wang, Zhang, Klein, Levi, & Yu, 2014; Zhang, Wang, Klein, Levi, & Yu, 2011). However, Hung and Seitz (2014) reported that the transfer is possible only when Vernier is trained with short staircases, but not with very long staircases. Here we ran two experiments to examine Hung and Seitz's conclusions. The first experiment confirmed the transfer effects with short-staircase Vernier training in both our study and Hung and Seitz's. The second experiment revealed that long-staircase training only produced very fast learning at the beginning of the pretraining session, with no further learning afterward. Moreover, the learning and transfer effects differed insignificantly, with a small effect size, making it difficult to support Hung and Seitz's claim that learning with long-staircase training cannot transfer to an untrained retinal location.
Mengmi Zhang; Jiashi Feng; Keng Teck Ma; Joo Hwee Lim; Qi Zhao; Gabriel Kreiman. Finding any Waldo with zero-shot invariant and efficient visual search. Journal Article. Nature Communications, 9, pp. 3730, 2018. doi:10.1038/s41467-018-06217-x.
Searching for a target object in a cluttered scene constitutes a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the appearance of the target, efficient to avoid exhaustive exploration of the image, and must generalize to locate novel target objects with zero-shot training. Previous work on visual search has focused on searching for perfect matches of a target after extensive category-specific training. Here, we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and which can generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes.
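A toy sketch of the top-down guidance idea this abstract describes: a target feature vector modulates a scene feature map, and the peak of the resulting attention map proposes the next fixation. This is a conceptual stand-in, not the paper's network.

```python
# Top-down attention map as a per-location match between scene
# features and a target feature vector.
import numpy as np

rng = np.random.default_rng(5)
scene = rng.normal(size=(32, 32, 16))   # H x W x C scene feature map
target = rng.normal(size=16)            # C-dim target feature vector
scene[10, 20] += 3 * target             # plant the "target" at (10, 20)

attention = np.tensordot(scene, target, axes=([2], [0]))  # (H, W) match map
y, x = np.unravel_index(np.argmax(attention), attention.shape)
print(f"proposed fixation: ({y}, {x})")  # expected: (10, 20)
```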
Xiaoli Zhang; Julie D Golomb. Target localization after saccades and at fixation: Nontargets both facilitate and bias responses. Journal Article. Visual Cognition, 26(9), pp. 734–752, 2018. doi:10.1080/13506285.2018.1553810.
The image on our retina changes every time we make an eye movement. To maintain visual stability after saccades, specifically to locate visual targets, we may use nontarget objects as “landmarks”. In the current study, we compared how the presence of nontargets affects target localization after saccades and during sustained fixation. Participants fixated a target object, which either maintained its location on the screen (sustained-fixation trials) or displaced to trigger a saccade (saccade trials). After the target disappeared, participants reported the most recent target location with a mouse click. We found that the presence of nontargets decreased response error magnitude and variability. However, this nontarget facilitation effect was not larger for saccade trials than for sustained-fixation trials, indicating that nontarget facilitation might be a general effect for target localization rather than of particular importance to post-saccadic stability. Additionally, participants' responses were biased towards the nontarget locations, particularly when the nontarget-target relationships were preserved in relative coordinates across the saccade. This nontarget bias interacted with biases from other spatial references, e.g., eye movement paths, possibly in a way that emphasized non-redundant information. In summary, the presence of nontargets is one of several sources of reference that combine to influence (both facilitate and bias) target localization.
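One simple way to model the reported bias is as a weighted pull of the response from the remembered target location toward the nontarget; the weight and noise level below are illustrative assumptions, not values estimated from the paper.

```python
# Response = (1 - w) * target + w * nontarget, plus motor/memory noise.
import numpy as np

def biased_response(target_xy, nontarget_xy, w=0.15, noise_sd=0.5):
    target_xy, nontarget_xy = np.asarray(target_xy), np.asarray(nontarget_xy)
    pull = (1 - w) * target_xy + w * nontarget_xy  # attraction to nontarget
    return pull + np.random.default_rng(6).normal(0, noise_sd, 2)

print(biased_response([0.0, 0.0], [4.0, 0.0]))  # shifted toward the nontarget
```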
Bao Zhang; Shuhui Liu; Mattia Doro; Giovanni Galfano. Attentional guidance from multiple working memory representations: Evidence from eye movements. Journal Article. Scientific Reports, 8, pp. 13876, 2018. doi:10.1038/s41598-018-32144-4.
Recent studies have shown that the representation of an item in visual working memory (VWM) can bias the deployment of attention to stimuli in the visual scene possessing the same features. When multiple item representations are simultaneously held in VWM, whether these representations, especially those held in a non-prioritized or accessory status, are able to bias attention is still controversial. In the present study we adopted an eye tracking technique to shed light on this issue. In particular, we implemented a manipulation aimed at prioritizing one of the VWM representations to an active status, and tested whether attention could be guided by both the prioritized and the accessory representations when they reappeared as distractors in a visual search task. Notably, in Experiment 1, an analysis of first fixation proportion (FFP) revealed that both the prioritized and the accessory representations were able to capture attention, suggesting a significant attentional guidance effect. However, such an effect was not present in manual response times (RTs). Most critically, in Experiment 2, we used a more robust experimental design controlling for different factors that might have played a role in shaping these findings. The results showed evidence for attentional guidance from the accessory representation in both manual RTs and FFPs. Interestingly, FFPs showed a stronger attentional bias for the prioritized representation than for the accessory representation across experiments. The overall findings suggest that multiple VWM representations, even the accessory representation, can simultaneously interact with visual attention.
Yu Zhang; Aijuan Yan; Bingyu Liu; Ying Wan; Yuchen Zhao; Ying Liu; Jiangxiu Tan; Lu Song; Yong Gu; Zhenguo Liu Oculomotor performances are associated with motor and non-motor symptoms in Parkinson's disease Journal Article Frontiers in Neurology, 9 , pp. 1–8, 2018. @article{Zhang2018e, title = {Oculomotor performances are associated with motor and non-motor symptoms in Parkinson's disease}, author = {Yu Zhang and Aijuan Yan and Bingyu Liu and Ying Wan and Yuchen Zhao and Ying Liu and Jiangxiu Tan and Lu Song and Yong Gu and Zhenguo Liu}, doi = {10.3389/fneur.2018.00960}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Neurology}, volume = {9}, pages = {1--8}, abstract = {Background: Parkinson's disease (PD) patients exhibit deficits in oculomotor behavior, yet the results are inconsistent across studies. In addition, how these results are associated with clinical symptoms is unclear, especially in China. Methods: We designed a case-control study in China including 37 PD patients and 39 controls. Clinical manifestations in PD patients were recorded. Oculomotor performance was measured by a video-based eye tracker system. Results: We found that six oculomotor parameters, including fixation stability, saccadic latency, smooth pursuit gain, saccade frequency, viewing range, and saccade frequency during free-viewing context, were significantly different between PD patients and the control group. Combining these six parameters could improve diagnostic accuracy to over 90%. Moreover, pursuit gain was significantly associated with PD duration and UPDRS III in PD patients. Saccade latency was significantly associated with PD duration, Berg balance score, RBD score, and Total LEDD in PD patients. Conclusions: PD patients commonly exhibit oculomotor deficits in multiple behavioral contexts, which are associated with both motor and non-motor symptoms. Oculomotor testing may provide a valuable tool for the clinical assessment of PD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Parkinson's disease (PD) patients exhibit deficits in oculomotor behavior, yet the results are inconsistent across studies. In addition, how these results are associated with clinical symptoms is unclear, especially in China. Methods: We designed a case-control study in China including 37 PD patients and 39 controls. Clinical manifestations in PD patients were recorded. Oculomotor performance was measured by a video-based eye tracker system. Results: We found that six oculomotor parameters, including fixation stability, saccadic latency, smooth pursuit gain, saccade frequency, viewing range, and saccade frequency during free-viewing context, were significantly different between PD patients and the control group. Combining these six parameters could improve diagnostic accuracy to over 90%. Moreover, pursuit gain was significantly associated with PD duration and UPDRS III in PD patients. Saccade latency was significantly associated with PD duration, Berg balance score, RBD score, and Total LEDD in PD patients. Conclusions: PD patients commonly exhibit oculomotor deficits in multiple behavioral contexts, which are associated with both motor and non-motor symptoms. Oculomotor testing may provide a valuable tool for the clinical assessment of PD. |
Xilin Zhang; Nicole Mlynaryk; Sara Ahmed; Shruti Japee; Leslie G Ungerleider The role of inferior frontal junction in controlling the spatially global effect of feature-based attention in human visual areas Journal Article PLoS Biology, 16 (6), pp. e2005399, 2018. @article{Zhang2018f, title = {The role of inferior frontal junction in controlling the spatially global effect of feature-based attention in human visual areas}, author = {Xilin Zhang and Nicole Mlynaryk and Sara Ahmed and Shruti Japee and Leslie G Ungerleider}, doi = {10.1371/journal.pbio.2005399}, year = {2018}, date = {2018-01-01}, journal = {PLoS Biology}, volume = {16}, number = {6}, pages = {e2005399}, abstract = {Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was derived by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. 
Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was derived by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex. |
Chen Zhang; Angelina Paolozza; Po He Tseng; James N Reynolds; Douglas P Munoz; Laurent Itti Detection of children/youth with fetal alcohol spectrum disorder through eye movement, psychometric, and neuroimaging data Journal Article Frontiers in Neurology, 10 (FEB), pp. 1–15, 2019. @article{Zhang2019a, title = {Detection of children/youth with fetal alcohol spectrum disorder through eye movement, psychometric, and neuroimaging data}, author = {Chen Zhang and Angelina Paolozza and Po He Tseng and James N Reynolds and Douglas P Munoz and Laurent Itti}, doi = {10.3389/fneur.2019.00080}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Neurology}, volume = {10}, number = {FEB}, pages = {1--15}, abstract = {Background: Fetal alcohol spectrum disorders (FASD) are among the most common causes of developmental disabilities and neurobehavioral deficits. Despite the high prevalence of FASD, the current diagnostic process is challenging and time- and money-consuming, with underreported profiles of the neurocognitive and neurobehavioral impairments because of limited clinical capacity. We assessed children/youth with FASD from a multimodal perspective and developed a high-performing, low-cost screening protocol using a machine learning framework. Methods and Findings: Participants with FASD and age-matched typically developing controls completed up to six assessments, including saccadic eye movement tasks (prosaccade, antisaccade, and memory-guided saccade), free viewing of videos, psychometric tests, and neuroimaging of the corpus callosum. We comparatively investigated new machine learning methods applied to these data, toward the acquisition of a quantitative signature of the neurodevelopmental deficits, and the development of an objective, high-throughput screening tool to identify children/youth with FASD. Our method provides a comprehensive profile of distinct measures in domains including sensorimotor and visuospatial control, visual perception, attention, inhibition, working memory, academic functions, and brain structure. We also showed that a combination of four to six assessments yields the best FASD vs. control classification accuracy; however, this protocol is expensive and time-consuming. We conducted a cost/benefit analysis of the six assessments and developed a high-performing, low-cost screening protocol based on a subset of eye movement and psychometric tests that approached the best result under a range of constraints (time, cost, participant age, required administration, and access to neuroimaging facility). Using insights from the theory of value of information, we proposed an optimal annual screening procedure for children at risk of FASD. Conclusions: We developed a high-capacity, low-cost screening procedure under constraints, with high expected monetary benefit, substantial impact on the referral and diagnostic process, and expected maximized long-term benefits to the tested individuals and to society. This annual screening procedure for children/youth at risk of FASD can be easily and widely deployed for early identification, potentially leading to earlier intervention and treatment. This is crucial for neurodevelopmental disorders, to mitigate the severity of the disorder and/or frequency of secondary comorbidities.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Fetal alcohol spectrum disorders (FASD) are among the most common causes of developmental disabilities and neurobehavioral deficits. 
Despite the high prevalence of FASD, the current diagnostic process is challenging and time- and money-consuming, with underreported profiles of the neurocognitive and neurobehavioral impairments because of limited clinical capacity. We assessed children/youth with FASD from a multimodal perspective and developed a high-performing, low-cost screening protocol using a machine learning framework. Methods and Findings: Participants with FASD and age-matched typically developing controls completed up to six assessments, including saccadic eye movement tasks (prosaccade, antisaccade, and memory-guided saccade), free viewing of videos, psychometric tests, and neuroimaging of the corpus callosum. We comparatively investigated new machine learning methods applied to these data, toward the acquisition of a quantitative signature of the neurodevelopmental deficits, and the development of an objective, high-throughput screening tool to identify children/youth with FASD. Our method provides a comprehensive profile of distinct measures in domains including sensorimotor and visuospatial control, visual perception, attention, inhibition, working memory, academic functions, and brain structure. We also showed that a combination of four to six assessments yields the best FASD vs. control classification accuracy; however, this protocol is expensive and time-consuming. We conducted a cost/benefit analysis of the six assessments and developed a high-performing, low-cost screening protocol based on a subset of eye movement and psychometric tests that approached the best result under a range of constraints (time, cost, participant age, required administration, and access to neuroimaging facility). Using insights from the theory of value of information, we proposed an optimal annual screening procedure for children at risk of FASD. Conclusions: We developed a high-capacity, low-cost screening procedure under constraints, with high expected monetary benefit, substantial impact on the referral and diagnostic process, and expected maximized long-term benefits to the tested individuals and to society. This annual screening procedure for children/youth at risk of FASD can be easily and widely deployed for early identification, potentially leading to earlier intervention and treatment. This is crucial for neurodevelopmental disorders, to mitigate the severity of the disorder and/or frequency of secondary comorbidities. |
Dexiang Zhang; Jukka Hyönä; Lei Cui; Zhaoxia Zhu; Shouxin Li Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study Journal Article Learning and Instruction, 64 , pp. 1–15, 2019. @article{Zhang2019b, title = {Effects of task instructions and topic signaling on text processing among adult readers with different reading styles: An eye-tracking study}, author = {Dexiang Zhang and Jukka Hyönä and Lei Cui and Zhaoxia Zhu and Shouxin Li}, doi = {10.1016/j.learninstruc.2019.101246}, year = {2019}, date = {2019-12-01}, journal = {Learning and Instruction}, volume = {64}, pages = {1--15}, publisher = {Elsevier BV}, abstract = {Effects of task instructions and topic signaling on text processing among adult readers with different reading styles were studied by eye-tracking. In Experiment 1, readers read two multiple-topic expository texts guided either by a summary or a verification task. In Experiment 2, readers read a text with or without the topic sentences underlined. Four types of readers emerged: topic structure processors (TSPs), fast linear readers (FLRs), slow linear readers (SLRs), and nonselective reviewers (NSRs). TSPs paid ample fixation time on topic sentences regardless of their signaling. FLRs were characterized by fast first-pass reading, little rereading of previous text, and some signs of structure processing. The common feature of SLRs and NSRs was their slow first-pass reading. They differed from each other in that NSRs were characterized by spending ample time also during second-pass reading. They only showed some signs of topic structure processing when cued by task instructions or topic signaling.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Effects of task instructions and topic signaling on text processing among adult readers with different reading styles were studied by eye-tracking. In Experiment 1, readers read two multiple-topic expository texts guided either by a summary or a verification task. In Experiment 2, readers read a text with or without the topic sentences underlined. Four types of readers emerged: topic structure processors (TSPs), fast linear readers (FLRs), slow linear readers (SLRs), and nonselective reviewers (NSRs). TSPs paid ample fixation time on topic sentences regardless of their signaling. FLRs were characterized by fast first-pass reading, little rereading of previous text, and some signs of structure processing. The common feature of SLRs and NSRs was their slow first-pass reading. They differed from each other in that NSRs were characterized by spending ample time also during second-pass reading. They only showed some signs of topic structure processing when cued by task instructions or topic signaling. |
Felicia Zhang; Lauren L Emberson Opposing timing constraints severely limit the use of pupillometry to investigate visual statistical learning Journal Article Frontiers in Psychology, 10 (JULY), pp. 1–15, 2019. @article{Zhang2019c, title = {Opposing timing constraints severely limit the use of pupillometry to investigate visual statistical learning}, author = {Felicia Zhang and Lauren L Emberson}, doi = {10.3389/fpsyg.2019.01792}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, number = {JULY}, pages = {1--15}, abstract = {The majority of visual statistical learning (VSL) research uses only offline measures, collected after the familiarization phase (i.e., learning) has occurred. Offline measures have revealed a lot about the extent of statistical learning (SL) but less is known about the learning mechanisms that support VSL. Studies have shown that prediction can be a potential learning mechanism for VSL, but it is difficult to examine the role of prediction in VSL using offline measures alone. Pupil diameter is a promising online measure to index prediction in VSL because it can be collected during learning, requires no overt action or task and can be used in a wide range of populations (e.g., infants and adults). Furthermore, pupil diameter has already been used to investigate processes that are part of prediction such as prediction error and updating. While the properties of pupil diameter have the potential to powerfully expand studies in VSL, through a series of three experiments we find that the two are not compatible with each other. Our results revealed that pupil diameter, used to index prediction, is not related to offline measures of learning. We also found that pupil differences that appear to be a result of prediction are actually a result of where we chose to baseline instead. Ultimately, we conclude that the fast-paced nature of VSL paradigms makes it incompatible with the slow nature of pupil change. Therefore, our findings suggest pupillometry should not be used to investigate learning mechanisms in fast-paced VSL tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The majority of visual statistical learning (VSL) research uses only offline measures, collected after the familiarization phase (i.e., learning) has occurred. Offline measures have revealed a lot about the extent of statistical learning (SL) but less is known about the learning mechanisms that support VSL. Studies have shown that prediction can be a potential learning mechanism for VSL, but it is difficult to examine the role of prediction in VSL using offline measures alone. Pupil diameter is a promising online measure to index prediction in VSL because it can be collected during learning, requires no overt action or task and can be used in a wide range of populations (e.g., infants and adults). Furthermore, pupil diameter has already been used to investigate processes that are part of prediction such as prediction error and updating. While the properties of pupil diameter have the potential to powerfully expand studies in VSL, through a series of three experiments we find that the two are not compatible with each other. Our results revealed that pupil diameter, used to index prediction, is not related to offline measures of learning. We also found that pupil differences that appear to be a result of prediction are actually a result of where we chose to baseline instead. 
Ultimately, we conclude that the fast-paced nature of VSL paradigms makes it incompatible with the slow nature of pupil change. Therefore, our findings suggest pupillometry should not be used to investigate learning mechanisms in fast-paced VSL tasks. |
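Several of the pupillometry entries in this list turn on how pupil traces are baseline-corrected. As a point of reference, the snippet below sketches the common subtractive correction; it is a minimal illustration, not the authors' pipeline, and the sampling rate, window length, and trace values are assumptions made for the example.

    import numpy as np

    def baseline_correct(trace, sample_rate=250, baseline_ms=200):
        # Subtract the mean pupil size over the pre-stimulus window from the
        # whole trial; choosing this window is precisely the analytic degree
        # of freedom the study above identifies as driving apparent effects.
        n = int(sample_rate * baseline_ms / 1000)
        return trace - trace[:n].mean()

    # Illustrative trial: one second of samples with an arbitrary dilation ramp.
    trial = np.linspace(3.2, 3.6, 250)
    print(baseline_correct(trial)[:3])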
Jinxiao Zhang; Antoni B Chan; Esther Y Y Lau; Janet H Hsiao Individuals with insomnia misrecognize angry faces as fearful faces while missing the eyes: An eye-tracking study Journal Article Sleep, 42 (2), pp. zsy220, 2019. @article{Zhang2019d, title = {Individuals with insomnia misrecognize angry faces as fearful faces while missing the eyes: An eye-tracking study}, author = {Jinxiao Zhang and Antoni B Chan and Esther Y Y Lau and Janet H Hsiao}, doi = {10.1093/sleep/zsy220}, year = {2019}, date = {2019-01-01}, journal = {Sleep}, volume = {42}, number = {2}, pages = {zsy220}, abstract = {Individuals with insomnia have been found to have disturbed perception of facial expressions. Through eye movement examinations, here we test the hypothesis that this effect is due to impaired visual attention functions for retrieving diagnostic features in facial expression judgments. Twenty-three individuals with insomnia symptoms and 23 controls without insomnia completed a task to categorize happy, sad, fearful, and angry facial expressions. The participants with insomnia were less accurate in recognizing angry faces and misidentified them as fearful faces more often than the controls. A hidden Markov modeling approach for eye movement data analysis revealed that when viewing facial expressions, more individuals with insomnia adopted a nose-mouth eye movement pattern focusing on the vertical face midline while more controls adopted an eyes-mouth pattern preferentially attending to lateral features, particularly the two eyes. As previous studies found that the primary diagnostic feature for recognizing angry faces is the eyes while the diagnostic features for other facial expressions involve the mouth region, missing the eye region may contribute to specific difficulties in recognizing angry facial expressions, consistent with our behavioral finding in participants with insomnia symptoms. Taken together, the findings suggest that impaired information selection through visual attention control may be related to the compromised emotion perception in individuals with insomnia.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Individuals with insomnia have been found to have disturbed perception of facial expressions. Through eye movement examinations, here we test the hypothesis that this effect is due to impaired visual attention functions for retrieving diagnostic features in facial expression judgments. Twenty-three individuals with insomnia symptoms and 23 controls without insomnia completed a task to categorize happy, sad, fearful, and angry facial expressions. The participants with insomnia were less accurate in recognizing angry faces and misidentified them as fearful faces more often than the controls. A hidden Markov modeling approach for eye movement data analysis revealed that when viewing facial expressions, more individuals with insomnia adopted a nose-mouth eye movement pattern focusing on the vertical face midline while more controls adopted an eyes-mouth pattern preferentially attending to lateral features, particularly the two eyes. As previous studies found that the primary diagnostic feature for recognizing angry faces is the eyes while the diagnostic features for other facial expressions involve the mouth region, missing the eye region may contribute to specific difficulties in recognizing angry facial expressions, consistent with our behavioral finding in participants with insomnia symptoms. 
Taken together, the findings suggest that impaired information selection through visual attention control may be related to the compromised emotion perception in individuals with insomnia. |
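The entry above classifies viewers by fitting hidden Markov models to fixation sequences. The sketch below is a generic illustration of that idea, not the authors' analysis pipeline: it scores a sequence of fixated face regions under a two-state HMM with the standard forward algorithm, and all states, probabilities, and region codes are assumptions for the example.

    import numpy as np

    # Hypothetical two-state model: "eyes-mouth" vs. "nose-mouth" scanning.
    A = np.array([[0.8, 0.2],   # state-transition probabilities
                  [0.3, 0.7]])
    # Emission probabilities over fixated regions:
    # columns = [left eye, right eye, nose, mouth]
    B = np.array([[0.35, 0.35, 0.05, 0.25],
                  [0.05, 0.05, 0.50, 0.40]])
    pi = np.array([0.5, 0.5])   # initial state distribution

    def log_likelihood(obs):
        # Forward algorithm with per-step normalization for numerical stability.
        alpha = pi * B[:, obs[0]]
        ll = 0.0
        for o in obs[1:]:
            c = alpha.sum()
            ll += np.log(c)
            alpha = (alpha / c) @ A * B[:, o]
        return ll + np.log(alpha.sum())

    # Region codes: 0/1 = eyes, 2 = nose, 3 = mouth.
    print(log_likelihood([0, 1, 3, 0, 3]))  # eyes-mouth-like scanpath
    print(log_likelihood([2, 3, 2, 3, 2]))  # nose-mouth-like scanpath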
Manman Zhang; Simon P Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading Journal Article Journal of Experimental Psychology: Human Perception and Performance, 45 (6), pp. 812–825, 2019. @article{Zhang2019e, title = {The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading}, author = {Manman Zhang and Simon P Liversedge and Xuejun Bai and Guoli Yan and Chuanli Zang}, doi = {10.1037/xhp0000644}, year = {2019}, date = {2019-01-01}, journal = {Journal of Experimental Psychology: Human Perception and Performance}, volume = {45}, number = {6}, pages = {812--825}, abstract = {Whether increased foveal load causes a reduction of parafoveal processing remains equivocal. The present study examined foveal load effects on parafoveal processing in natural Chinese reading. Parafoveal preview of a single-character parafoveal target word was manipulated by using the boundary paradigm (Rayner, 1975; pseudocharacter or identity previews) under high foveal load (low-frequency pretarget word) compared with low foveal load (high-frequency pretarget word) conditions. Despite an effective manipulation of foveal processing load, we obtained no evidence of any modulatory influence on parafoveal processing in first-pass reading times. However, our results clearly showed that saccadic targeting, in relation to forward saccade length from the pretarget word and in relation to target word skipping, was influenced by foveal load and this influence occurred independent of parafoveal preview. Given the optimal experimental conditions, these results provide very strong evidence that preview benefit is not modulated by foveal lexical load during Chinese reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Whether increased foveal load causes a reduction of parafoveal processing remains equivocal. The present study examined foveal load effects on parafoveal processing in natural Chinese reading. Parafoveal preview of a single-character parafoveal target word was manipulated by using the boundary paradigm (Rayner, 1975; pseudocharacter or identity previews) under high foveal load (low-frequency pretarget word) compared with low foveal load (high-frequency pretarget word) conditions. Despite an effective manipulation of foveal processing load, we obtained no evidence of any modulatory influence on parafoveal processing in first-pass reading times. However, our results clearly showed that saccadic targeting, in relation to forward saccade length from the pretarget word and in relation to target word skipping, was influenced by foveal load and this influence occurred independent of parafoveal preview. Given the optimal experimental conditions, these results provide very strong evidence that preview benefit is not modulated by foveal lexical load during Chinese reading. |
Xiaoxian Zhang; Wanlu Fu; Licheng Xue; Jing Zhao; Zhiguo Wang Children with mathematical learning difficulties are sluggish in disengaging attention Journal Article Frontiers in Psychology, 10 , pp. 1–9, 2019. @article{Zhang2019f, title = {Children with mathematical learning difficulties are sluggish in disengaging attention}, author = {Xiaoxian Zhang and Wanlu Fu and Licheng Xue and Jing Zhao and Zhiguo Wang}, doi = {10.3389/fpsyg.2019.00932}, year = {2019}, date = {2019-01-01}, journal = {Frontiers in Psychology}, volume = {10}, pages = {1--9}, abstract = {Mathematical learning difficulties (MLD) refer to a variety of deficits in math skills, typically pertaining to the domains of arithmetic and problem solving. The present study examined the time course of attentional orienting in MLD children with a spatial cueing task, by parametrically manipulating the cue-target onset asynchrony (CTOA). The results of Experiment 1 revealed that, in contrast to typically developing children, the inhibitory aftereffect of attentional orienting, frequently referred to as inhibition of return (IOR), was not observed in the MLD children, even at the longest CTOA tested (800 ms). However, robust early facilitation effects were observed in the MLD children, suggesting that they have difficulties in attentional disengagement rather than attentional engagement. In a second experiment, a secondary cue was introduced to the cueing task to encourage attentional disengagement and IOR effects were observed in the MLD children. Taken together, the present experiments indicate that MLD children are sluggish in disengaging spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Mathematical learning difficulties (MLD) refer to a variety of deficits in math skills, typically pertaining to the domains of arithmetic and problem solving. The present study examined the time course of attentional orienting in MLD children with a spatial cueing task, by parametrically manipulating the cue-target onset asynchrony (CTOA). The results of Experiment 1 revealed that, in contrast to typically developing children, the inhibitory aftereffect of attentional orienting, frequently referred to as inhibition of return (IOR), was not observed in the MLD children, even at the longest CTOA tested (800 ms). However, robust early facilitation effects were observed in the MLD children, suggesting that they have difficulties in attentional disengagement rather than attentional engagement. In a second experiment, a secondary cue was introduced to the cueing task to encourage attentional disengagement and IOR effects were observed in the MLD children. Taken together, the present experiments indicate that MLD children are sluggish in disengaging spatial attention. |
Kaining Zhang; Charles D Chen; Ilya E Monosov Novelty, salience, and surprise timing are signaled by neurons in the basal forebrain Journal Article Current Biology, 29 (1), pp. 134–142, 2019. @article{Zhang2019g, title = {Novelty, salience, and surprise timing are signaled by neurons in the basal forebrain}, author = {Kaining Zhang and Charles D Chen and Ilya E Monosov}, doi = {10.1016/j.cub.2018.11.012}, year = {2019}, date = {2019-01-01}, journal = {Current Biology}, volume = {29}, number = {1}, pages = {134--142}, publisher = {Elsevier Ltd.}, abstract = {The basal forebrain (BF) is a principal source of modulation of the neocortex [1–6] and is thought to regulate cognitive functions such as attention, motivation, and learning by broadcasting information about salience [2, 3, 5, 7–19]. However, events can be salient for multiple reasons—such as novelty, surprise, or reward prediction errors [20–24]—and to date, precisely which salience-related information the BF broadcasts is unclear. Here, we report that the primate BF contains at least two types of neurons that often process salient events in distinct manners: one with phasic burst responses to cues predicting salient events and one with ramping activity anticipating such events. Bursting neurons respond to cues that convey predictions about the magnitude, probability, and timing of primary reinforcements. They also burst to the reinforcement itself, particularly when it is unexpected. However, they do not have a selective response to reinforcement omission (the unexpected absence of an event). Thus, bursting neurons do not convey value-prediction errors but do signal surprise associated with external events. Indeed, they are not limited to processing primary reinforcement: they discriminate fully expected novel visual objects from familiar objects and respond to object-sequence violations. In contrast, ramping neurons predict the timing of many salient, novel, and surprising events. Their ramping activity is highly sensitive to the subjects' confidence in event timing and on average encodes the subjects' surprise after unexpected events occur. These data suggest that the primate BF contains mechanisms to anticipate the timing of a diverse set of important external events (via ramping activity) and to rapidly deploy cognitive resources when these events occur (via short-latency bursting).}, keywords = {}, pubstate = {published}, tppubtype = {article} } The basal forebrain (BF) is a principal source of modulation of the neocortex [1–6] and is thought to regulate cognitive functions such as attention, motivation, and learning by broadcasting information about salience [2, 3, 5, 7–19]. However, events can be salient for multiple reasons—such as novelty, surprise, or reward prediction errors [20–24]—and to date, precisely which salience-related information the BF broadcasts is unclear. Here, we report that the primate BF contains at least two types of neurons that often process salient events in distinct manners: one with phasic burst responses to cues predicting salient events and one with ramping activity anticipating such events. Bursting neurons respond to cues that convey predictions about the magnitude, probability, and timing of primary reinforcements. They also burst to the reinforcement itself, particularly when it is unexpected. However, they do not have a selective response to reinforcement omission (the unexpected absence of an event). 
Thus, bursting neurons do not convey value-prediction errors but do signal surprise associated with external events. Indeed, they are not limited to processing primary reinforcement: they discriminate fully expected novel visual objects from familiar objects and respond to object-sequence violations. In contrast, ramping neurons predict the timing of many salient, novel, and surprising events. Their ramping activity is highly sensitive to the subjects' confidence in event timing and on average encodes the subjects' surprise after unexpected events occur. These data suggest that the primate BF contains mechanisms to anticipate the timing of a diverse set of important external events (via ramping activity) and to rapidly deploy cognitive resources when these events occur (via short-latency bursting). |
Felicia Zhang; Sagi Jaffe-Dax; Robert C Wilson; Lauren L Emberson Prediction in infants and adults: A pupillometry study Journal Article Developmental Science, 22 (4), pp. 1–9, 2019. @article{Zhang2019h, title = {Prediction in infants and adults: A pupillometry study}, author = {Felicia Zhang and Sagi Jaffe-Dax and Robert C Wilson and Lauren L Emberson}, doi = {10.1111/desc.12780}, year = {2019}, date = {2019-12-01}, journal = {Developmental Science}, volume = {22}, number = {4}, pages = {1--9}, publisher = {John Wiley & Sons, Ltd}, abstract = {Adults use both bottom-up sensory inputs and top-down signals to generate predictions about future sensory inputs. Infants have also been shown to make predictions with simple stimuli and recent work has suggested top-down processing is available early in infancy. However, it is unknown whether this indicates that top-down prediction is an ability that is continuous across the lifespan or whether an infant's ability to predict is different from an adult's, qualitatively or quantitatively. We employed pupillometry to provide a direct comparison of prediction abilities across these disparate age groups. Pupil dilation response (PDR) was measured in 6-month-olds and adults as they completed an identical implicit learning task designed to help learn associations between sounds and pictures. We found significantly larger PDR for visual omission trials (i.e., trials that violated participants' predictions without the presentation of new stimuli to control for bottom-up signals) compared to visual present trials (i.e., trials that confirmed participants' predictions) in both age groups. Furthermore, a computational learning model that is closely linked to prediction error (Rescorla-Wagner model) demonstrated similar learning trajectories, suggesting a continuity of predictive capacity and learning across the two age groups.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Adults use both bottom-up sensory inputs and top-down signals to generate predictions about future sensory inputs. Infants have also been shown to make predictions with simple stimuli and recent work has suggested top-down processing is available early in infancy. However, it is unknown whether this indicates that top-down prediction is an ability that is continuous across the lifespan or whether an infant's ability to predict is different from an adult's, qualitatively or quantitatively. We employed pupillometry to provide a direct comparison of prediction abilities across these disparate age groups. Pupil dilation response (PDR) was measured in 6-month-olds and adults as they completed an identical implicit learning task designed to help learn associations between sounds and pictures. We found significantly larger PDR for visual omission trials (i.e., trials that violated participants' predictions without the presentation of new stimuli to control for bottom-up signals) compared to visual present trials (i.e., trials that confirmed participants' predictions) in both age groups. Furthermore, a computational learning model that is closely linked to prediction error (Rescorla-Wagner model) demonstrated similar learning trajectories, suggesting a continuity of predictive capacity and learning across the two age groups. |
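For reference, the Rescorla-Wagner model cited in the entry above updates an association in proportion to prediction error. In one standard notation (assumed here, not taken from the paper), the trial-by-trial update is

    V_{t+1} = V_t + \alpha \beta (\lambda_t - V_t)

where $V_t$ is the current associative strength (the prediction), $\lambda_t$ the outcome actually received on trial $t$, $\alpha$ and $\beta$ salience and learning-rate parameters, and $(\lambda_t - V_t)$ the prediction error that pupil dilation is taken to index: an omission trial sets $\lambda_t = 0$ while $V_t > 0$, yielding a large error.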
Xuemeng Zhang; Yijun Luo; Yong Liu; Chao Yang; Hong Chen Lack of conflict during food choice is associated with the failure of restrained eating Journal Article Eating Behaviors, 34 , pp. 1–8, 2019. @article{Zhang2019i, title = {Lack of conflict during food choice is associated with the failure of restrained eating}, author = {Xuemeng Zhang and Yijun Luo and Yong Liu and Chao Yang and Hong Chen}, doi = {10.1016/j.eatbeh.2019.101309}, year = {2019}, date = {2019-01-01}, journal = {Eating Behaviors}, volume = {34}, pages = {1--8}, abstract = {Restrained eaters tend to sustain a restriction in caloric intake to lose or maintain body weight; however, only a few can achieve this goal. Those who are effective restrained eaters habitually adhere to their intentions to avoid eating certain palatable foods, whereas those who are ineffective restrained eaters are generally unable to translate their intentions into behavior. To restrain eating regardless of temptation, an individual must first identify potential conflicts between achieving restrained eating and temptation to eat. Regarding food selection, it remains unknown whether a lack of conflict between temptation (eating enjoyment) and weight-loss or maintenance goals is associated with failure to restrict caloric intake. The present study used an eye-tracking technique to assess the degree of conflict experienced by effective and ineffective restrained eaters during food choice. Participants were required to choose between pairs of high- and low-calorie foods. The results showed that choosing the low-calorie food was associated with the experience of more conflict, measured by longer response times and more gaze switches, than choosing the high-calorie food. Ineffective restrained eaters experienced less conflict, exhibiting shorter response times and fewer gaze switches, than did effective restrained eaters, which suggests that a failure to restrain eating might be associated with a lack of experience of conflict.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Restrained eaters tend to sustain a restriction in caloric intake to lose or maintain body weight; however, only a few can achieve this goal. Those who are effective restrained eaters habitually adhere to their intentions to avoid eating certain palatable foods, whereas those who are ineffective restrained eaters are generally unable to translate their intentions into behavior. To restrain eating regardless of temptation, an individual must first identify potential conflicts between achieving restrained eating and temptation to eat. Regarding food selection, it remains unknown whether a lack of conflict between temptation (eating enjoyment) and weight-loss or maintenance goals is associated with failure to restrict caloric intake. The present study used an eye-tracking technique to assess the degree of conflict experienced by effective and ineffective restrained eaters during food choice. Participants were required to choose between pairs of high- and low-calorie foods. The results showed that choosing the low-calorie food was associated with the experience of more conflict, measured by longer response times and more gaze switches, than choosing the high-calorie food. 
Ineffective restrained eaters experienced less conflict, exhibiting shorter response times and fewer gaze switches, than did effective restrained eaters, which suggests that a failure to restrain eating might be associated with a lack of experience of conflict. |
Bao Zhang; Shuhui Liu; Cenlou Hu; Ziwen Luo; Sai Huang; Jie Sui Enhanced memory-driven attentional capture in action video game players Journal Article Computers in Human Behavior, 107 , pp. 1–7, 2020. @article{Zhang2020a, title = {Enhanced memory-driven attentional capture in action video game players}, author = {Bao Zhang and Shuhui Liu and Cenlou Hu and Ziwen Luo and Sai Huang and Jie Sui}, doi = {10.1016/j.chb.2020.106271}, year = {2020}, date = {2020-01-01}, journal = {Computers in Human Behavior}, volume = {107}, pages = {1--7}, publisher = {Elsevier Ltd}, abstract = {Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Action video game players (AVGPs) have been shown to have an enhanced cognitive control ability to reduce stimulus-driven attentional capture (e.g., from an exogenous salient distractor) compared with non-action video game players (NVGPs). Here we examined whether these benefits could extend to the memory-driven attentional capture (i.e., working memory (WM) representations bias visual attention toward a matching distractor). AVGPs and NVGPs were instructed to complete a visual search task while actively maintaining 1, 2 or 4 items in WM. There was a robust advantage to the memory-driven attentional capture in reaction time and first eye movement fixation in the AVGPs compared to the NVGPs when they had to maintain one item in WM. Moreover, the effect of memory-driven attentional capture was maintained in the AVGPs when the WM load was increased, but it was eliminated in the NVGPs. The results suggest that AVGPs may devote more attentional resources to sustaining the cognitive control rather than to suppressing the attentional capture driven by the active WM representations. |
Hanshu Zhang; Joseph W Houpt Exaggerated prevalence effect with the explicit prevalence information: The description-experience gap in visual search Journal Article Attention, Perception, and Psychophysics, 82 (7), pp. 3340–3356, 2020. @article{Zhang2020b, title = {Exaggerated prevalence effect with the explicit prevalence information: The description-experience gap in visual search}, author = {Hanshu Zhang and Joseph W Houpt}, doi = {10.3758/s13414-020-02045-8}, year = {2020}, date = {2020-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {82}, number = {7}, pages = {3340--3356}, publisher = {Attention, Perception, & Psychophysics}, abstract = {Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined the effect of how target prevalence is communicated. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience, hence there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether the target prevalence information was explicitly given to observers or they learned the prevalence through experience, with additional manipulations of target reward and salience. We found that when the target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared to when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high target prevalence condition. Eye-tracking results indicated that search with experience consistently resulted in longer fixation durations, with the largest difference in low-prevalence conditions. Longer search times were primarily due to observers re-visiting more items. Our work highlights the importance of examining how prevalence information is communicated in future prevalence visual search studies.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined the effect of how target prevalence is communicated. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience, hence there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether the target prevalence information was explicitly given to observers or they learned the prevalence through experience, with additional manipulations of target reward and salience. We found that when the target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared to when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high target prevalence condition. Eye-tracking results indicated that search with experience consistently resulted in longer fixation durations, with the largest difference in low-prevalence conditions. Longer search times were primarily due to observers re-visiting more items. 
Our work highlights the importance of examining how prevalence information is communicated in future prevalence visual search studies. |
Hui Zhang; Ping Wang; Tinghu Kang Aesthetic experience of field cognitive style in the appreciation of cursive and running scripts: An eye movement study Journal Article Art and Design Review, 8 , pp. 215–227, 2020. @article{Zhang2020c, title = {Aesthetic experience of field cognitive style in the appreciation of cursive and running scripts: An eye movement study}, author = {Hui Zhang and Ping Wang and Tinghu Kang}, doi = {10.4236/adr.2020.84017}, year = {2020}, date = {2020-01-01}, journal = {Art and Design Review}, volume = {8}, pages = {215--227}, abstract = {This study compares the characteristics of the aesthetic experience of different cognitive styles in calligraphy style. The study used a cursive script and running script as experimental materials and the EyeLink 1000 Plus eye tracker to record eye movements while viewing calligraphy. The results showed that, in the overall analysis, there were differences in the field cognition style in total fixation counts, saccade amplitude, and saccade counts and differences in the calligraphic style in total fixation counts and saccade counts. Further local analysis found significant differences in the field cognitive style in mean pupil diameter, fixation counts, and regression in count, and that there were differences in fixation counts and regression in count in the calligraphic style, as well as interactions with the area of interest. The results indicate that the field cognitive style is characterized by different aesthetic experiences in calligraphy appreciation and that there are aesthetic preferences in calligraphy style.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study compares the characteristics of the aesthetic experience of different cognitive styles in calligraphy style. The study used a cursive script and running script as experimental materials and the EyeLink 1000 Plus eye tracker to record eye movements while viewing calligraphy. The results showed that, in the overall analysis, there were differences in the field cognition style in total fixation counts, saccade amplitude, and saccade counts and differences in the calligraphic style in total fixation counts and saccade counts. Further local analysis found significant differences in the field cognitive style in mean pupil diameter, fixation counts, and regression in count, and that there were differences in fixation counts and regression in count in the calligraphic style, as well as interactions with the area of interest. The results indicate that the field cognitive style is characterized by different aesthetic experiences in calligraphy appreciation and that there are aesthetic preferences in calligraphy style. |
Manman Zhang; Simon P Liversedge; Xuejun Bai; Guoli Yan; Chuanli Zang The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading Journal Article Acta Psychologica Sinica, 52 (8), pp. 1–11, 2020. @article{Zhang2020d, title = {The influence of foveal lexical processing load on parafoveal preview and saccadic targeting during Chinese reading}, author = {Manman Zhang and Simon P Liversedge and Xuejun Bai and Guoli Yan and Chuanli Zang}, doi = {10.1037/xhp0000644}, year = {2020}, date = {2020-01-01}, journal = {Acta Psychologica Sinica}, volume = {52}, number = {8}, pages = {1--11}, abstract = {Parafoveal pre-processing contributes to highly efficient reading for skilled readers. Research has demonstrated that high-skilled or fast readers extract more parafoveal information from a wider parafoveal region more efficiently compared to less-skilled or slow readers. It is argued that individual differences in parafoveal preview are due to high-skilled or fast readers focusing less of their attention on foveal word processing than less-skilled or slow readers. In other words, foveal processing difficulty might modulate an individual's amount of parafoveal preview (i.e., Foveal Load Hypothesis). However, few studies have provided evidence in support of this claim. Therefore, the present study aimed to explore whether and how foveal lexical processing load modulates parafoveal preview of readers with different reading speeds (a commonly used measurement of reading skill or reading proficiency). By using a three-minute reading comprehension task, 28 groups of fast and slow readers were selected from 300 participants (234 were valid) according to their reading speed in the current study. Participants were then asked to read sentences while their eye movements were recorded using an Eyelink 1000 eye-tracker. Each experimental sentence contained a pre-target word that varied in lexical frequency to manipulate foveal processing load (low load: high frequency; high load: low frequency), and a target word manipulated for preview (identical or pseudocharacter) within the boundary paradigm. Global analyses showed that, although fast readers had similar accuracy of reading comprehension to slow readers, they had shorter reading times, longer forward saccades, made fewer fixations and regressions, and had higher reading speeds compared to slow readers, indicating that our selection of fast and slow readers was highly effective. The pre-target word analyses showed that there was a main effect of word frequency on first-pass reading times, indicating an effective manipulation of foveal load. Additionally, there were significant interactions of Reading Group × Word Frequency, and Reading Group × Word Frequency × Parafoveal Preview for first fixation and single fixation durations, showing that the frequency effects were reliable for fast readers rather than for slow readers with pseudocharacter previews, while the frequency effects were similar for the two groups with identical previews. However, the target word analyses did not show any three-way or two-way interactions for the first-pass reading times as well as for skipping probability. 
The findings in the present study suggest that lexical information from the currently fixated word can be extracted and can be used quickly for fast readers, while such information is used later for slow readers. This, however, does not result in more (or less) preview benefit for fast readers in relation to slow readers. In conclusion, foveal lexical processing does not modulate preview benefit for fast and slow readers, and the present results provide no support for the Foveal Load Hypothesis. Our findings of foveal load effects on parafoveal preview for fast and slow readers cannot be readily explained by current computational models (e.g., E-Z Reader model and SWIFT model).}, keywords = {}, pubstate = {published}, tppubtype = {article} } Parafoveal pre-processing contributes to highly efficient reading for skilled readers. Research has demonstrated that high-skilled or fast readers extract more parafoveal information from a wider parafoveal region more efficiently compared to less-skilled or slow readers. It is argued that individual differences in parafoveal preview are due to high-skilled or fast readers focusing less of their attention on foveal word processing than less-skilled or slow readers. In other words, foveal processing difficulty might modulate an individual's amount of parafoveal preview (i.e., Foveal Load Hypothesis). However, few studies have provided evidence in support of this claim. Therefore, the present study aimed to explore whether and how foveal lexical processing load modulates parafoveal preview of readers with different reading speeds (a commonly used measurement of reading skill or reading proficiency). By using a three-minute reading comprehension task, 28 groups of fast and slow readers were selected from 300 participants (234 were valid) according to their reading speed in the current study. Participants were then asked to read sentences while their eye movements were recorded using an Eyelink 1000 eye-tracker. Each experimental sentence contained a pre-target word that varied in lexical frequency to manipulate foveal processing load (low load: high frequency; high load: low frequency), and a target word manipulated for preview (identical or pseudocharacter) within the boundary paradigm. Global analyses showed that, although fast readers had similar accuracy of reading comprehension to slow readers, they had shorter reading times, longer forward saccades, made fewer fixations and regressions, and had higher reading speeds compared to slow readers, indicating that our selection of fast and slow readers was highly effective. The pre-target word analyses showed that there was a main effect of word frequency on first-pass reading times, indicating an effective manipulation of foveal load. Additionally, there were significant interactions of Reading Group × Word Frequency, and Reading Group × Word Frequency × Parafoveal Preview for first fixation and single fixation durations, showing that the frequency effects were reliable for fast readers rather than for slow readers with pseudocharacter previews, while the frequency effects were similar for the two groups with identical previews. However, the target word analyses did not show any three-way or two-way interactions for the first-pass reading times as well as for skipping probability. 
To be specific, the first-pass reading times were shorter at the target word with identical previews in relation to pseudocharacter previews (i.e., preview benefit effects); importantly, similar size effects occurred for both fast readers and slow readers. The findings in the present study suggest that lexical information from the currently fixated word can be extracted and can be used quickly for fast readers, while such information is used later for slow readers. This, however, does not result in more (or less) preview benefit for fast readers in relation to slow readers. In conclusion, foveal lexical processing does not modulate preview benefit for fast and slow readers, and the present results provide no support for the Foveal Load Hypothesis. Our findings of foveal load effects on parafoveal preview for fast and slow readers cannot be readily explained by current computational models (e.g., E-Z Reader model and SWIFT model). |
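A note on the boundary paradigm used in the study above: it hinges on a gaze-contingent display change, in which a preview string is shown until the reader's gaze crosses an invisible boundary just left of the target word, at which point the target is swapped in. A minimal Python sketch of that trigger logic follows; get_gaze_x, draw_sentence, and trial_running are hypothetical stand-ins for a tracker and display API, not code from the study.

    BOUNDARY_X = 512  # invisible boundary (pixels), just left of the target word

    def run_boundary_trial(preview_text, target_text, get_gaze_x, draw_sentence, trial_running):
        """Show the preview until gaze crosses the boundary, then swap in the target."""
        draw_sentence(preview_text)
        changed = False
        while trial_running():
            if not changed and get_gaze_x() > BOUNDARY_X:
                # The swap must complete within one display refresh, so that it
                # happens during the saccade and is not consciously visible.
                draw_sentence(target_text)
                changed = True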
Li Zhang; Guoli Yan; Li Zhou; Zebo Lan; Valerie Benson The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm Journal Article Journal of Autism and Developmental Disorders, 50, pp. 500–512, 2020. @article{Zhang2020e, title = {The influence of irrelevant visual distractors on eye movement control in Chinese children with autism spectrum disorder: Evidence from the remote distractor paradigm}, author = {Li Zhang and Guoli Yan and Li Zhou and Zebo Lan and Valerie Benson}, doi = {10.1007/s10803-019-04271-y}, year = {2020}, date = {2020-01-01}, journal = {Journal of Autism and Developmental Disorders}, volume = {50}, pages = {500--512}, publisher = {Springer US}, abstract = {The current study examined eye movement control in autistic (ASD) children. Simple targets were presented in isolation, or synchronously with central, parafoveal, or peripheral distractors. Sixteen children with ASD (47–81 months) and nineteen age- and IQ-matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The current study examined eye movement control in autistic (ASD) children. Simple targets were presented in isolation, or synchronously with central, parafoveal, or peripheral distractors. Sixteen children with ASD (47–81 months) and nineteen age- and IQ-matched typically developing children were instructed to look to the target as accurately and quickly as possible. Both groups showed high proportions (40%) of saccadic errors towards parafoveal and peripheral distractors. For correctly executed eye movements to the targets, centrally presented distractors produced the longest latencies (time taken to initiate eye movements), followed by parafoveal and peripheral distractor conditions. Central distractors had a greater effect in the ASD group, indicating evidence for potential atypical voluntary attentional control in ASD children. |
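Saccadic latency, the key measure in remote-distractor studies such as the one above, is the time from target onset to the first eye-movement sample whose velocity exceeds a threshold. A sketch of that computation with numpy follows; the sampling rate, velocity threshold, and pixel-to-degree scaling are illustrative assumptions, not values from the study.

    import numpy as np

    def saccade_latency_ms(x, y, rate_hz, onset_idx, thresh_deg_s=30.0, deg_per_px=0.03):
        """Latency from target onset (sample onset_idx) to the first sample
        exceeding the velocity threshold; None if no saccade is found.
        x, y are numpy arrays of gaze positions in pixels."""
        vx = np.gradient(x) * rate_hz * deg_per_px  # horizontal velocity, deg/s
        vy = np.gradient(y) * rate_hz * deg_per_px  # vertical velocity, deg/s
        speed = np.hypot(vx, vy)
        above = np.flatnonzero(speed[onset_idx:] > thresh_deg_s)
        return None if above.size == 0 else 1000.0 * above[0] / rate_hz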
Xinru Zhang; Zhongling Pi; Chenyu Li; Weiping Hu Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction Journal Article British Journal of Educational Technology, pp. 1–13, 2020. @article{Zhang2020eb, title = {Intrinsic motivation enhances online group creativity via promoting members' effort, not interaction}, author = {Xinru Zhang and Zhongling Pi and Chenyu Li and Weiping Hu}, doi = {10.1111/bjet.13045}, year = {2020}, date = {2020-01-01}, journal = {British Journal of Educational Technology}, pages = {1--13}, abstract = {Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect. University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes: What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Intrinsic motivation is seen as the principal source of vitality in educational settings. This study examined whether intrinsic motivation promoted online group creativity and tested a cognitive mechanism that might explain this effect.
University students (N = 72; 61 women) who volunteered to participate were asked to fulfill a creative task with a peer using online software. The peer was actually a fake participant who was programmed to send prepared answers in sequence. Ratings of creativity (fluency, flexibility and originality) and eye movement data (focus on own vs. peer's ideas on the screen) were used to compare students who were induced to have high intrinsic motivation and those induced to have low intrinsic motivation. Results showed that compared to participants with low intrinsic motivation, those with high intrinsic motivation showed higher fluency and flexibility on the creative task and spent a larger percentage of time looking at their own ideas on the screen. The two groups did not differ in how much they looked at the peer's ideas. In addition, students' percentage dwell time on their own ideas mediated the beneficial effect of intrinsic motivation on idea fluency. These results suggest that although intrinsic motivation could enhance the fluency of creative ideas in an online group, it does not necessarily promote interaction among group members. Given the importance of interaction in online group settings, findings of this study suggest that in addition to enhancing intrinsic motivation, other measures should be taken to promote interaction behavior in online groups. Practitioner Notes: What is already known about this topic: The generation of creative ideas in group settings calls for both individual effort and cognitive stimulation from other members. Intrinsic motivation has been shown to foster creativity in face-to-face groups, which is primarily due to the promotion of individual effort. In online group settings, students' creativity tends to rely on intrinsic motivation because the extrinsic motivation typically provided by teachers' supervision and peer pressure in face-to-face settings is minimized online. What this paper adds: Creative performance in online groups benefits from intrinsic motivation. Intrinsic motivation promotes creativity through an individual's own cognitive effort instead of interaction among members. Implications for practice and/or policy: Improving students' intrinsic motivation is an effective way to promote creativity in online groups. Teachers should take additional steps to encourage students to interact more with each other in online groups. |
Y Zhang; Q Yuan Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model Journal Article Indian Journal of Pharmaceutical Sciences, 82, pp. 32–40, 2020. @article{Zhang2020f, title = {Effect of the combination of biofeedback and sequential psychotherapy on the cognitive function of trauma patients based on the fusion of set theory model}, author = {Y Zhang and Q Yuan}, doi = {10.36468/pharmaceutical-sciences.spl.78}, year = {2020}, date = {2020-01-01}, journal = {Indian Journal of Pharmaceutical Sciences}, volume = {82}, pages = {32--40}, abstract = {This study intended to take a special group of trauma patients as research subjects to propose a method for analysing the effect of the combination of biofeedback and sequential psychotherapy based on the fusion of the set theory model on the cognitive function of these patients with trauma. The occurrence and development of post-traumatic stress disorder and the cognitive function are investigated. The set theory model is used in this study to carry out a survey on the effect of the combination of biofeedback and sequential psychotherapy on patients with post-traumatic stress disorder to describe the occurrence, development, change trajectory and time course characteristics of post-traumatic stress disorder. The set theory model was employed to investigate the cognitive development characteristics of these trauma patients. In addition, through the set theory model, the psychological behavior mechanism underlying the occurrence and development of post-traumatic stress disorder is revealed. The study of the combination of biofeedback and sequential psychotherapy is adopted to investigate the effect of post-traumatic stress disorder on the cognitive function of trauma patients. The results of this study could be used to provide scientific advice for the placement and psychological assistance of trauma patients in the future, to provide a scientific basis for targeted psychological intervention and overall planning of the intervention, and to provide scientific and objective indicators and methods for the diagnosis and assessment of interventions in traumatic psychology in patients with trauma.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study intended to take a special group of trauma patients as research subjects to propose a method for analysing the effect of the combination of biofeedback and sequential psychotherapy based on the fusion of the set theory model on the cognitive function of these patients with trauma. The occurrence and development of post-traumatic stress disorder and the cognitive function are investigated. The set theory model is used in this study to carry out a survey on the effect of the combination of biofeedback and sequential psychotherapy on patients with post-traumatic stress disorder to describe the occurrence, development, change trajectory and time course characteristics of post-traumatic stress disorder. The set theory model was employed to investigate the cognitive development characteristics of these trauma patients. In addition, through the set theory model, the psychological behavior mechanism underlying the occurrence and development of post-traumatic stress disorder is revealed. The study of the combination of biofeedback and sequential psychotherapy is adopted to investigate the effect of post-traumatic stress disorder on the cognitive function of trauma patients.
The results of this study could be used to provide scientific advice for the placement and psychological assistance of trauma patients in the future, to provide a scientific basis for targeted psychological intervention and overall planning of the intervention, and to provide scientific and objective indicators and methods for the diagnosis and assessment of interventions in traumatic psychology in patients with trauma. |
Han Zhang; Chuyan Qu; Kevin F Miller; Kai S Cortina Missing the joke: Reduced rereading of garden-path jokes during mind-wandering Journal Article Journal of Experimental Psychology: Learning, Memory, and Cognition, 46 (4), pp. 638–648, 2020. @article{Zhang2020g, title = {Missing the joke: Reduced rereading of garden-path jokes during mind-wandering}, author = {Han Zhang and Chuyan Qu and Kevin F Miller and Kai S Cortina}, doi = {10.1037/xlm0000745}, year = {2020}, date = {2020-01-01}, journal = {Journal of Experimental Psychology: Learning, Memory, and Cognition}, volume = {46}, number = {4}, pages = {638--648}, abstract = {Mind-wandering (i.e., thoughts irrelevant to the current task) occurs frequently during reading. The current study examined whether mind-wandering was associated with reduced rereading when the reader read the so-called garden-path jokes. In a garden-path joke, the reader's initial interpretation is violated by the final punchline, and the violation creates a semantic incongruity that needs to be resolved (e.g., "My girlfriend has read so many negative things about smoking. Therefore, she decided to quit reading."). Rereading text prior to the punchline can help resolve the incongruity. In a main study and a preregistered replication, participants read jokes and nonfunny controls embedded in filler texts and responded to thought probes that assessed intentional and unintentional mind-wandering. Results were consistent across the two studies: When the reader was not mind-wandering, jokes elicited more rereading (from the punchline) than the nonfunny controls did, and had a recall advantage over the nonfunny controls. During mind-wandering, however, the additional eye movement processing and the recall advantage of jokes were generally reduced. These results show that mind-wandering is associated with reduced rereading, which is important for resolving higher level comprehension difficulties. (PsycInfo Database Record (c) 2020 APA, all rights reserved).}, keywords = {}, pubstate = {published}, tppubtype = {article} } Mind-wandering (i.e., thoughts irrelevant to the current task) occurs frequently during reading. The current study examined whether mind-wandering was associated with reduced rereading when the reader read the so-called garden-path jokes. In a garden-path joke, the reader's initial interpretation is violated by the final punchline, and the violation creates a semantic incongruity that needs to be resolved (e.g., "My girlfriend has read so many negative things about smoking. Therefore, she decided to quit reading."). Rereading text prior to the punchline can help resolve the incongruity. In a main study and a preregistered replication, participants read jokes and nonfunny controls embedded in filler texts and responded to thought probes that assessed intentional and unintentional mind-wandering. Results were consistent across the two studies: When the reader was not mind-wandering, jokes elicited more rereading (from the punchline) than the nonfunny controls did, and had a recall advantage over the nonfunny controls. During mind-wandering, however, the additional eye movement processing and the recall advantage of jokes were generally reduced. These results show that mind-wandering is associated with reduced rereading, which is important for resolving higher level comprehension difficulties. (PsycInfo Database Record (c) 2020 APA, all rights reserved). |
Guangyao Zhang; Binke Yuan; Huimin Hua; Ya Lou; Nan Lin; Xingshan Li Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity Journal Article Brain and Language, 213, pp. 1–10, 2021. @article{Zhang2021, title = {Individual differences in first-pass fixation duration in reading are related to resting-state functional connectivity}, author = {Guangyao Zhang and Binke Yuan and Huimin Hua and Ya Lou and Nan Lin and Xingshan Li}, doi = {10.1016/j.bandl.2020.104893}, year = {2021}, date = {2021-01-01}, journal = {Brain and Language}, volume = {213}, pages = {1--10}, publisher = {Elsevier Inc.}, abstract = {Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Although there are considerable individual differences in eye movements during text reading, their neural correlates remain unclear. In this study, we investigated the relationship between the first-pass fixation duration (FPFD) in natural reading and resting-state functional connectivity (RSFC) in the brain. We defined the brain regions associated with early visual processing, word identification, attention shifts, and oculomotor control as seed regions. The results showed that individual FPFDs were positively correlated with individual RSFCs between the early visual network, visual word form area, and eye movement control/dorsal attention network. Our findings provide new evidence on the neural correlates of eye movements in text reading and indicate that individual differences in fixation time may shape the RSFC differences in the brain through the time-on-task effect and the mechanism of Hebbian learning. |
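The core analysis in the study above reduces to correlating one eye-movement summary per participant with one connectivity value per participant. A compact sketch with scipy follows, using hypothetical example arrays (fpfd: each participant's mean first-pass fixation duration; rsfc: Fisher-z connectivity for one seed pair); the values are placeholders, not data from the paper.

    import numpy as np
    from scipy import stats

    fpfd = np.array([231.0, 245.5, 210.2, 260.8, 238.1])  # mean FPFD per participant, ms
    rsfc = np.array([0.42, 0.55, 0.31, 0.61, 0.47])       # Fisher-z RSFC per participant

    r, p = stats.pearsonr(fpfd, rsfc)
    print(f"FPFD-RSFC correlation: r = {r:.2f}, p = {p:.3f}")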
Gu Zhao; Qiang Liu; Jun Jiao; Peiling Zhou; Hong Li; Hong-jin Sun Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings Journal Article Journal of Vision, 12 (6), pp. 11–11, 2012. @article{Zhao2012, title = {Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings}, author = {Gu Zhao and Qiang Liu and Jun Jiao and Peiling Zhou and Hong Li and Hong-jin Sun}, doi = {10.1159/000171501}, year = {2012}, date = {2012-01-01}, journal = {Journal of Vision}, volume = {12}, number = {6}, pages = {11--11}, abstract = {Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the use of the RT × Set Size function. There are divergent views on whether the contextual cueing effect is driven by attentional guidance or facilitation of initial perceptual processing or response selection. To explore this question, we used eye movement recording in this study, which offers information about the substages of the search task. The results suggest that the contextual cueing effect is driven mainly by attentional guidance, and facilitation of response selection also plays a role.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the use of the RT × Set Size function. There are divergent views on whether the contextual cueing effect is driven by attentional guidance or facilitation of initial perceptual processing or response selection. To explore this question, we used eye movement recording in this study, which offers information about the substages of the search task. The results suggest that the contextual cueing effect is driven mainly by attentional guidance, and facilitation of response selection also plays a role. |
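The RT × Set Size function mentioned above indexes search efficiency as the slope of response time over display size; a shallower slope for repeated than for novel configurations is the usual signature of attentional guidance. A sketch of the slope comparison with numpy, using hypothetical condition means rather than the study's data:

    import numpy as np

    set_sizes   = np.array([8, 12, 16])
    rt_repeated = np.array([820.0, 880.0, 945.0])   # ms, hypothetical condition means
    rt_novel    = np.array([850.0, 975.0, 1090.0])

    slope_rep, _ = np.polyfit(set_sizes, rt_repeated, 1)
    slope_nov, _ = np.polyfit(set_sizes, rt_novel, 1)
    # A reliably shallower repeated-display slope suggests guidance of attention.
    print(f"repeated: {slope_rep:.1f} ms/item, novel: {slope_nov:.1f} ms/item")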
Jingjing Zhao; Yonghui Wang; Donglai Liu; Liang Zhao; Peng Liu Strength of object representation: its key role in object-based attention for determining the competition result between Gestalt and top-down objects Journal Article Attention, Perception, and Psychophysics, 77 (7), pp. 2284–2292, 2015. @article{Zhao2015, title = {Strength of object representation: its key role in object-based attention for determining the competition result between Gestalt and top-down objects}, author = {Jingjing Zhao and Yonghui Wang and Donglai Liu and Liang Zhao and Peng Liu}, doi = {10.3758/s13414-015-0922-5}, year = {2015}, date = {2015-01-01}, journal = {Attention, Perception, and Psychophysics}, volume = {77}, number = {7}, pages = {2284--2292}, abstract = {It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects. |
Jing Zhao; Hang Yang; Xuchu Weng; Zhiguo Wang Emergent attentional bias toward visual word forms in the environment: Evidence from eye movements Journal Article Frontiers in Psychology, 9, pp. 1–7, 2018. @article{Zhao2018, title = {Emergent attentional bias toward visual word forms in the environment: Evidence from eye movements}, author = {Jing Zhao and Hang Yang and Xuchu Weng and Zhiguo Wang}, doi = {10.3389/fpsyg.2018.01378}, year = {2018}, date = {2018-01-01}, journal = {Frontiers in Psychology}, volume = {9}, pages = {1--7}, abstract = {Young children are frequently exposed to environmental prints (e.g., billboards and product labels) that contain visual word forms on a daily basis. As the visual word forms in environmental prints are frequently used to convey information critical to an individual's survival and wellbeing (e.g., "STOP" in the stop sign), it is conceivable that an attentional bias toward words in the environment may emerge as the reading ability of young children develops. Empirical findings relevant to this issue, however, are inconclusive so far. The present study examines this issue in children in the early stages of formal reading training (grades 1, 3, and 5) with the eye-tracking technique. Children viewed images with word and non-word visual information (environmental prints) and images with the same words in standard typeface on a plain background (standard prints). For children in grade 1, the latency of their first fixations on words in environmental prints was longer than those in standard prints. This latency cost, however, was markedly reduced in grades 3 and 5, suggesting that in older children an attentional bias toward words has emerged to help filter out the non-word visual information in environmental prints. Importantly, this attentional bias was found to correlate moderately with word reading ability. These findings show that an attentional bias toward visual word forms emerges shortly after the start of formal schooling and it is closely linked to the development of reading skills.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Young children are frequently exposed to environmental prints (e.g., billboards and product labels) that contain visual word forms on a daily basis. As the visual word forms in environmental prints are frequently used to convey information critical to an individual's survival and wellbeing (e.g., "STOP" in the stop sign), it is conceivable that an attentional bias toward words in the environment may emerge as the reading ability of young children develops. Empirical findings relevant to this issue, however, are inconclusive so far. The present study examines this issue in children in the early stages of formal reading training (grades 1, 3, and 5) with the eye-tracking technique. Children viewed images with word and non-word visual information (environmental prints) and images with the same words in standard typeface on a plain background (standard prints). For children in grade 1, the latency of their first fixations on words in environmental prints was longer than those in standard prints. This latency cost, however, was markedly reduced in grades 3 and 5, suggesting that in older children an attentional bias toward words has emerged to help filter out the non-word visual information in environmental prints. Importantly, this attentional bias was found to correlate moderately with word reading ability.
These findings show that an attentional bias toward visual word forms emerges shortly after the start of formal schooling and it is closely linked to the development of reading skills. |
Sainan Zhao; Lin Li; Min Chang; Qianqian Xu; Kuo Zhang; Jingxin Wang; Kevin B Paterson Older adults make greater use of word predictability in Chinese reading Journal Article Psychology and Aging, 34 (6), pp. 780–790, 2019. @article{Zhao2019, title = {Older adults make greater use of word predictability in Chinese reading}, author = {Sainan Zhao and Lin Li and Min Chang and Qianqian Xu and Kuo Zhang and Jingxin Wang and Kevin B Paterson}, doi = {10.1037/pag0000382}, year = {2019}, date = {2019-01-01}, journal = {Psychology and Aging}, volume = {34}, number = {6}, pages = {780--790}, abstract = {An influential account of normative aging effects on reading holds that older adults make greater use of contextual predictability to facilitate word identification. However, supporting evidence is scarce. Accordingly, we used measures of eye movements to experimentally investigate age differences in word predictability effects in Chinese reading, as this nonalphabetic language has characteristics that may promote such effects. Word-skipping rates were higher and reading times lower for more highly predictable words for both age groups. Effects of word predictability on word skipping did not differ across the 2 adult age groups. However, word predictability effects in reading time measures sensitive to both lexical identification (i.e., gaze duration) and contextual integration (i.e., regression-path reading times) were larger for the older than younger adults. Our findings therefore reveal that older Chinese readers make greater use of a word's predictability to facilitate both its lexical identification and integration with the prior sentence context.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An influential account of normative aging effects on reading holds that older adults make greater use of contextual predictability to facilitate word identification. However, supporting evidence is scarce. Accordingly, we used measures of eye movements to experimentally investigate age differences in word predictability effects in Chinese reading, as this nonalphabetic language has characteristics that may promote such effects. Word-skipping rates were higher and reading times lower for more highly predictable words for both age groups. Effects of word predictability on word skipping did not differ across the 2 adult age groups. However, word predictability effects in reading time measures sensitive to both lexical identification (i.e., gaze duration) and contextual integration (i.e., regression-path reading times) were larger for the older than younger adults. Our findings therefore reveal that older Chinese readers make greater use of a word's predictability to facilitate both its lexical identification and integration with the prior sentence context. |
Sijia Zhao; Gabriela Bury; Alice Milne; Maria Chait Pupillometry as an objective measure of sustained attention in young and older listeners Journal Article Trends in Hearing, 23, 2019. @article{Zhao2019a, title = {Pupillometry as an objective measure of sustained attention in young and older listeners}, author = {Sijia Zhao and Gabriela Bury and Alice Milne and Maria Chait}, doi = {10.1101/579540}, year = {2019}, date = {2019-01-01}, journal = {Trends in Hearing}, volume = {23}, abstract = {The ability to sustain attention on a task-relevant sound-source whilst avoiding distraction from other concurrent sounds is fundamental to listening in crowded environments. To isolate this aspect of hearing we designed a paradigm that continuously measured behavioural and pupillometry responses during 25-second-long trials in young (18-35 yo) and older (63-79 yo) participants. The auditory stimuli consisted of a number (1, 2 or 3) of concurrent, spectrally distinct tone streams. On each trial, participants detected brief silent gaps in one of the streams whilst resisting distraction from the others. Behavioural performance demonstrated increasing difficulty with time-on-task and with number/proximity of distractor streams. In young listeners (N=20), pupillometry revealed that pupil diameter (on the group and individual level) was dynamically modulated by instantaneous task difficulty such that periods where behavioural performance revealed a strain on sustained attention, were also accompanied by increased pupil diameter. Only trials on which participants performed successfully were included in the pupillometry analysis. Therefore, the observed effects reflect consequences of task demands as opposed to failure to attend. In line with existing reports, we observed global changes to pupil dynamics in the older group, including decreased pupil diameter, a limited dilation range, and reduced temporal variability. However, despite these changes, the older group showed similar effects of attentive tracking to those observed in the younger listeners. Overall, our results demonstrate that pupillometry can be a reliable and time-sensitive measure of the effort associated with attentive tracking over long durations in both young and (with some caveats) older listeners.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to sustain attention on a task-relevant sound-source whilst avoiding distraction from other concurrent sounds is fundamental to listening in crowded environments. To isolate this aspect of hearing we designed a paradigm that continuously measured behavioural and pupillometry responses during 25-second-long trials in young (18-35 yo) and older (63-79 yo) participants. The auditory stimuli consisted of a number (1, 2 or 3) of concurrent, spectrally distinct tone streams. On each trial, participants detected brief silent gaps in one of the streams whilst resisting distraction from the others. Behavioural performance demonstrated increasing difficulty with time-on-task and with number/proximity of distractor streams. In young listeners (N=20), pupillometry revealed that pupil diameter (on the group and individual level) was dynamically modulated by instantaneous task difficulty such that periods where behavioural performance revealed a strain on sustained attention, were also accompanied by increased pupil diameter. Only trials on which participants performed successfully were included in the pupillometry analysis.
Therefore, the observed effects reflect consequences of task demands as opposed to failure to attend. In line with existing reports, we observed global changes to pupil dynamics in the older group, including decreased pupil diameter, a limited dilation range, and reduced temporal variability. However, despite these changes, the older group showed similar effects of attentive tracking to those observed in the younger listeners. Overall, our results demonstrate that pupillometry can be a reliable and time-sensitive measure of the effort associated with attentive tracking over long durations in both young and (with some caveats) older listeners. |
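Pupillometry analyses of this kind typically baseline-correct each trial before averaging, so that slow drifts and the older group's smaller overall diameter do not masquerade as task effects. A minimal numpy sketch follows, assuming a trials-by-samples array and a hypothetical pre-stimulus window; it illustrates the generic approach, not the paper's exact pipeline.

    import numpy as np

    def baseline_correct(trials, n_baseline=60):
        """Subtract each trial's pre-stimulus mean (first n_baseline samples).
        trials has shape (n_trials, n_samples)."""
        return trials - trials[:, :n_baseline].mean(axis=1, keepdims=True)

    def mean_trace(trials, correct_mask):
        """Average baseline-corrected traces over correct trials only,
        mirroring the restriction to successful trials described above."""
        return baseline_correct(trials[correct_mask]).mean(axis=0)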
Sijia Zhao; Maria Chait; Frederic Dick; Peter Dayan; Shigeto Furukawa; Hsin-I Liao Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences Journal Article Nature Communications, 10, pp. 4030, 2019. @article{Zhao2019b, title = {Pupil-linked phasic arousal evoked by violation but not emergence of regularity within rapid sound sequences}, author = {Sijia Zhao and Maria Chait and Frederic Dick and Peter Dayan and Shigeto Furukawa and Hsin-I Liao}, doi = {10.1038/s41467-019-12048-1}, year = {2019}, date = {2019-12-01}, journal = {Nature Communications}, volume = {10}, pages = {4030}, publisher = {Springer Science and Business Media LLC}, abstract = {The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty: events that deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal, initiating model-resetting. However, existing evidence comes from paradigms in which participants actively monitored stimulus statistics. To determine whether Norepinephrine routinely reports the statistical structure of our surroundings, even when not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern-changes associated with abrupt structural violations vs. emergence of regular structure. Phasic pupil dilations (PDR) were monitored to assess Norepinephrine. We reveal a remarkable specificity: When not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that Norepinephrine tracks unexpected uncertainty on rapid time scales relevant to sensory signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ability to track the statistics of our surroundings is a key computational challenge. A prominent theory proposes that the brain monitors for unexpected uncertainty: events that deviate substantially from model predictions, indicating model failure. Norepinephrine is thought to play a key role in this process by serving as an interrupt signal, initiating model-resetting. However, existing evidence comes from paradigms in which participants actively monitored stimulus statistics. To determine whether Norepinephrine routinely reports the statistical structure of our surroundings, even when not behaviourally relevant, we used rapid tone-pip sequences that contained salient pattern-changes associated with abrupt structural violations vs. emergence of regular structure. Phasic pupil dilations (PDR) were monitored to assess Norepinephrine. We reveal a remarkable specificity: When not behaviourally relevant, only abrupt structural violations evoke a PDR. The results demonstrate that Norepinephrine tracks unexpected uncertainty on rapid time scales relevant to sensory signals. |
Sijia Zhao; Nga Wai Yum; Lucas Benjamin; Elia Benhamou; Makoto Yoneya; Shigeto Furukawa; Frederic Dick; Malcolm Slaney; Maria Chait Rapid ocular responses are modulated by bottom-up-driven auditory salience Journal Article Journal of Neuroscience, 39 (39), pp. 7703–7714, 2019. @article{Zhao2019c, title = {Rapid ocular responses are modulated by bottom-up-driven auditory salience}, author = {Sijia Zhao and Nga Wai Yum and Lucas Benjamin and Elia Benhamou and Makoto Yoneya and Shigeto Furukawa and Frederic Dick and Malcolm Slaney and Maria Chait}, doi = {10.1523/JNEUROSCI.0776-19.2019}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neuroscience}, volume = {39}, number = {39}, pages = {7703--7714}, abstract = {Despite the prevalent use of alerting sounds in alarms and human–machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience — the automatic attraction of attention by sound — and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the prevalent use of alerting sounds in alarms and human–machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience — the automatic attraction of attention by sound — and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response.
These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. |
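Measuring microsaccadic inhibition requires detecting microsaccades in fixation data; the standard approach is a velocity-threshold detector in the style of Engbert and Kliegl (2003), sketched below with numpy. This is the generic published algorithm, not necessarily the exact pipeline of the study above; lam and min_dur are conventional but adjustable parameters.

    import numpy as np

    def detect_microsaccades(x, y, rate_hz, lam=6.0, min_dur=3):
        """Return (start, end) sample indices of candidate microsaccades.
        x, y are numpy arrays of gaze position (degrees) during fixation."""
        # Five-sample moving-window velocity estimate (Engbert & Kliegl, 2003)
        vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) * rate_hz / 6.0
        vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) * rate_hz / 6.0
        # Median-based (robust) velocity SD, one threshold per axis
        sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
        sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
        above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i                      # run of super-threshold samples begins
            elif not flag and start is not None:
                if i - start >= min_dur:       # keep runs of sufficient duration
                    events.append((start, i))
                start = None
        if start is not None and len(above) - start >= min_dur:
            events.append((start, len(above)))  # run extending to the end of data
        return events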
Bin Zhao; Jinfeng Huang; Gaoyan Zhang; Jianwu Dang; Minbo Chen; Yingjian Fu; Longbiao Wang Brain network reconstruction of speech production based on electro-encephalography and eye movement Journal Article Acoustical Science and Technology, 41 (1), pp. 349–350, 2020. @article{Zhao2020, title = {Brain network reconstruction of speech production based on electro-encephalography and eye movement}, author = {Bin Zhao and Jinfeng Huang and Gaoyan Zhang and Jianwu Dang and Minbo Chen and Yingjian Fu and Longbiao Wang}, doi = {10.1250/ast.41.349}, year = {2020}, date = {2020-01-01}, journal = {Acoustical Science and Technology}, volume = {41}, number = {1}, pages = {349--350}, abstract = {To fully understand the brain mechanism associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics during the whole speech processing range [1]. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions, while lacking information of the time courses [2]. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) with high temporal resolution are inferior in source localization, and are also easily buried in electromagnetic artifacts from muscular actions in articulation, thus interfering with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavior data (eye movement and speech) were used for segmenting cognitive stages. EEG data went through independent component analyses (ICA), component clustering, and time-varying (adaptive) multivariate autoregressive modeling [3] for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and literature review were followed to interpret the brain dynamic results for better understanding the speech functions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } To fully understand the brain mechanism associated with speech functions, it is necessary to unfold the spatiotemporal brain dynamics during the whole speech processing range [1]. However, previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies focused on cerebral activation patterns and their regional functions, while lacking information of the time courses [2]. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) with high temporal resolution are inferior in source localization, and are also easily buried in electromagnetic artifacts from muscular actions in articulation, thus interfering with the analysis. In this study, we introduced a novel multimodal data acquisition system to collect EEG, eye movement, and speech in an oral reading task. The behavior data (eye movement and speech) were used for segmenting cognitive stages. EEG data went through independent component analyses (ICA), component clustering, and time-varying (adaptive) multivariate autoregressive modeling [3] for estimating the spatiotemporal causal interactions among brain regions in each cognitive and speech process. Statistical analyses and literature review were followed to interpret the brain dynamic results for better understanding the speech functions. |
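The multivariate autoregressive modelling mentioned above estimates directed interactions by regressing each signal on the recent past of all signals. As a simplified, stationary illustration of the same idea (the study itself uses a time-varying, adaptive variant), a vector autoregression can be fit with statsmodels; the component data here are random placeholders:

    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    comps = rng.standard_normal((1000, 4))  # samples x EEG components (placeholder data)

    results = VAR(comps).fit(maxlags=10, ic='aic')  # model order chosen by AIC
    print(results.coefs.shape)  # (lag, n_components, n_components) coefficient matrices
    # Off-diagonal coefficients (and Granger tests) index directed influence:
    print(results.test_causality(caused=0, causing=1).summary())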
Chenzhu Zhao Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments Journal Article International Journal of Frontiers in Sociology, 2 (7), pp. 1–12, 2020. @article{Zhao2020a, title = {Near or far? The effect of latest booking time on hotel booking intention: Based on eye-tracking experiments}, author = {Chenzhu Zhao}, doi = {10.25236/IJFS.2020.020701}, year = {2020}, date = {2020-01-01}, journal = {International Journal of Frontiers in Sociology}, volume = {2}, number = {7}, pages = {1--12}, abstract = {Online travel agencies (OTAs) depend on marketing cues to reduce the consumer uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided to the consumer has a significant impact on purchasing decisions. This study aims to explore the effect of LBT on consumer visual attention and booking intention along with the moderation effect of online comment valence (OCV). Since eye movement is bound up with the transfer of visual attention, eye-tracking is used to record consumers' visual attention. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design to conduct the experiments. The main findings showed the following: (1) LBT can markedly increase visual attention to the whole advertisement and improve booking intention; (2) OCV moderates the effect of LBT on both visual attention to the whole advertisement and booking intention. Only when OCV is medium or high does LBT markedly improve attention to the whole advertisement and increase consumers' booking intention. The experiment results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect when OCV is low.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Online travel agencies (OTAs) depend on marketing cues to reduce the consumer uncertainty perceptions of online travel-related products. The latest booking time (LBT) provided to the consumer has a significant impact on purchasing decisions. This study aims to explore the effect of LBT on consumer visual attention and booking intention along with the moderation effect of online comment valence (OCV). Since eye movement is bound up with the transfer of visual attention, eye-tracking is used to record consumers' visual attention. Our research used a 3 (LBT: near vs. medium vs. far) × 3 (OCV: high vs. medium vs. low) design to conduct the experiments. The main findings showed the following: (1) LBT can markedly increase visual attention to the whole advertisement and improve booking intention; (2) OCV moderates the effect of LBT on both visual attention to the whole advertisement and booking intention. Only when OCV is medium or high does LBT markedly improve attention to the whole advertisement and increase consumers' booking intention. The experiment results show that OTAs can improve advertising effectiveness by adding an LBT label, but LBT has no effect when OCV is low. |
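Visual-attention measures in advertisement studies like this one reduce to dwell time inside areas of interest (AOIs). A short numpy sketch of percentage dwell time over a rectangular AOI follows; the coordinate layout and function name are hypothetical illustrations, not the study's materials.

    import numpy as np

    def pct_dwell(gx, gy, aoi):
        """Percentage of gaze samples inside a rectangular AOI.
        aoi = (x_min, y_min, x_max, y_max); gx, gy are sample coordinate arrays."""
        x0, y0, x1, y1 = aoi
        inside = (gx >= x0) & (gx <= x1) & (gy >= y0) & (gy <= y1)
        return 100.0 * np.mean(inside)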
Sainan Zhao; Lin Li; Min Chang; Jingxin Wang; Kevin B Paterson A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words Journal Article Quarterly Journal of Experimental Psychology, 74 (1), pp. 68–78, 2021. @article{Zhao2021, title = {A further look at ageing and word predictability effects in Chinese reading: Evidence from one-character words}, author = {Sainan Zhao and Lin Li and Min Chang and Jingxin Wang and Kevin B Paterson}, doi = {10.1177/1747021820951131}, year = {2021}, date = {2021-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {74}, number = {1}, pages = {68--78}, abstract = {Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Older adults are thought to compensate for slower lexical processing by making greater use of contextual knowledge, relative to young adults, to predict words in sentences. Accordingly, compared to young adults, older adults should produce larger contextual predictability effects in reading times and skipping rates for words. Empirical support for this account is nevertheless scarce. Perhaps the clearest evidence to date comes from a recent Chinese study showing larger word predictability effects for older adults in reading times but not skipping rates for two-character words. However, one possibility is that the absence of a word-skipping effect in this experiment was due to the older readers skipping words infrequently because of difficulty processing two-character words parafoveally. We therefore took a further look at this issue, using one-character target words to boost word-skipping. Young (18–30 years) and older (65+ years) adults read sentences containing a target word that was either highly predictable or less predictable from the prior sentence context. Our results replicate the finding that older adults produce larger word predictability effects in reading times but not word-skipping, despite high skipping rates. We discuss these findings in relation to ageing effects on reading in different writing systems. |
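The skipping rates and reading-time measures reported in ageing studies like the one above can be derived from a fixation report. A pandas sketch follows, assuming a dataframe of first-pass fixations with hypothetical column names (subject, trial, word_id, duration_ms); it shows the generic computation, not the authors' scripts.

    import pandas as pd

    def target_measures(fix, target_id, n_trials):
        """Mean gaze duration and skipping rate for a target word.
        fix holds first-pass fixations only; n_trials is the number of
        trials on which the target appeared."""
        on_target = fix[fix.word_id == target_id]
        gaze = on_target.groupby(['subject', 'trial'])['duration_ms'].sum()
        skip_rate = 1.0 - len(gaze) / n_trials  # trials with no first-pass fixation
        return gaze.mean(), skip_rate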
Wei Zheng; Yizhen Wang; Xiaolu Wang The effect of salience on Chinese pun comprehension: A visual world paradigm study Journal Article Frontiers in Psychology, 11, pp. 1–12, 2020. @article{Zheng2020, title = {The effect of salience on Chinese pun comprehension: A visual world paradigm study}, author = {Wei Zheng and Yizhen Wang and Xiaolu Wang}, doi = {10.3389/fpsyg.2020.00116}, year = {2020}, date = {2020-01-01}, journal = {Frontiers in Psychology}, volume = {11}, pages = {1--12}, abstract = {The present study adopted the printed-word visual world paradigm to investigate the salience effect on Chinese pun comprehension. In such an experiment, participants listen to a spoken sentence while looking at a visual display of four printed words (including a semantic competitor, a phonological competitor, and two unrelated distractors). Previous studies based on alphabetic languages have found robust phonological effects (participants fixated phonological competitors more than distractors during the unfolding of the spoken target words), while controversy remains regarding the existence of a similar semantic effect. A recent Chinese study reported reliable semantic effects in two experiments using this paradigm, suggesting that Chinese participants could actively map the semantic input from the auditory modality with the semantic information retrieved from printed words. In light of their study, we designed an experiment with two conditions: a replication condition to test the validity of using the printed-word visual world paradigm in Chinese semantic research, and a pun condition to assess the role played by salience during pun comprehension. Indeed, global analyses have revealed robust semantic effects in both experimental conditions, where participants were found more attracted to the semantic competitors than to the distractors with the emergence of target words. More importantly, the local analyses from the pun condition have shown that the participants were more attracted to the semantic competitors related to the salient meaning of the ambiguous word in a pun than to those related to the less salient meanings within 200 ms after target word offset. This finding suggests that the salient meaning of the ambiguous word in a pun is activated and accessed faster than its less salient counterpart. The initial advantage observed in the present study is consistent with the prediction of the graded salience hypothesis rather than the direct access model.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study adopted the printed-word visual world paradigm to investigate the salience effect on Chinese pun comprehension. In such an experiment, participants listen to a spoken sentence while looking at a visual display of four printed words (including a semantic competitor, a phonological competitor, and two unrelated distractors). Previous studies based on alphabetic languages have found robust phonological effects (participants fixated phonological competitors more than distractors during the unfolding of the spoken target words), while controversy remains regarding the existence of a similar semantic effect. A recent Chinese study reported reliable semantic effects in two experiments using this paradigm, suggesting that Chinese participants could actively map the semantic input from the auditory modality with the semantic information retrieved from printed words.
In light of their study, we designed an experiment with two conditions: a replication condition to test the validity of using the printed-word visual world paradigm in Chinese semantic research, and a pun condition to assess the role played by salience during pun comprehension. Indeed, global analyses have revealed robust semantic effects in both experimental conditions, where participants were found more attracted to the semantic competitors than to the distractors with the emergence of target words. More importantly, the local analyses from the pun condition have shown that the participants were more attracted to the semantic competitors related to the salient meaning of the ambiguous word in a pun than to those related to the less salient meanings within 200 ms after target word offset. This finding suggests that the salient meaning of the ambiguous word in a pun is activated and accessed faster than its less salient counterpart. The initial advantage observed in the present study is consistent with the prediction of the graded salience hypothesis rather than the direct access model. |
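Visual-world analyses of this kind compare the proportion of fixations on each printed word across time bins aligned to the spoken target. A pandas sketch follows, assuming a long-format sample table with hypothetical columns time_ms (relative to target onset) and region (e.g., 'semantic', 'phonological', 'distractor'); it illustrates the generic computation rather than the authors' analysis code.

    import pandas as pd

    def fixation_proportions(samples, bin_ms=50):
        """Proportion of gaze samples on each region per time bin,
        time-locked to target-word onset (time_ms = 0)."""
        samples = samples.assign(bin=(samples.time_ms // bin_ms) * bin_ms)
        counts = samples.groupby(['bin', 'region']).size().unstack(fill_value=0)
        return counts.div(counts.sum(axis=1), axis=0)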
Peng Zhou; Liqun Gao Scope processing in Chinese Journal Article Journal of Psycholinguistic Research, 38 (1), pp. 11–24, 2009. @article{Zhou2009, title = {Scope processing in Chinese}, author = {Peng Zhou and Liqun Gao}, doi = {10.1007/s10936-008-9079-x}, year = {2009}, date = {2009-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {38}, number = {1}, pages = {11--24}, abstract = {The standard view maintains that quantifier scope interpretation results from an interaction between different modules: the syntax, the semantics as well as the pragmatics. Thus, by examining the mechanism of quantifier scope interpretation, we will certainly gain some insight into how these different modules interact with one another. To investigate this, two experiments, an offline judgment task and an eye-tracking experiment, were conducted to investigate the interpretation of doubly quantified sentences in Chinese, like Mei-ge qiangdao dou qiang-le yi-ge yinhang (Every robber robbed a bank). According to current literature, doubly quantified sentences in Chinese like the above are unambiguous, which can only be interpreted as "for every robber x, there is a bank y, such that x robbed y" (surface scope reading), contrary to their ambiguous English counterparts, which also allow the interpretation that "there is a bank y, such that for every robber x, x robbed y" (inverse scope reading). Specifically, three questions were examined, that is, (i) What is the initial reading of doubly quantified sentences in Chinese? (ii) Is inverse scope interpretation available if appropriate contexts are provided? (iii) What are the processing time courses engaged in quantifier scope interpretation? The results showed that (i) Initially, the language processor computes the surface scope representation and the inverse scope representation in parallel; thus, doubly quantified sentences in Chinese are ambiguous; (ii) The discourse information is not employed in initial processing of relative scope; it serves to evaluate the two representations in reanalysis; (iii) The lexical information of verbs affects their scope-taking patterns. We suggest that these findings provide evidence for the Modular Model, one of the major contenders in the literature on sentence processing.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The standard view maintains that quantifier scope interpretation results from an interaction between different modules: the syntax, the semantics as well as the pragmatics. Thus, by examining the mechanism of quantifier scope interpretation, we will certainly gain some insight into how these different modules interact with one another. To investigate this, two experiments, an offline judgment task and an eye-tracking experiment, were conducted to investigate the interpretation of doubly quantified sentences in Chinese, like Mei-ge qiangdao dou qiang-le yi-ge yinhang (Every robber robbed a bank). According to current literature, doubly quantified sentences in Chinese like the above are unambiguous, which can only be interpreted as "for every robber x, there is a bank y, such that x robbed y" (surface scope reading), contrary to their ambiguous English counterparts, which also allow the interpretation that "there is a bank y, such that for every robber x, x robbed y" (inverse scope reading). Specifically, three questions were examined, that is, (i) What is the initial reading of doubly quantified sentences in Chinese?
(ii) Is the inverse scope interpretation available if appropriate contexts are provided? (iii) What are the processing time courses engaged in quantifier scope interpretation? The results showed that (i) initially, the language processor computes the surface scope representation and the inverse scope representation in parallel, so that doubly quantified sentences in Chinese are ambiguous; (ii) the discourse information is not employed in the initial processing of relative scope but serves to evaluate the two representations in reanalysis; (iii) the lexical information of verbs affects their scope-taking patterns. We suggest that these findings provide evidence for the Modular Model, one of the major contenders in the literature on sentence processing. |
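For readers less used to scope notation, the two readings discussed in the abstract above can be written out in standard first-order logic (textbook notation, not the paper's):

    \forall x\,[robber(x) \rightarrow \exists y\,(bank(y) \land robbed(x, y))]   % surface scope: each robber may rob a different bank
    \exists y\,[bank(y) \land \forall x\,(robber(x) \rightarrow robbed(x, y))]   % inverse scope: a single bank robbed by every robber

The inverse reading entails the surface reading but not vice versa, which is why supporting context can adjudicate between the two representations during reanalysis.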
Huihui Zhou; Robert Desimone Feature-based attention in the Frontal Eye Field and area V4 during visual search Journal Article Neuron, 70 (6), pp. 1205–1217, 2011. @article{Zhou2011, title = {Feature-based attention in the Frontal Eye Field and area V4 during visual search}, author = {Huihui Zhou and Robert Desimone}, doi = {10.1016/j.neuron.2011.04.032}, year = {2011}, date = {2011-01-01}, journal = {Neuron}, volume = {70}, number = {6}, pages = {1205--1217}, publisher = {Elsevier Inc.}, abstract = {When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target. |
Peng Zhou; Stephen Crain; Likan Zhan Sometimes children are as good as adults -- The pragmatic use of prosody in children's on-line sentence processing Journal Article Journal of Memory and Language, 67 (8), pp. 149–164, 2012. @article{Zhou2012a, title = {Sometimes children are as good as adults -- The pragmatic use of prosody in children's on-line sentence processing}, author = {Peng Zhou and Stephen Crain and Likan Zhan}, year = {2012}, date = {2012-01-01}, journal = {Journal of Memory and Language}, volume = {67}, number = {8}, pages = {149--164}, abstract = {This study examined 4-year-old Mandarin-speaking children's sensitivity to prosodic cues in resolving speech act ambiguities, using eye-movement recordings. Most previous on-line studies have focused on children's use of prosody in resolving structural ambiguities. Although children have been found to be sensitive to prosodic information, they use such information less effectively than adults in on-line sentence processing. The present study takes advantage of special properties of Mandarin Chinese to investigate the role of prosody in children's on-line processing of ambiguities in which prosody serves to signal the illocutionary meaning of an utterance (i.e., whether the speaker is asking a question or making a statement). We found that the effect of prosody in this case was as robust in children as it was in adults. This suggests that children are as sensitive as adults in using prosody in on-line sentence processing, when prosody is used to resolve a pragmatic ambiguity.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study examined 4-year-old Mandarin-speaking children's sensitivity to prosodic cues in resolving speech act ambiguities, using eye-movement recordings. Most previous on-line studies have focused on children's use of prosody in resolving structural ambiguities. Although children have been found to be sensitive to prosodic information, they use such information less effectively than adults in on-line sentence processing. The present study takes advantage of special properties of Mandarin Chinese to investigate the role of prosody in children's on-line processing of ambiguities in which prosody serves to signal the illocutionary meaning of an utterance (i.e., whether the speaker is asking a question or making a statement). We found that the effect of prosody in this case was as robust in children as it was in adults. This suggests that children are as sensitive as adults in using prosody in on-line sentence processing, when prosody is used to resolve a pragmatic ambiguity. |
Peng Zhou; Yi Su; Stephen Crain; Liqun Gao; Likan Zhan Children's use of phonological information in ambiguity resolution: A view from Mandarin Chinese Journal Article Journal of Child Language, 39 (4), pp. 687–730, 2012. @article{Zhou2012b, title = {Children's use of phonological information in ambiguity resolution: A view from Mandarin Chinese}, author = {Peng Zhou and Yi Su and Stephen Crain and Liqun Gao and Likan Zhan}, doi = {10.1017/S0305000911000249}, year = {2012}, date = {2012-01-01}, journal = {Journal of Child Language}, volume = {39}, number = {4}, pages = {687--730}, abstract = {How do children develop the mapping between prosody and other levels of linguistic knowledge? This question has received considerable attention in child language research. In the present study two experiments were conducted to investigate four- to five-year-old Mandarin-speaking children's sensitivity to prosody in ambiguity resolution. Experiment 1 used eye-tracking to assess children's use of stress in resolving structural ambiguities. Experiment 2 took advantage of special properties of Mandarin to investigate whether children can use intonational cues to resolve ambiguities involving speech acts. The results of our experiments show that children's use of prosodic information in ambiguity resolution varies depending on the type of ambiguity involved. Children can use prosodic information more effectively to resolve speech act ambiguities than to resolve structural ambiguities. This finding suggests that the mapping between prosody and semantics/pragmatics in young children is better established than the mapping between prosody and syntax.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How do children develop the mapping between prosody and other levels of linguistic knowledge? This question has received considerable attention in child language research. In the present study two experiments were conducted to investigate four- to five-year-old Mandarin-speaking children's sensitivity to prosody in ambiguity resolution. Experiment 1 used eye-tracking to assess children's use of stress in resolving structural ambiguities. Experiment 2 took advantage of special properties of Mandarin to investigate whether children can use intonational cues to resolve ambiguities involving speech acts. The results of our experiments show that children's use of prosodic information in ambiguity resolution varies depending on the type of ambiguity involved. Children can use prosodic information more effectively to resolve speech act ambiguities than to resolve structural ambiguities. This finding suggests that the mapping between prosody and semantics/pragmatics in young children is better established than the mapping between prosody and syntax. |
Yang Zhou; Yining Liu; Wangzikang Zhang; Mingsha Zhang Asymmetric influence of egocentric representation onto allocentric perception Journal Article Journal of Neuroscience, 32 (24), pp. 8354–8360, 2012. @article{Zhou2012c, title = {Asymmetric influence of egocentric representation onto allocentric perception}, author = {Yang Zhou and Yining Liu and Wangzikang Zhang and Mingsha Zhang}, doi = {10.1523/JNEUROSCI.0829-12.2012}, year = {2012}, date = {2012-01-01}, journal = {Journal of Neuroscience}, volume = {32}, number = {24}, pages = {8354--8360}, abstract = {Objects in the visual world can be represented in both egocentric and allocentric coordinates. Previous studies have found that allocentric representation can affect the accuracy of spatial judgment relative to an egocentric frame, but not vice versa. Here we asked whether egocentric representation influenced the processing speed of allocentric perception. We measured the manual reaction time of human subjects in a position discrimination task in which the behavioral response purely relied on the target's allocentric location, independent of its egocentric position. We used two conditions of stimulus location: the compatible condition (allocentric left and egocentric left, or allocentric right and egocentric right) and the incompatible condition (allocentric left and egocentric right, or allocentric right and egocentric left). We found that egocentric representation markedly influenced allocentric perception in three ways. First, in a given egocentric location, allocentric perception was significantly faster in the compatible condition than in the incompatible condition. Second, as the target became more eccentric in the visual field, the speed of allocentric perception gradually slowed down in the incompatible condition but remained unchanged in the compatible condition. Third, egocentric-allocentric incompatibility slowed allocentric perception more on the left egocentric side than on the right egocentric side. These results cannot be explained by interhemispheric visuomotor transformation and stimulus-response compatibility theory. Our findings indicate that each hemisphere preferentially processes and integrates the contralateral egocentric and allocentric spatial information, and the right hemisphere receives more ipsilateral egocentric inputs than the left hemisphere does.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Objects in the visual world can be represented in both egocentric and allocentric coordinates. Previous studies have found that allocentric representation can affect the accuracy of spatial judgment relative to an egocentric frame, but not vice versa. Here we asked whether egocentric representation influenced the processing speed of allocentric perception. We measured the manual reaction time of human subjects in a position discrimination task in which the behavioral response purely relied on the target's allocentric location, independent of its egocentric position. We used two conditions of stimulus location: the compatible condition (allocentric left and egocentric left, or allocentric right and egocentric right) and the incompatible condition (allocentric left and egocentric right, or allocentric right and egocentric left). We found that egocentric representation markedly influenced allocentric perception in three ways. First, in a given egocentric location, allocentric perception was significantly faster in the compatible condition than in the incompatible condition.
Second, as the target became more eccentric in the visual field, the speed of allocentric perception gradually slowed down in the incompatible condition but remained unchanged in the compatible condition. Third, egocentric-allocentric incompatibility slowed allocentric perception more on the left egocentric side than on the right egocentric side. These results cannot be explained by interhemispheric visuomotor transformation and stimulus-response compatibility theory. Our findings indicate that each hemisphere preferentially processes and integrates the contralateral egocentric and allocentric spatial information, and the right hemisphere receives more ipsilateral egocentric inputs than the left hemisphere does. |
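The three results above all reduce to contrasts of mean reaction time between compatible and incompatible trials, split by eccentricity and egocentric side. A minimal sketch of that bookkeeping (hypothetical column names, not the authors' code):

    # Hypothetical sketch: egocentric-allocentric compatibility effect on RT.
    # Assumed columns: subject, compatibility ("compatible"/"incompatible"),
    # eccentricity_deg, ego_side ("left"/"right"), rt_ms.
    import pandas as pd

    def compatibility_effect(df: pd.DataFrame) -> pd.DataFrame:
        """Incompatible-minus-compatible mean RT per subject, eccentricity, side."""
        cell = (df.groupby(["subject", "eccentricity_deg", "ego_side", "compatibility"])
                  ["rt_ms"].mean().unstack("compatibility"))
        cell["effect_ms"] = cell["incompatible"] - cell["compatible"]
        return cell.reset_index()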
Jing Zhou; Adam Reeves; Scott N J Watamaniuk; Stephen J Heinen Shared attention for smooth pursuit and saccades Journal Article Journal of Vision, 13 (4), pp. 1–12, 2013. @article{Zhou2013, title = {Shared attention for smooth pursuit and saccades}, author = {Jing Zhou and Adam Reeves and Scott N J Watamaniuk and Stephen J Heinen}, doi = {10.1167/13.4.7}, year = {2013}, date = {2013-01-01}, journal = {Journal of Vision}, volume = {13}, number = {4}, pages = {1--12}, abstract = {Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Identification of brief luminance decrements on parafoveal stimuli presented during smooth pursuit improves when a spot pursuit target is surrounded by a larger random dot cinematogram (RDC) that moves with it (Heinen, Jin, & Watamaniuk, 2011). This was hypothesized to occur because the RDC provided an alternative, less attention-demanding pursuit drive, and therefore released attentional resources for visual perception tasks that are shared with those used to pursue the spot. Here, we used the RDC as a tool to probe whether spot pursuit also shares attentional resources with the saccadic system. To this end, we set out to determine if the RDC could release attention from pursuit of the spot to perform a saccade task. Observers made a saccade to one of four parafoveal targets that moved with the spot pursuit stimulus. The targets either moved alone or were surrounded by an RDC (100% coherence). Saccade latency decreased with the RDC, suggesting that the RDC released attention needed to pursue the spot, which was then used for the saccade task. Additional evidence that attention was released by the RDC was obtained in an experiment in which attention was anchored to the fovea by requiring observers to detect a brief color change applied 130 ms before the saccade target appeared. This manipulation eliminated the RDC advantage. The results imply that attentional resources used by the pursuit and saccadic eye movement control systems are shared. |
Wei Zhou; Reinhold Kliegl; Ming Yan A validation of parafoveal semantic information extraction in reading Chinese Journal Article Journal of Research in Reading, 36 (SUPPL.1), pp. S51–S63, 2013. @article{Zhou2013a, title = {A validation of parafoveal semantic information extraction in reading Chinese}, author = {Wei Zhou and Reinhold Kliegl and Ming Yan}, doi = {10.1111/j.1467-9817.2013.01556.x}, year = {2013}, date = {2013-01-01}, journal = {Journal of Research in Reading}, volume = {36}, number = {SUPPL.1}, pages = {S51--S63}, abstract = {Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words were actually located in foveal vision when readers' eyes landed close to the end of the pretarget words. None of the previous studies has completely ruled out the possibility that the semantic preview effects might mainly arise from these foveally processed preview words. Thus, whether the previously observed positive evidence for parafoveal semantic processing still holds has been called into question. Using linear mixed models, we demonstrate in this study that semantic preview benefit from word N+1 decreased if fixation on pretarget word N was close to the preview. We argue that parafoveal semantic processing is not a consequence of foveally processed preview words.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words were actually located in foveal vision when readers' eyes landed close to the end of the pretarget words. None of the previous studies has completely ruled out the possibility that the semantic preview effects might mainly arise from these foveally processed preview words. Thus, whether the previously observed positive evidence for parafoveal semantic processing still holds has been called into question. Using linear mixed models, we demonstrate in this study that semantic preview benefit from word N+1 decreased if fixation on pretarget word N was close to the preview. We argue that parafoveal semantic processing is not a consequence of foveally processed preview words. |
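Linear mixed models of the kind mentioned above are typically fitted in R with lme4; purely as an illustration of the model structure (invented variable names, random intercepts for subjects only, whereas a real analysis would also include item effects), an analogous fit can be sketched in Python:

    # Hypothetical sketch: does the semantic preview benefit shrink as the
    # launch site on pretarget word N gets closer to the preview?
    import statsmodels.formula.api as smf

    def fit_preview_model(df):
        # gaze_ms: gaze duration on the target word; preview: preview type;
        # launch_dist: distance of the fixation on word N from the preview.
        model = smf.mixedlm("gaze_ms ~ preview * launch_dist",
                            df, groups=df["subject"])
        return model.fit()

The sign and size of the preview-by-distance interaction term then index how the preview benefit changes with launch distance.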
Peng Zhou; Stephen Crain; Likan Zhan Grammatical aspect and event recognition in children's online sentence comprehension Journal Article Cognition, 133 (1), pp. 262–276, 2014. @article{Zhou2014, title = {Grammatical aspect and event recognition in children's online sentence comprehension}, author = {Peng Zhou and Stephen Crain and Likan Zhan}, doi = {10.1016/j.cognition.2014.06.018}, year = {2014}, date = {2014-01-01}, journal = {Cognition}, volume = {133}, number = {1}, pages = {262--276}, abstract = {This study investigated whether or not the temporal information encoded in aspectual morphemes can be used immediately by young children to facilitate event recognition during online sentence comprehension. We focused on the contrast between two grammatical aspectual morphemes in Mandarin Chinese, the perfective morpheme -le and the (imperfective) durative morpheme -zhe. The perfective morpheme -le is often used to indicate that an event has been completed, whereas the durative morpheme -zhe indicates that an event is still in progress or continuing. We were interested to see whether young children are able to use the temporal reference encoded in the two aspectual morphemes (i.e., completed versus ongoing) as rapidly as adults to facilitate event recognition during online sentence comprehension. Using the visual world eye-tracking paradigm, we tested 34 Mandarin-speaking adults and 99 Mandarin-speaking children (35 three-year-olds, 32 four-year-olds and 32 five-year-olds). On each trial, participants were presented with spoken sentences containing either of the two aspectual morphemes while viewing a visual image containing two pictures, one representing a completed event and one representing an ongoing event. Participants' eye movements were recorded from the onset of the spoken sentences. The results show that both the adults and the three age groups of children exhibited a facilitatory effect triggered by the aspectual morpheme: hearing the perfective morpheme -le triggered more eye movements to the completed event area, whereas hearing the durative morpheme -zhe triggered more eye movements to the ongoing event area. This effect occurred immediately after the onset of the aspectual morpheme, both for the adults and the three groups of children. This is evidence that young children are able to use the temporal information encoded in aspectual morphemes as rapidly as adults to facilitate event recognition. Children's eye movement patterns reflect a rapid mapping of grammatical aspect onto the temporal structures of events depicted in the visual scene.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated whether or not the temporal information encoded in aspectual morphemes can be used immediately by young children to facilitate event recognition during online sentence comprehension. We focused on the contrast between two grammatical aspectual morphemes in Mandarin Chinese, the perfective morpheme -le and the (imperfective) durative morpheme -zhe. The perfective morpheme -le is often used to indicate that an event has been completed, whereas the durative morpheme -zhe indicates that an event is still in progress or continuing. We were interested to see whether young children are able to use the temporal reference encoded in the two aspectual morphemes (i.e., completed versus ongoing) as rapidly as adults to facilitate event recognition during online sentence comprehension.
Using the visual world eye-tracking paradigm, we tested 34 Mandarin-speaking adults and 99 Mandarin-speaking children (35 three-year-olds, 32 four-year-olds and 32 five-year-olds). On each trial, participants were presented with spoken sentences containing either of the two aspectual morphemes while viewing a visual image containing two pictures, one representing a completed event and one representing an ongoing event. Participants' eye movements were recorded from the onset of the spoken sentences. The results show that both the adults and the three age groups of children exhibited a facilitatory effect triggered by the aspectual morpheme: hearing the perfective morpheme -le triggered more eye movements to the completed event area, whereas hearing the durative morpheme -zhe triggered more eye movements to the ongoing event area. This effect occurred immediately after the onset of the aspectual morpheme, both for the adults and the three groups of children. This is evidence that young children are able to use the temporal information encoded in aspectual morphemes as rapidly as adults to facilitate event recognition. Children's eye movement patterns reflect a rapid mapping of grammatical aspect onto the temporal structures of events depicted in the visual scene. |
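The claim that the effect occurred "immediately after the onset of the aspectual morpheme" is, operationally, a statement about when looks to the morpheme-congruent picture begin to exceed looks to the other picture. One crude way to locate that point is sketched below (the authors' actual statistics are not specified here; a serious analysis would correct for multiple comparisons, e.g., with cluster-based permutation tests):

    # Hypothetical sketch: first time bin where looks to the congruent picture
    # reliably exceed looks to the incongruent picture.
    import numpy as np
    from scipy import stats

    def divergence_bin(congruent: np.ndarray, other: np.ndarray, alpha=0.05):
        """congruent/other: subjects x time-bins arrays of look proportions."""
        for t in range(congruent.shape[1]):
            if stats.ttest_rel(congruent[:, t], other[:, t]).pvalue < alpha:
                return t  # index of the first bin passing the threshold
        return None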
Peiyun Zhou; Kiel Christianson I “hear” what you're “saying”: Auditory perceptual simulation, reading speed, and reading comprehension Journal Article Quarterly Journal of Experimental Psychology, 69 (5), pp. 972–995, 2016. @article{Zhou2016a, title = {I “hear” what you're “saying”: Auditory perceptual simulation, reading speed, and reading comprehension}, author = {Peiyun Zhou and Kiel Christianson}, doi = {10.1080/17470218.2015.1018282}, year = {2016}, date = {2016-01-01}, journal = {Quarterly Journal of Experimental Psychology}, volume = {69}, number = {5}, pages = {972--995}, publisher = {Taylor & Francis}, abstract = {Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading. |
Peiyun Zhou; Kiel Christianson Auditory perceptual simulation: Simulating speech rates or accents? Journal Article Acta Psychologica, 168, pp. 85–90, 2016. @article{Zhou2016b, title = {Auditory perceptual simulation: Simulating speech rates or accents?}, author = {Peiyun Zhou and Kiel Christianson}, doi = {10.1016/j.actpsy.2016.04.005}, year = {2016}, date = {2016-01-01}, journal = {Acta Psychologica}, volume = {168}, pages = {85--90}, publisher = {Elsevier B.V.}, abstract = {When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects. |
Huihui Zhou; Robert John Schafer; Robert Desimone Pulvinar-cortex interactions in vision and attention Journal Article Neuron, 89 (1), pp. 209–220, 2016. @article{Zhou2016c, title = {Pulvinar-cortex interactions in vision and attention}, author = {Huihui Zhou and Robert John Schafer and Robert Desimone}, doi = {10.1016/j.neuron.2015.11.034}, year = {2016}, date = {2016-01-01}, journal = {Neuron}, volume = {89}, number = {1}, pages = {209--220}, publisher = {Elsevier Inc.}, abstract = {The ventro-lateral pulvinar is reciprocally connected with the visual areas of the ventral stream that are important for object recognition. To understand the mechanisms of attentive stimulus processing in this pulvinar-cortex loop, we investigated the interactions between the pulvinar, area V4, and IT cortex in a spatial-attention task. Sensory processing and the influence of attention in the pulvinar appeared to reflect its cortical inputs. However, pulvinar deactivation led to a reduction of attentional effects on firing rates and gamma synchrony in V4, a reduction of sensory-evoked responses and overall gamma coherence within V4, and severe behavioral deficits in the affected portion of the visual field. Conversely, pulvinar deactivation caused an increase in low-frequency cortical oscillations, often associated with inattention or sleep. Thus, cortical interactions with the ventro-lateral pulvinar are necessary for normal attention and sensory processing and for maintaining the cortex in an active state. The pulvinar is often proposed to modulate cortical processing with attention. Zhou et al. find that beyond any role in attention, the pulvinar input to cortex seems necessary to maintain the cortex in an active state.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The ventro-lateral pulvinar is reciprocally connected with the visual areas of the ventral stream that are important for object recognition. To understand the mechanisms of attentive stimulus processing in this pulvinar-cortex loop, we investigated the interactions between the pulvinar, area V4, and IT cortex in a spatial-attention task. Sensory processing and the influence of attention in the pulvinar appeared to reflect its cortical inputs. However, pulvinar deactivation led to a reduction of attentional effects on firing rates and gamma synchrony in V4, a reduction of sensory-evoked responses and overall gamma coherence within V4, and severe behavioral deficits in the affected portion of the visual field. Conversely, pulvinar deactivation caused an increase in low-frequency cortical oscillations, often associated with inattention or sleep. Thus, cortical interactions with the ventro-lateral pulvinar are necessary for normal attention and sensory processing and for maintaining the cortex in an active state. The pulvinar is often proposed to modulate cortical processing with attention. Zhou et al. find that beyond any role in attention, the pulvinar input to cortex seems necessary to maintain the cortex in an active state. |
Lei Zhou; Yang Yang Zhang; Zuo Jun Wang; Li Lin Rao; Wei Wang; Shu Li; Xingshan Li; Zhu Yuan Liang A scanpath analysis of the risky decision-making process Journal Article Journal of Behavioral Decision Making, 29 (2-3), pp. 169–182, 2016. @article{Zhou2016c, title = {A scanpath analysis of the risky decision-making process}, author = {Lei Zhou and Yang Yang Zhang and Zuo Jun Wang and Li Lin Rao and Wei Wang and Shu Li and Xingshan Li and Zhu Yuan Liang}, doi = {10.1002/bdm.1943}, year = {2016}, date = {2016-01-01}, journal = {Journal of Behavioral Decision Making}, volume = {29}, number = {2-3}, pages = {169--182}, abstract = {In the field of eye tracking, scanpath analysis can reflect the sequential and temporal properties of the cognitive process. However, the advantages of scanpath analysis have not yet been utilized in the study of risky decision making. We explored the methodological applicability of scanpath analysis to test models of risky decision making by analyzing published data from the eye-tracking studies of Su et al. (2013); Wang and Li (2012), and Sun, Rao, Zhou, and Li (2014). These studies used a proportion task, an outcome-matched presentation condition, and a multiple-play condition as the baseline for comparison with information search and processing in the risky decision-making condition. We found that (i) the similarity scores of the intra-conditions were significantly higher than those of the inter-condition; (ii) the scanpaths of the two conditions were separable; and (iii) based on an inspection of typical trials, the patterns of the scanpaths differed between the two conditions. These findings suggest that scanpath analysis is reliable and valid for examining the process of risky decision making. In line with the findings of the three original studies, our results indicate that risky decision making is unlikely to be based on a weighting and summing process, as hypothesized by the family of expectation models. The findings highlight a new methodological direction for research on decision making.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the field of eye tracking, scanpath analysis can reflect the sequential and temporal properties of the cognitive process. However, the advantages of scanpath analysis have not yet been utilized in the study of risky decision making. We explored the methodological applicability of scanpath analysis to test models of risky decision making by analyzing published data from the eye-tracking studies of Su et al. (2013); Wang and Li (2012), and Sun, Rao, Zhou, and Li (2014). These studies used a proportion task, an outcome-matched presentation condition, and a multiple-play condition as the baseline for comparison with information search and processing in the risky decision-making condition. We found that (i) the similarity scores of the intra-conditions were significantly higher than those of the inter-condition; (ii) the scanpaths of the two conditions were separable; and (iii) based on an inspection of typical trials, the patterns of the scanpaths differed between the two conditions. These findings suggest that scanpath analysis is reliable and valid for examining the process of risky decision making. In line with the findings of the three original studies, our results indicate that risky decision making is unlikely to be based on a weighting and summing process, as hypothesized by the family of expectation models. The findings highlight a new methodological direction for research on decision making. |
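Scanpath similarity scores of the kind used in the study above are commonly computed by coding each fixation as an area-of-interest (AOI) symbol and taking a normalized string-edit distance between the resulting sequences (the Levenshtein/ScanMatch family). The sketch below shows that generic idea, not the authors' exact algorithm:

    # Hypothetical sketch: scanpath similarity as normalized edit distance
    # between AOI-coded fixation sequences, e.g. "ABBC" vs. "ABC".
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def scanpath_similarity(a: str, b: str) -> float:
        """1.0 for identical AOI sequences, 0.0 for maximally different ones."""
        if not a and not b:
            return 1.0
        return 1.0 - levenshtein(a, b) / max(len(a), len(b))

On such a measure, higher intra-condition than inter-condition similarity is exactly the pattern the authors report.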
Jifan Zhou; Chia-Lin Lee; Kuei An Li; Yung Hsuan Tien; Su Ling Yeh Does temporal integration occur for unrecognizable words in visual crowding? Journal Article PLoS ONE, 11 (2), pp. 1–15, 2016. @article{Zhou2016d, title = {Does temporal integration occur for unrecognizable words in visual crowding?}, author = {Jifan Zhou and Chia-Lin Lee and Kuei An Li and Yung Hsuan Tien and Su Ling Yeh}, doi = {10.1371/journal.pone.0149355}, year = {2016}, date = {2016-01-01}, journal = {PLoS ONE}, volume = {11}, number = {2}, pages = {1--15}, abstract = {Visual crowding - the inability to see an object when it is surrounded by flankers in the periphery - does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration - the simplest kind of temporal semantic integration - did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual crowding - the inability to see an object when it is surrounded by flankers in the periphery - does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration.
Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration - the simplest kind of temporal semantic integration - did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. |
Huixia Zhou; Sonja Rossi; Juan Li; Huanhuan Liu; Ran Chen; Baoguo Chen Effects of working memory capacity in processing wh-extractions: Eye-movement evidence from Chinese–English bilinguals Journal Article Journal of Research in Reading, 40 (4), pp. 420–438, 2017. @article{Zhou2017a, title = {Effects of working memory capacity in processing wh-extractions: Eye-movement evidence from Chinese–English bilinguals}, author = {Huixia Zhou and Sonja Rossi and Juan Li and Huanhuan Liu and Ran Chen and Baoguo Chen}, doi = {10.1111/1467-9817.12079}, year = {2017}, date = {2017-01-01}, journal = {Journal of Research in Reading}, volume = {40}, number = {4}, pages = {420--438}, abstract = {By using the eye-tracking method, the present study explores whether working memory capacity, assessed via the second language (L2) reading span task (L2WMC) as well as the operation span task (OSPAN), affects the processing of subject-extraction and object-extraction in Chinese–English bilinguals. Results showed that L2WMC has no effects on the grammatical judgement accuracies, the first fixation duration, gaze duration, go-past times and total fixation duration of the critical regions in wh-extractions. In contrast, OSPAN influences the first fixation duration and go-past times of the critical regions in wh-extractions. Specifically, in region 1 (e.g., Who do you think loved the comedian [region 1] with [region 2] all his heart? [subject-extraction] versus Who do you think the comedian loved [region 1] with [region 2] all his heart? [object-extraction]), participants with high OSPAN were much slower than those with low OSPAN in their first fixation duration when reading subject-extractions, whereas there were no differences between participants with different OSPANs when reading object-extractions. In region 2, participants with high OSPAN were much faster than those with low OSPAN in their go-past times for object-extractions. These results indicated that individual differences in OSPAN, rather than in L2WMC, more strongly affect the processing of wh-extractions. Thus, the OSPAN is more suitable for exploring the influence of working memory on the processing of L2 sentences with complex syntax, at least for bilinguals of intermediate proficiency. Results of the study also provide further support for the Capacity Theory of Comprehension.}, keywords = {}, pubstate = {published}, tppubtype = {article} } By using the eye-tracking method, the present study explores whether working memory capacity, assessed via the second language (L2) reading span task (L2WMC) as well as the operation span task (OSPAN), affects the processing of subject-extraction and object-extraction in Chinese–English bilinguals. Results showed that L2WMC has no effects on the grammatical judgement accuracies, the first fixation duration, gaze duration, go-past times and total fixation duration of the critical regions in wh-extractions. In contrast, OSPAN influences the first fixation duration and go-past times of the critical regions in wh-extractions. Specifically, in region 1 (e.g., Who do you think loved the comedian [region 1] with [region 2] all his heart? [subject-extraction] versus Who do you think the comedian loved [region 1] with [region 2] all his heart? [object-extraction]), participants with high OSPAN were much slower than those with low OSPAN in their first fixation duration when reading subject-extractions, whereas there were no differences between participants with different OSPANs when reading object-extractions.
In region 2, participants with high OSPAN were much faster than those with low OSPAN in their go-past times for object-extractions. These results indicated that individual differences in OSPAN, rather than in L2WMC, more strongly affect the processing of wh-extractions. Thus, the OSPAN is more suitable for exploring the influence of working memory on the processing of L2 sentences with complex syntax, at least for bilinguals of intermediate proficiency. Results of the study also provide further support for the Capacity Theory of Comprehension. |
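Go-past time (also called regression-path duration), one of the measures in the abstract above, sums all fixations from the first entry into a region until the eyes first move past its right boundary, including any regressions back to earlier text. A sketch under that standard definition (the input format is invented for illustration):

    # Hypothetical sketch: go-past duration for one region, given fixations
    # in temporal order as (region_index, duration_ms) pairs.
    def go_past_time(fixations, region):
        total, entered = 0, False
        for reg, dur in fixations:
            if not entered:
                if reg == region:
                    entered = True
                    total += dur
            elif reg > region:   # eyes move past the region: stop counting
                return total
            else:
                total += dur     # includes regressive fixations to the left
        return total if entered else None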
Yang Zhou; Gongchen Yu; Xuefei Yu; Si Wu; Mingsha Zhang Asymmetric representations of upper and lower visual fields in egocentric and allocentric references Journal Article Journal of Vision, 17 (1), pp. 1–11, 2017. @article{Zhou2017b, title = {Asymmetric representations of upper and lower visual fields in egocentric and allocentric references}, author = {Yang Zhou and Gongchen Yu and Xuefei Yu and Si Wu and Mingsha Zhang}, doi = {10.1167/17.1.9}, year = {2017}, date = {2017-01-01}, journal = {Journal of Vision}, volume = {17}, number = {1}, pages = {1--11}, abstract = {Two spatial reference systems, i.e., the observer-centered (egocentric) and object-centered (allocentric) references, are most commonly used to locate the position of external objects in space. Although we sense the world as a unified entity, visual processing is asymmetric between the upper and lower visual fields (VFs). For example, goal-directed reaching responses are more efficient in the lower VF. Such asymmetry suggests that visual space might be composed of different realms regarding perception and action. Since the peripersonal realm includes the space that one can reach, mostly in the lower VF, it is highly likely that the peripersonal realm is mainly represented in the egocentric reference for visuomotor operation. In contrast, the extrapersonal realm lies away from the observer and is mostly observed in the upper VF, which is presumably represented in the allocentric reference for orientation in topographically defined space. This theory, however, has not been thoroughly tested experimentally. In the present study, we assessed the contributions of the egocentric and allocentric reference systems to visual discrimination in the upper and lower VFs by measuring the manual reaction times (RTs) of human subjects. We found that: (a) the influence of a target's egocentric location on visual discrimination was stronger in the lower VF; and (b) the influence of a target's allocentric location on visual discrimination was stronger in the upper VF. These results support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Two spatial reference systems, i.e., the observer-centered (egocentric) and object-centered (allocentric) references, are most commonly used to locate the position of external objects in space. Although we sense the world as a unified entity, visual processing is asymmetric between the upper and lower visual fields (VFs). For example, goal-directed reaching responses are more efficient in the lower VF. Such asymmetry suggests that visual space might be composed of different realms regarding perception and action. Since the peripersonal realm includes the space that one can reach, mostly in the lower VF, it is highly likely that the peripersonal realm is mainly represented in the egocentric reference for visuomotor operation. In contrast, the extrapersonal realm lies away from the observer and is mostly observed in the upper VF, which is presumably represented in the allocentric reference for orientation in topographically defined space. This theory, however, has not been thoroughly tested experimentally.
In the present study, we assessed the contributions of the egocentric and allocentric reference systems to visual discrimination in the upper and lower VFs by measuring the manual reaction times (RTs) of human subjects. We found that: (a) the influence of a target's egocentric location on visual discrimination was stronger in the lower VF; and (b) the influence of a target's allocentric location on visual discrimination was stronger in the upper VF. These results support the hypothesis that the upper and lower VFs are primarily represented in the allocentric and egocentric references, respectively. |
Yang Zhou; Lixin Liang; Yujun Pan; Ning Qian; Mingsha Zhang Sites of overt and covert attention define simultaneous spatial reference centers for visuomotor response Journal Article Scientific Reports, 7, pp. 46556, 2017. @article{Zhou2017c, title = {Sites of overt and covert attention define simultaneous spatial reference centers for visuomotor response}, author = {Yang Zhou and Lixin Liang and Yujun Pan and Ning Qian and Mingsha Zhang}, doi = {10.1038/srep46556}, year = {2017}, date = {2017-01-01}, journal = {Scientific Reports}, volume = {7}, pages = {46556}, publisher = {Nature Publishing Group}, abstract = {The site of overt attention (fixation point) defines a spatial reference center that affects visuomotor response as indicated by the stimulus-response-compatibility (SRC) effect: When subjects press, e.g., a left key to report stimuli, their reaction time is shorter when stimuli appear to the left than to the right of the fixation. Covert attention to a peripheral site appears to define a similar reference center but previous studies did not control for confounding spatiotemporal factors or investigate the relationship between overt- and covert-attention-defined centers. Using an eye tracker to monitor fixation, we found an SRC effect relative to the site of covert attention induced by a flashed cue dot, and a concurrent reduction, but not elimination, of the overt-attention SRC effect. The two SRC effects jointly determined the overall motor reaction time. Since trials with different cue locations were randomly interleaved, the integration of the two reference centers must be updated online. When the cue was invalid and diminished covert attention, the covert-attention SRC effect disappeared and the overt-attention SRC effect retained full strength, excluding non-attention-based interpretations. We conclude that both covert- and overt-attention sites define visual reference centers that simultaneously contribute to motor response.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The site of overt attention (fixation point) defines a spatial reference center that affects visuomotor response as indicated by the stimulus-response-compatibility (SRC) effect: When subjects press, e.g., a left key to report stimuli, their reaction time is shorter when stimuli appear to the left than to the right of the fixation. Covert attention to a peripheral site appears to define a similar reference center but previous studies did not control for confounding spatiotemporal factors or investigate the relationship between overt- and covert-attention-defined centers. Using an eye tracker to monitor fixation, we found an SRC effect relative to the site of covert attention induced by a flashed cue dot, and a concurrent reduction, but not elimination, of the overt-attention SRC effect. The two SRC effects jointly determined the overall motor reaction time. Since trials with different cue locations were randomly interleaved, the integration of the two reference centers must be updated online.
When the cue was invalid and diminished covert attention, the covert-attention SRC effect disappeared and the overt-attention SRC effect retained full strength, excluding non-attention-based interpretations. We conclude that both covert- and overt-attention sites define visual reference centers that simultaneously contribute to motor response. |
Ying Zhou; Bing Li; Gang Wang; Mingsha Zhang; Yujun Pan Leftward deviation and asymmetric speed of egocentric judgment between left and right visual fields Journal Article Frontiers in Neuroscience, 11, pp. 1–10, 2017. @article{Zhou2017d, title = {Leftward deviation and asymmetric speed of egocentric judgment between left and right visual fields}, author = {Ying Zhou and Bing Li and Gang Wang and Mingsha Zhang and Yujun Pan}, doi = {10.3389/fnins.2017.00364}, year = {2017}, date = {2017-01-01}, journal = {Frontiers in Neuroscience}, volume = {11}, pages = {1--10}, abstract = {The egocentric reference frame is essential for body orientation and spatial localization of external objects. Recent neuroimaging and lesion studies have revealed that the right hemisphere of humans may play a more dominant role in processing egocentric information than the left hemisphere. However, previous studies of egocentric discrimination mainly focused on assessing the accuracy of egocentric judgment, leaving its timing unexplored. In addition, most previous studies never monitored the subjects' eye position during the experiments, so the influence of eye position on egocentric judgment could not be excluded. In the present study, we systematically assessed the processing of egocentric information in healthy human subjects by measuring the location of their visual subjective straight ahead (SSA) and their manual reaction time (RT) during fixation (monitored by eye tracker). In an egocentric discrimination task, subjects were required to judge the position of a visual cue relative to the subjective mid-sagittal plane and respond as quickly as possible. We found that the SSA of all subjects deviated to the left side of the body mid-sagittal plane. In addition, all subjects but one showed the longest RT at the location closest to the SSA; and at the population level, RTs in the left visual field (VF) were longer than those in the right VF. These results might be due to the right hemisphere's dominant role in processing egocentric information and to its more prominent representation of the ipsilateral VF relative to that of the left hemisphere.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The egocentric reference frame is essential for body orientation and spatial localization of external objects. Recent neuroimaging and lesion studies have revealed that the right hemisphere of humans may play a more dominant role in processing egocentric information than the left hemisphere. However, previous studies of egocentric discrimination mainly focused on assessing the accuracy of egocentric judgment, leaving its timing unexplored. In addition, most previous studies never monitored the subjects' eye position during the experiments, so the influence of eye position on egocentric judgment could not be excluded. In the present study, we systematically assessed the processing of egocentric information in healthy human subjects by measuring the location of their visual subjective straight ahead (SSA) and their manual reaction time (RT) during fixation (monitored by eye tracker). In an egocentric discrimination task, subjects were required to judge the position of a visual cue relative to the subjective mid-sagittal plane and respond as quickly as possible. We found that the SSA of all subjects deviated to the left side of the body mid-sagittal plane.
In addition, all subjects but one showed the longest RT at the location closest to the SSA; and at the population level, RTs in the left visual field (VF) were longer than those in the right VF. These results might be due to the right hemisphere's dominant role in processing egocentric information and to its more prominent representation of the ipsilateral VF relative to that of the left hemisphere. |
Peng Zhou; Weiyi Ma Children's use of morphological cues in real-time event representation Journal Article Journal of Psycholinguistic Research, 47 (1), pp. 241–260, 2018. @article{Zhou2018a, title = {Children's use of morphological cues in real-time event representation}, author = {Peng Zhou and Weiyi Ma}, doi = {10.1007/s10936-017-9530-y}, year = {2018}, date = {2018-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {47}, number = {1}, pages = {241--260}, publisher = {Springer US}, abstract = {The present study investigated whether and how fast young children can use information encoded in morphological markers during real-time event representation. Using the visual world paradigm, we tested 35 adults, 34 5-year-olds and 33 3-year-olds. The results showed that the adults, the 5-year-olds and the 3-year-olds all exhibited eye gaze patterns that reflected a rapid use of morphological cues during real-time event representation. There was no difference in the time course of the eye gaze patterns of the 5-year-olds and those of the adults, indicating that 5-year-old children already have adult-like processing abilities and they can use morphological cues as effectively as adults during real-time event representation. However, a 400 ms delay was observed in the eye gaze patterns by the 3-year-olds as compared to the 5-year-olds and the adults. We proposed that the observed difference might reflect a difference in the general cognitive processing abilities between the three age groups. Due to the immature cognitive processing abilities of 3-year-olds, it took longer for them to progress their eye movements to the target pictures as compared to older children and adults.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study investigated whether and how fast young children can use information encoded in morphological markers during real-time event representation. Using the visual world paradigm, we tested 35 adults, 34 5-year-olds and 33 3-year-olds. The results showed that the adults, the 5-year-olds and the 3-year-olds all exhibited eye gaze patterns that reflected a rapid use of morphological cues during real-time event representation. There was no difference in the time course of the eye gaze patterns of the 5-year-olds and those of the adults, indicating that 5-year-old children already have adult-like processing abilities and they can use morphological cues as effectively as adults during real-time event representation. However, a 400 ms delay was observed in the eye gaze patterns by the 3-year-olds as compared to the 5-year-olds and the adults. We proposed that the observed difference might reflect a difference in the general cognitive processing abilities between the three age groups. Due to the immature cognitive processing abilities of 3-year-olds, it took longer for them to progress their eye movements to the target pictures as compared to older children and adults. |
Junyi Zhou; Guojie Ma; Xingshan Li; Marcus Taft The time course of incremental word processing during Chinese reading Journal Article Reading and Writing, 31 (3), pp. 607–625, 2018. @article{Zhou2018b, title = {The time course of incremental word processing during Chinese reading}, author = {Junyi Zhou and Guojie Ma and Xingshan Li and Marcus Taft}, doi = {10.1007/s11145-017-9800-y}, year = {2018}, date = {2018-01-01}, journal = {Reading and Writing}, volume = {31}, number = {3}, pages = {607--625}, publisher = {Springer Netherlands}, abstract = {In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words whose first two characters (Experiment 1) or last two characters (Experiment 2) constituted a word (referred to respectively as “head-embedded” and “tail-embedded”). Reading times on these words were longer when the frequencies of the embedded words were lower. However, this was only seen on first fixation duration for head-embedded words. These results suggest that embedded words are activated when Chinese readers process incremental words, and that this activation is earlier for a head-embedded word than for a tail-embedded word. These results support a hierarchical model which assumes that the representation for the whole word is activated via the representation of its constituent morphemes.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words whose first two characters (Experiment 1) or last two characters (Experiment 2) constituted a word (referred to respectively as “head-embedded” and “tail-embedded”). Reading times on these words were longer when the frequencies of the embedded words were lower. However, this was only seen on first fixation duration for head-embedded words. These results suggest that embedded words are activated when Chinese readers process incremental words, and that this activation is earlier for a head-embedded word than for a tail-embedded word. These results support a hierarchical model which assumes that the representation for the whole word is activated via the representation of its constituent morphemes. |
Wei Zhou; Aiping Wang; Hua Shu; Reinhold Kliegl; Ming Yan Word segmentation by alternating colors facilitates eye guidance in Chinese reading Journal Article Memory and Cognition, 46 (5), pp. 729–740, 2018. @article{Zhou2018c, title = {Word segmentation by alternating colors facilitates eye guidance in Chinese reading}, author = {Wei Zhou and Aiping Wang and Hua Shu and Reinhold Kliegl and Ming Yan}, doi = {10.3758/s13421-018-0797-5}, year = {2018}, date = {2018-01-01}, journal = {Memory and Cognition}, volume = {46}, number = {5}, pages = {729--740}, publisher = {Memory & Cognition}, abstract = {During sentence reading, low spatial frequency information afforded by spaces between words is the primary factor for eye guidance in spaced writing systems, whereas saccade generation for unspaced writing systems is less clear and under debate. In the present study, we investigated whether word-boundary information, provided by alternating colors (consistent or inconsistent with word-boundary information) influences saccade-target selection in Chinese. In Experiment 1, as compared to a baseline (i.e., uniform color) condition, word segmentation with alternating color shifted fixation location towards the center of words. In contrast, incorrect word segmentation shifted fixation location towards the beginning of words. In Experiment 2, we used a gaze-contingent paradigm to restrict the color manipulation only to the upcoming parafoveal words and replicated the results, including fixation location effects, as observed in Experiment 1. These results indicate that Chinese readers are capable of making use of parafoveal word-boundary knowledge for saccade generation, even if such information is unfamiliar to them. The present study provides novel support for the hypothesis that word segmentation is involved in the decision about where to fixate next during Chinese reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } During sentence reading, low spatial frequency information afforded by spaces between words is the primary factor for eye guidance in spaced writing systems, whereas saccade generation for unspaced writing systems is less clear and under debate. In the present study, we investigated whether word-boundary information, provided by alternating colors (consistent or inconsistent with word-boundary information) influences saccade-target selection in Chinese. In Experiment 1, as compared to a baseline (i.e., uniform color) condition, word segmentation with alternating color shifted fixation location towards the center of words. In contrast, incorrect word segmentation shifted fixation location towards the beginning of words. In Experiment 2, we used a gaze-contingent paradigm to restrict the color manipulation only to the upcoming parafoveal words and replicated the results, including fixation location effects, as observed in Experiment 1. These results indicate that Chinese readers are capable of making use of parafoveal word-boundary knowledge for saccade generation, even if such information is unfamiliar to them. The present study provides novel support for the hypothesis that word segmentation is involved in the decision about where to fixate next during Chinese reading. |
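Experiment 2 of the entry above hinges on a gaze-contingent display change: the alternating-color segmentation is applied only to words the reader has not yet fixated. The Python sketch below illustrates that core update rule under simplified assumptions; `Word`, `recolor`, and the single horizontal gaze coordinate are hypothetical stand-ins for an eye-tracker SDK and a drawing routine, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class Word:
    x0: float  # left edge of the word on screen (pixels)
    x1: float  # right edge of the word on screen (pixels)

def update_display(words, gaze_x, recolor, base_color, alt_colors):
    """Apply alternating-color word segmentation only to words beyond the
    currently fixated one (a parafoveal, gaze-contingent manipulation).
    Call once per gaze sample; `recolor` is a hypothetical drawing callback."""
    # Index of the fixated word; default to 0 if gaze is off the text line
    fixated = next((i for i, w in enumerate(words)
                    if w.x0 <= gaze_x <= w.x1), 0)
    for i, w in enumerate(words):
        # Foveal and already-read text stays uniform; upcoming words alternate
        color = base_color if i <= fixated else alt_colors[i % 2]
        recolor(w, color)
```

In a real experiment this function would run inside the display loop, driven by tracker samples at the monitor refresh rate, so the manipulation stays locked to the reader's current fixation.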
Wenxi Zhou; Haoyu Chen; Jiongjiong Yang Discriminative learning of similar objects enhances memory for the objects and contexts Journal Article Learning & Memory, 25 (12), pp. 601–610, 2018. @article{Zhou2018d, title = {Discriminative learning of similar objects enhances memory for the objects and contexts}, author = {Wenxi Zhou and Haoyu Chen and Jiongjiong Yang}, doi = {10.1101/lm.047514.118}, year = {2018}, date = {2018-01-01}, journal = {Learning & Memory}, volume = {25}, number = {12}, pages = {601--610}, abstract = {How to improve our episodic memory is an important issue in the field of memory. In the present study, we used a discriminative learning paradigm that was similar to a paradigm used in animal studies. In Experiment 1, a picture (e.g., a dog) was either paired with an identical picture, with a similar picture of the same concept (e.g., another dog), or with a picture of a different concept (e.g., a cat). Then, after intervals of 10 min, 1 d, and 1 wk, participants were asked to perform a 2-alternative forced-choice (2AFC) task to discriminate between a repeated and a similar picture, followed by the contextual judgment. In Experiment 2, eye movements were measured when participants encoded the pairs of pictures. The results showed that by discriminative learning, there was better memory performance in the 2AFC task for the "same" and "similar" conditions than for the "different" condition. In addition, there was better contextual memory performance for the "similar" condition than for the other two conditions. With regard to the eye movements, the participants were more likely to fixate on the lure objects and made more saccades between the target and lure objects in the "similar" (versus "different") condition. The number of saccades predicted how well the targets were remembered in both the 2AFC and contextual memory tasks. These results suggested that with discriminative learning of similar objects, detailed information could be better encoded by distinguishing the object from similar interferences, making the details and the contexts better remembered and retained over time.}, keywords = {}, pubstate = {published}, tppubtype = {article} } How to improve our episodic memory is an important issue in the field of memory. In the present study, we used a discriminative learning paradigm that was similar to a paradigm used in animal studies. In Experiment 1, a picture (e.g., a dog) was either paired with an identical picture, with a similar picture of the same concept (e.g., another dog), or with a picture of a different concept (e.g., a cat). Then, after intervals of 10 min, 1 d, and 1 wk, participants were asked to perform a 2-alternative forced-choice (2AFC) task to discriminate between a repeated and a similar picture, followed by the contextual judgment. In Experiment 2, eye movements were measured when participants encoded the pairs of pictures. The results showed that by discriminative learning, there was better memory performance in the 2AFC task for the "same" and "similar" conditions than for the "different" condition. In addition, there was better contextual memory performance for the "similar" condition than for the other two conditions. With regard to the eye movements, the participants were more likely to fixate on the lure objects and made more saccades between the target and lure objects in the "similar" (versus "different") condition. The number of saccades predicted how well the targets were remembered in both the 2AFC and contextual memory tasks. 
These results suggested that with discriminative learning of similar objects, detailed information could be better encoded by distinguishing the object from similar interferences, making the details and the contexts better remembered and retained over time. |
Wei Zhou; Hua Shu; Kevin Miller; Ming Yan Reliance on orthography and phonology in reading of Chinese: A developmental study Journal Article Journal of Research in Reading, 41 (2), pp. 370–391, 2018. @article{Zhou2018e, title = {Reliance on orthography and phonology in reading of Chinese: A developmental study}, author = {Wei Zhou and Hua Shu and Kevin Miller and Ming Yan}, doi = {10.1111/1467-9817.12111}, year = {2018}, date = {2018-01-01}, journal = {Journal of Research in Reading}, volume = {41}, number = {2}, pages = {370--391}, abstract = {Background: Disruptions of reading processes due to text substitutions can measure how readers use lexical information. Methods: With eye-movement recording, children and adults viewed sentences with either identical, orthographically similar, homophonic or unrelated substitutions of the first characters in target words. To the extent that readers rely on orthographic or phonological cues, substitutions that contain such cues should cause less disruption in reading than do unrelated substitutions. Results: On pretarget words, there was a reliable reduction in gaze duration due to homophonic substitution only for children. On target words, we observed reliable recovery effects due to orthographic similarity for adults. On post-target words, adults had better orthographic-based and phonological-based recovery abilities than children. Conclusions: The combination of eye movement recording and the error detection paradigm offers a novel implicit paradigm for studying reading development: during sentence reading, beginning readers of Chinese may rely on phonological mediation, while skilled readers have more direct access to semantics from orthography.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Background: Disruptions of reading processes due to text substitutions can measure how readers use lexical information. Methods: With eye-movement recording, children and adults viewed sentences with either identical, orthographically similar, homophonic or unrelated substitutions of the first characters in target words. To the extent that readers rely on orthographic or phonological cues, substitutions that contain such cues should cause less disruption in reading than do unrelated substitutions. Results: On pretarget words, there was a reliable reduction in gaze duration due to homophonic substitution only for children. On target words, we observed reliable recovery effects due to orthographic similarity for adults. On post-target words, adults had better orthographic-based and phonological-based recovery abilities than children. Conclusions: The combination of eye movement recording and the error detection paradigm offers a novel implicit paradigm for studying reading development: during sentence reading, beginning readers of Chinese may rely on phonological mediation, while skilled readers have more direct access to semantics from orthography. |
Peiyun Zhou; Yun Yao; Kiel Christianson When structure competes with semantics: Reading Chinese relative clauses Journal Article Collabra: Psychology, 4 (1), pp. 1–16, 2018. @article{Zhou2018f, title = {When structure competes with semantics: Reading Chinese relative clauses}, author = {Peiyun Zhou and Yun Yao and Kiel Christianson}, doi = {10.1525/collabra.131}, year = {2018}, date = {2018-01-01}, journal = {Collabra: Psychology}, volume = {4}, number = {1}, pages = {1--16}, abstract = {An ongoing debate in Chinese psycholinguistics is whether subject-relative clauses or object-relative clauses are more difficult to process. The current study asks what happens when structure and plausibility are pitted against each other in Chinese relative clause processing. Chinese relative clause structures and semantic plausibility were manipulated to create both plausible and implausible versions of subject- and object-relative clauses. This method has been used in other languages (e.g., English) to elicit thematic role reversal comprehension errors. Importantly, these errors—as well as online processing difficulties—are especially frequent in implausible versions of dispreferred (noncanonical) structures. If one relative clause structure in Chinese is highly dispreferred, the structural factor and plausibility factor should interact additively. If, however, the structures are relatively equally difficult to process, then there should be only a main effect of plausibility. Sentence reading times as well as analyses on lexical interest areas revealed that Chinese readers used plausibility information almost exclusively when reading the sentences. Relative clause structure had no online effect and small but consistent offline effects. Taken together, the results support a slight preference in offline comprehension for Chinese subject-relative clauses, as well as a central role for semantic plausibility, which appears to be the dominant factor in online processing and a strong determinant of offline comprehension.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An ongoing debate in Chinese psycholinguistics is whether subject-relative clauses or object-relative clauses are more difficult to process. The current study asks what happens when structure and plausibility are pitted against each other in Chinese relative clause processing. Chinese relative clause structures and semantic plausibility were manipulated to create both plausible and implausible versions of subject- and object-relative clauses. This method has been used in other languages (e.g., English) to elicit thematic role reversal comprehension errors. Importantly, these errors—as well as online processing difficulties—are especially frequent in implausible versions of dispreferred (noncanonical) structures. If one relative clause structure in Chinese is highly dispreferred, the structural factor and plausibility factor should interact additively. If, however, the structures are relatively equally difficult to process, then there should be only a main effect of plausibility. Sentence reading times as well as analyses on lexical interest areas revealed that Chinese readers used plausibility information almost exclusively when reading the sentences. Relative clause structure had no online effect and small but consistent offline effects.
Taken together, the results support a slight preference in offline comprehension for Chinese subject-relative clauses, as well as a central role for semantic plausibility, which appears to be the dominant factor in online processing and a strong determinant of offline comprehension. |
Peng Zhou; Weiyi Ma; Likan Zhan; Huimin Ma Using the visual world paradigm to study sentence comprehension in Mandarin-speaking children with autism Journal Article Journal of Visualized Experiments, (140), pp. 1–8, 2018. @article{Zhou2018g, title = {Using the visual world paradigm to study sentence comprehension in Mandarin-speaking children with autism}, author = {Peng Zhou and Weiyi Ma and Likan Zhan and Huimin Ma}, doi = {10.3791/58452}, year = {2018}, date = {2018-01-01}, journal = {Journal of Visualized Experiments}, number = {140}, pages = {1--8}, abstract = {Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying sentence comprehension remain largely unclear. The present study presents a protocol to examine the sentence comprehension abilities of preschool children with autism. More specifically, a visual world paradigm of eye-tracking is used to explore the moment-to-moment sentence comprehension in the children. The paradigm has multiple advantages. First, it is sensitive to the time course of sentence comprehension and thus can provide rich information about how sentence comprehension unfolds over time. Second, it imposes minimal task and communication demands, so it is ideal for testing children with autism. To further minimize the computational burden of children, the present study measures eye movements that arise as automatic responses to linguistic input rather than measuring eye movements that accompany conscious responses to spoken instructions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying sentence comprehension remain largely unclear. The present study presents a protocol to examine the sentence comprehension abilities of preschool children with autism. More specifically, a visual world paradigm of eye-tracking is used to explore the moment-to-moment sentence comprehension in the children. The paradigm has multiple advantages. First, it is sensitive to the time course of sentence comprehension and thus can provide rich information about how sentence comprehension unfolds over time. Second, it imposes minimal task and communication demands, so it is ideal for testing children with autism. To further minimize the computational burden of children, the present study measures eye movements that arise as automatic responses to linguistic input rather than measuring eye movements that accompany conscious responses to spoken instructions. |
Peng Zhou; Likan Zhan; Huimin Ma Predictive language processing in preschool children with autism spectrum disorder: An eye-tracking study Journal Article Journal of Psycholinguistic Research, 48 (2), pp. 431–452, 2019. @article{Zhou2019, title = {Predictive language processing in preschool children with autism spectrum disorder: An eye-tracking study}, author = {Peng Zhou and Likan Zhan and Huimin Ma}, doi = {10.1007/s10936-018-9612-5}, year = {2019}, date = {2019-01-01}, journal = {Journal of Psycholinguistic Research}, volume = {48}, number = {2}, pages = {431--452}, publisher = {Springer US}, abstract = {Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) are able to use verb information predictively to anticipate the upcoming linguistic input during real-time sentence comprehension. 26 five-year-olds with ASD, 25 typically developing (TD) five-year-olds and 24 TD four-year-olds were tested using the visual world eye-tracking paradigm. The results showed that the 5-year-olds with ASD, like their TD peers, exhibited verb-based anticipatory eye movements during real-time sentence comprehension. No difference was observed between the ASD and TD groups in the time course of their eye gaze patterns, indicating that Mandarin-speaking preschool children with ASD are able to use verb information as effectively and rapidly as TD peers to predict the upcoming linguistic input.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) are able to use verb information predictively to anticipate the upcoming linguistic input during real-time sentence comprehension. 26 five-year-olds with ASD, 25 typically developing (TD) five-year-olds and 24 TD four-year-olds were tested using the visual world eye-tracking paradigm. The results showed that the 5-year-olds with ASD, like their TD peers, exhibited verb-based anticipatory eye movements during real-time sentence comprehension. No difference was observed between the ASD and TD groups in the time course of their eye gaze patterns, indicating that Mandarin-speaking preschool children with ASD are able to use verb information as effectively and rapidly as TD peers to predict the upcoming linguistic input. |
Peng Zhou; Likan Zhan; Huimin Ma Understanding others' minds: Social inference in preschool children with autism spectrum disorder Journal Article Journal of Autism and Developmental Disorders, pp. 1–12, 2019. @article{Zhou2019a, title = {Understanding others' minds: Social inference in preschool children with autism spectrum disorder}, author = {Peng Zhou and Likan Zhan and Huimin Ma}, doi = {10.1007/s10803-019-04167-x}, year = {2019}, date = {2019-08-01}, journal = {Journal of Autism and Developmental Disorders}, pages = {1--12}, publisher = {Springer Science and Business Media LLC}, abstract = {The study used an eye-tracking task to investigate whether preschool children with autism spectrum disorder (ASD) are able to make inferences about others' behavior in terms of their mental states in a social setting. Fifty typically developing (TD) 4- and 5-year-olds and 22 5-year-olds with ASD participated in the study, where their eye-movements were recorded as automatic responses to given situations. The results show that unlike their TD peers, children with ASD failed to exhibit eye gaze patterns that reflect their ability to infer about others' behavior by spontaneously encoding socially relevant information and attributing mental states to others. Implications of the findings were discussed in relation to the proposal that implicit/spontaneous Theory of Mind is persistently impaired in ASD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The study used an eye-tracking task to investigate whether preschool children with autism spectrum disorder (ASD) are able to make inferences about others' behavior in terms of their mental states in a social setting. Fifty typically developing (TD) 4- and 5-year-olds and 22 5-year-olds with ASD participated in the study, where their eye-movements were recorded as automatic responses to given situations. The results show that unlike their TD peers, children with ASD failed to exhibit eye gaze patterns that reflect their ability to infer about others' behavior by spontaneously encoding socially relevant information and attributing mental states to others. Implications of the findings were discussed in relation to the proposal that implicit/spontaneous Theory of Mind is persistently impaired in ASD. |
Wei Zhou; Yadong Gao; Yulin Chang; Mengmeng Su Hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance Journal Article Journal of General Psychology, 146 (1), pp. 34–49, 2019. @article{Zhou2019b, title = {Hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance}, author = {Wei Zhou and Yadong Gao and Yulin Chang and Mengmeng Su}, doi = {10.1080/00221309.2018.1535483}, year = {2019}, date = {2019-01-01}, journal = {Journal of General Psychology}, volume = {146}, number = {1}, pages = {34--49}, publisher = {Psychology Press}, abstract = {Hemispheric predominance has been well documented in the visual perception of alphabetic words. However, the hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance are far from clear. In the divided visual field paradigm, participants were required to judge the orthography, phonology, or semantics of Chinese characters, which were presented randomly in the left or right visual field. The results showed a right visual field/left hemispheric superiority in the phonological judgment task, but no hemispheric advantage in the orthographic or semantic task was found. In addition, reaction times in the right visual field for phonological and semantic tasks were significantly correlated with the reading test score. These results suggest that both hemispheres are involved in the orthographic and semantic processing of Chinese characters, and that left-lateralized phonological processing is important for Chinese fluent reading.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Hemispheric predominance has been well documented in the visual perception of alphabetic words. However, the hemispheric processing of lexical information in Chinese character recognition and its relationship to reading performance are far from clear. In the divided visual field paradigm, participants were required to judge the orthography, phonology, or semantics of Chinese characters, which were presented randomly in the left or right visual field. The results showed a right visual field/left hemispheric superiority in the phonological judgment task, but no hemispheric advantage in the orthographic or semantic task was found. In addition, reaction times in the right visual field for phonological and semantic tasks were significantly correlated with the reading test score. These results suggest that both hemispheres are involved in the orthographic and semantic processing of Chinese characters, and that left-lateralized phonological processing is important for Chinese fluent reading. |
Ying Joey Zhou; Alexis Pérez-Bellido; Saskia Haegens; Floris P de Lange Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study Journal Article Journal of Cognitive Neuroscience, pp. 1–12, 2019. @article{Zhou2019c, title = {Perceptual expectations modulate low-frequency activity: A statistical learning magnetoencephalography study}, author = {Ying Joey Zhou and Alexis Pérez-Bellido and Saskia Haegens and Floris P de Lange}, doi = {10.1162/jocn_a_01511}, year = {2019}, date = {2019-12-01}, journal = {Journal of Cognitive Neuroscience}, pages = {1--12}, publisher = {MIT Press - Journals}, abstract = {Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7–14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7–14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli. |
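For readers unfamiliar with the alpha-band (7–14 Hz) measure reported above, the following is a minimal, generic sketch of one common way to extract alpha power from a single sensor time series, via a bandpass filter and the Hilbert envelope. It illustrates the measure only, assuming a clean 1-D signal; the authors' MEG pipeline (sensor selection, baselining, statistics) is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(x, fs, band=(7.0, 14.0)):
    """Instantaneous alpha-band power for one sensor.

    x    : 1-D time series (e.g., one MEG channel)
    fs   : sampling rate in Hz
    band : passband edges in Hz (7-14 Hz as in the study above)
    """
    nyq = fs / 2.0
    # 4th-order Butterworth bandpass, applied forward-backward (zero phase)
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x)
    # Squared magnitude of the analytic signal = instantaneous power envelope
    return np.abs(hilbert(filtered)) ** 2
```

Averaging this envelope over a post-stimulus window, per condition, gives the kind of expected-versus-unexpected contrast the abstract describes.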
Peng Zhou; Weiyi Ma; Likan Zhan A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study Journal Article First Language, 40 (1), pp. 41–63, 2020. @article{Zhou2020, title = {A deficit in using prosodic cues to understand communicative intentions by children with autism spectrum disorders: An eye-tracking study}, author = {Peng Zhou and Weiyi Ma and Likan Zhan}, doi = {10.1177/0142723719885270}, year = {2020}, date = {2020-01-01}, journal = {First Language}, volume = {40}, number = {1}, pages = {41--63}, abstract = {The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorders (ASD) were able to use prosodic cues to understand others' communicative intentions. Using the visual world eye-tracking paradigm, the study found that unlike typically developing (TD) 4-year-olds, both 4-year-olds with ASD and 5-year-olds with ASD exhibited an eye gaze pattern that reflected their inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings also show that there was no development in this ability from 4 years of age to 5 years of age. The findings indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and this ability might be inherently impaired in ASD. |
Yan Zhou Psychological analysis of online teaching in colleges based on eye-tracking technology Journal Article Revista Argentina de Clinica Psicologica, 29 (2), pp. 523–529, 2020. @article{Zhou2020a, title = {Psychological analysis of online teaching in colleges based on eye-tracking technology}, author = {Yan Zhou}, doi = {10.24205/03276716.2020.272}, year = {2020}, date = {2020-01-01}, journal = {Revista Argentina de Clinica Psicologica}, volume = {29}, number = {2}, pages = {523--529}, abstract = {Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model, including pupil diameter, fixation time, re-reading time and retrospective time. A total of 100 college students were selected for an eye movement test in an online teaching environment. The test data were analyzed in SPSS. The results show that the eye movement parameters are greatly affected by the key points in teaching and the contents that interest the students; the two influencing factors can arouse and attract the students' attention in the teaching process. The research results provide an important reference for the psychological study of online teaching in colleges.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Eye-tracking technology has been widely adopted to capture the psychological changes of college students in the learning process. With the aid of eye-tracking technology, this paper establishes a psychological analysis model for students in online teaching. Four eye movement parameters were selected for the model, including pupil diameter, fixation time, re-reading time and retrospective time. A total of 100 college students were selected for an eye movement test in an online teaching environment. The test data were analyzed in SPSS. The results show that the eye movement parameters are greatly affected by the key points in teaching and the contents that interest the students; the two influencing factors can arouse and attract the students' attention in the teaching process. The research results provide an important reference for the psychological study of online teaching in colleges. |
Weina Zhu; Jan Drewes; Karl R Gegenfurtner Animal detection in natural images: effects of color and image database Journal Article PLoS ONE, 8 (10), pp. e75816, 2013. @article{Zhu2013, title = {Animal detection in natural images: effects of color and image database}, author = {Weina Zhu and Jan Drewes and Karl R Gegenfurtner}, doi = {10.1371/journal.pone.0075816}, year = {2013}, date = {2013-01-01}, journal = {PLoS ONE}, volume = {8}, number = {10}, pages = {e75816}, abstract = {The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli both in the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli both in the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color.
Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. |
Xiao Lin Zhu; Shu Ping Tan; Fu De Yang; Wei Sun; Chong Sheng Song; Jie Feng Cui; Yan Li Zhao; Feng Mei Fan; Ya Jun Li; Yun Long Tan; Yi Zhuang Zou Visual scanning of emotional faces in schizophrenia Journal Article Neuroscience Letters, 552, pp. 46–51, 2013. @article{Zhu2013a, title = {Visual scanning of emotional faces in schizophrenia}, author = {Xiao Lin Zhu and Shu Ping Tan and Fu {De Yang} and Wei Sun and Chong Sheng Song and Jie Feng Cui and Yan Li Zhao and Feng Mei Fan and Ya Jun Li and Yun Long Tan and Yi Zhuang Zou}, doi = {10.1016/j.neulet.2013.07.046}, year = {2013}, date = {2013-01-01}, journal = {Neuroscience Letters}, volume = {552}, pages = {46--51}, publisher = {Elsevier Ireland Ltd}, abstract = {This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they directed significantly more fixations, of longer duration, to interest areas (IAs) such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces. These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated eye movement differences during facial emotion recognition between 101 patients with chronic schizophrenia and 101 controls. Independent of facial emotion, patients with schizophrenia processed facial information inefficiently; they directed significantly more fixations, of longer duration, to interest areas (IAs) such as the eyes, nose, mouth, and nasion. The total fixation number, mean fixation duration, and total fixation duration were significantly increased in schizophrenia. Additionally, the number of fixations per second to IAs (IA fixation number/s) was significantly lower in schizophrenia. However, no differences were found between the two groups in the proportion of number of fixations to IAs or total fixation number (IA fixation number %). Interestingly, the negative symptoms of patients with schizophrenia negatively correlated with IA fixation number %. Both groups showed significantly greater attention to positive faces. Compared to controls, patients with schizophrenia exhibited significantly more fixations directed to IAs, a higher total fixation number, and lower IA fixation number/s for negative faces.
These results indicate that facial processing efficiency is significantly decreased in schizophrenia, but no difference was observed in processing strategy. Patients with schizophrenia may have special deficits in processing negative faces, and negative symptoms may affect visual scanning parameters. |
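As an illustration of the scanning measures named in the entry above (total fixation number, mean and total fixation duration, IA fixation number/s, IA fixation number %), here is a minimal sketch of how such interest-area statistics could be computed from a fixation list. Field names and exact definitions are assumptions for clarity, not the study's operationalizations.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float      # gaze position on screen (pixels)
    y: float
    dur: float    # fixation duration (s)
    in_ia: bool   # falls inside an interest area (eyes/nose/mouth/nasion)?

def scanning_metrics(fixations, trial_dur):
    """Per-trial visual-scanning summary; trial_dur is in seconds."""
    n_total = len(fixations)
    total_dur = sum(f.dur for f in fixations)
    n_ia = sum(1 for f in fixations if f.in_ia)
    return {
        "total_fixation_number": n_total,
        "mean_fixation_duration": total_dur / n_total if n_total else 0.0,
        "total_fixation_duration": total_dur,
        "ia_fixation_number_per_s": n_ia / trial_dur,
        "ia_fixation_number_pct": 100.0 * n_ia / n_total if n_total else 0.0,
    }
```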
Rongjuan Zhu; Yangmei Luo; Xuqun You; Ziyu Wang Spatial bias induced by simple addition and subtraction: From eye movement evidence Journal Article Perception, 47 (2), pp. 143–157, 2018. @article{Zhu2018, title = {Spatial bias induced by simple addition and subtraction: From eye movement evidence}, author = {Rongjuan Zhu and Yangmei Luo and Xuqun You and Ziyu Wang}, doi = {10.1177/0301006617738718}, year = {2018}, date = {2018-01-01}, journal = {Perception}, volume = {47}, number = {2}, pages = {143--157}, abstract = {The associations between number and space have been intensively investigated. Recent studies indicated that this association could extend to more complex tasks, such as mental arithmetic. However, the mechanism of arithmetic-space associations in mental arithmetic was still a topic of debate. Thus, in the current study, we adopted eye-tracking technology to investigate whether spatial bias induced by mental arithmetic was related to spatial attention shifts on the mental number line or to a semantic link between the operator and space. In Experiment 1, participants moved their eyes to the corresponding response area according to the cues after solving addition and subtraction problems. The results showed that the participants moved their eyes faster to the leftward space after solving subtraction problems and faster to the right after solving addition problems. However, there was no spatial bias observed when the second operand was zero in the same time window, which indicated that the emergence of spatial bias may be associated with spatial attention shifts on the mental number line. In Experiment 2, participants responded to the operator (operation plus and operation minus) with their eyes. The results showed that mere presentation of the operator did not cause spatial bias. Therefore, the arithmetic–space associations might be related to the movement along the mental number line.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The associations between number and space have been intensively investigated. Recent studies indicated that this association could extend to more complex tasks, such as mental arithmetic. However, the mechanism of arithmetic-space associations in mental arithmetic was still a topic of debate. Thus, in the current study, we adopted eye-tracking technology to investigate whether spatial bias induced by mental arithmetic was related to spatial attention shifts on the mental number line or to a semantic link between the operator and space. In Experiment 1, participants moved their eyes to the corresponding response area according to the cues after solving addition and subtraction problems. The results showed that the participants moved their eyes faster to the leftward space after solving subtraction problems and faster to the right after solving addition problems. However, there was no spatial bias observed when the second operand was zero in the same time window, which indicated that the emergence of spatial bias may be associated with spatial attention shifts on the mental number line. In Experiment 2, participants responded to the operator (operation plus and operation minus) with their eyes. The results showed that mere presentation of the operator did not cause spatial bias. Therefore, the arithmetic–space associations might be related to the movement along the mental number line. |
Rongjuan Zhu; Xuqun You; Shuoqiu Gan; Jinwei Wang Spatial attention shifts in addition and subtraction arithmetic: Evidence of eye movement Journal Article Perception, 48 (9), pp. 835–849, 2019. @article{Zhu2019a, title = {Spatial attention shifts in addition and subtraction arithmetic: Evidence of eye movement}, author = {Rongjuan Zhu and Xuqun You and Shuoqiu Gan and Jinwei Wang}, doi = {10.1177/0301006619865156}, year = {2019}, date = {2019-01-01}, journal = {Perception}, volume = {48}, number = {9}, pages = {835--849}, abstract = {Recently, it has been proposed that solving addition and subtraction problems can evoke horizontal shifts of spatial attention. However, prior to this study, it remained unclear whether orienting shifts of spatial attention relied on actual arithmetic processes (i.e., the activated magnitude) or the semantic spatial association of the operator. In this study, spatial–arithmetic associations were explored through three experiments using an eye tracker, which attempted to investigate the mechanism of those associations. Experiment 1 replicated spatial–arithmetic associations in addition and subtraction problems. Experiments 2 and 3 selected zero as the operand to investigate whether these arithmetic problems could induce shifts of spatial attention. Experiment 2 indicated that addition and subtraction problems (zero as the second operand, i.e., 2 + 0) do not induce shifts of spatial attention. Experiment 3 showed that addition and subtraction arithmetic (zero as the first operand, i.e., 0 + 2) do facilitate rightward and leftward eye movement, respectively. This indicates that the operator alone does not induce horizontal eye movement. However, our findings support the idea that solving addition and subtraction problems is associated with horizontal shifts of spatial attention.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recently, it has been proposed that solving addition and subtraction problems can evoke horizontal shifts of spatial attention. However, prior to this study, it remained unclear whether orienting shifts of spatial attention relied on actual arithmetic processes (i.e., the activated magnitude) or the semantic spatial association of the operator. In this study, spatial–arithmetic associations were explored through three experiments using an eye tracker, which attempted to investigate the mechanism of those associations. Experiment 1 replicated spatial–arithmetic associations in addition and subtraction problems. Experiments 2 and 3 selected zero as the operand to investigate whether these arithmetic problems could induce shifts of spatial attention. Experiment 2 indicated that addition and subtraction problems (zero as the second operand, i.e., 2 + 0) do not induce shifts of spatial attention. Experiment 3 showed that addition and subtraction arithmetic (zero as the first operand, i.e., 0 + 2) do facilitate rightward and leftward eye movement, respectively. This indicates that the operator alone does not induce horizontal eye movement. However, our findings support the idea that solving addition and subtraction problems is associated with horizontal shifts of spatial attention. |
Hongzhi Zhu; Septimiu Salcudean; Robert Rohling The Neyman Pearson detection of microsaccades with maximum likelihood estimation of parameters Journal Article Journal of Vision, 19 (13), pp. 1–17, 2019. @article{Zhu2019b, title = {The Neyman Pearson detection of microsaccades with maximum likelihood estimation of parameters}, author = {Hongzhi Zhu and Septimiu Salcudean and Robert Rohling}, year = {2019}, date = {2019-01-01}, journal = {Journal of Vision}, volume = {19}, number = {13}, pages = {1--17}, abstract = {Despite the fact that the velocity threshold method is widely applied, the detection of microsaccades continues to be a challenging problem, due to gaze-tracking inaccuracy and the transient nature of microsaccades. Important parameters associated with a saccadic event, e.g., saccade duration, amplitude, and maximum velocity, are sometimes imprecisely estimated, which may lead to biases in inferring the roles of microsaccades in perception and cognition. To overcome the biases and have a better detection algorithm for microsaccades, we propose a novel statistical model for the tracked gaze positions during eye fixations. In this model, we incorporate a parametrization that has been previously applied to model saccades, which allows us to veridically capture the velocity profile of saccadic eye movements. Based on our model, we derive the Neyman Pearson Detector (NPD) for saccadic events. Implemented in conjunction with the maximum likelihood estimation method, our NPD can detect a saccadic event and estimate all parameters simultaneously. Because of its adaptive nature and its statistical optimality, our NPD method was able to better detect microsaccades in some datasets when compared with a recently proposed state-of-the-art method based on convolutional neural networks. NPD also yielded comparable performance with a recently developed Bayesian algorithm, with the added benefit of modeling a more biologically veridical velocity profile of the saccade. As opposed to these algorithms, NPD can lend itself better to online saccade detection, and thus has potential for human-computer interaction applications. Our algorithm is publicly available at https://github.com/hz-zhu/NPD-micro-saccade-detection.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Despite the fact that the velocity threshold method is widely applied, the detection of microsaccades continues to be a challenging problem, due to gaze-tracking inaccuracy and the transient nature of microsaccades. Important parameters associated with a saccadic event, e.g., saccade duration, amplitude, and maximum velocity, are sometimes imprecisely estimated, which may lead to biases in inferring the roles of microsaccades in perception and cognition. To overcome the biases and have a better detection algorithm for microsaccades, we propose a novel statistical model for the tracked gaze positions during eye fixations. In this model, we incorporate a parametrization that has been previously applied to model saccades, which allows us to veridically capture the velocity profile of saccadic eye movements. Based on our model, we derive the Neyman Pearson Detector (NPD) for saccadic events. Implemented in conjunction with the maximum likelihood estimation method, our NPD can detect a saccadic event and estimate all parameters simultaneously. 
Because of its adaptive nature and its statistical optimality, our NPD method was able to better detect microsaccades in some datasets when compared with a recently proposed state-of-the-art method based on convolutional neural networks. NPD also yielded comparable performance with a recently developed Bayesian algorithm, with the added benefit of modeling a more biologically veridical velocity profile of the saccade. As opposed to these algorithms, NPD can lend itself better to online saccade detection, and thus has potential for human-computer interaction applications. Our algorithm is publicly available at https://github.com/hz-zhu/NPD-micro-saccade-detection. |
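The velocity-threshold baseline this paper compares against is well documented in the microsaccade literature. Below is a compact Python sketch in the style of the Engbert-Kliegl elliptic-threshold detector, with common default parameters (lambda = 6, minimum duration 6 samples at 1000 Hz); these values are illustrative assumptions, not taken from this paper, and the authors' own NPD implementation is available at the repository linked above.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_dur=6):
    """Velocity-threshold microsaccade detection (Engbert-Kliegl style).

    x, y    : gaze position traces in degrees (1-D arrays, one fixation)
    fs      : sampling rate in Hz
    lam     : threshold multiplier on the median-based velocity SD
    min_dur : minimum event duration in samples
    Returns a list of (start, end, peak_velocity) tuples; indices are
    relative to the differentiated trace (offset by 2 raw samples).
    """
    # Smoothed velocity via a 5-point moving difference
    vx = fs * (x[4:] - x[:-4] + x[3:-1] - x[1:-3]) / 6.0
    vy = fs * (y[4:] - y[:-4] + y[3:-1] - y[1:-3]) / 6.0

    # Median-based (robust) estimates of the velocity standard deviation
    eps = 1e-12
    sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, eps))
    sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, eps))

    # Samples outside the elliptic threshold are saccade candidates
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

    # Group consecutive supra-threshold samples into events
    events, start = [], None
    for i, c in enumerate(crit):
        if c and start is None:
            start = i
        elif not c and start is not None:
            if i - start >= min_dur:
                peak_v = np.max(np.hypot(vx[start:i], vy[start:i]))
                events.append((start, i, peak_v))
            start = None
    return events
```

The fixed multiplier on a robust velocity spread is exactly the kind of hand-tuned threshold the NPD replaces with a likelihood-ratio test and jointly estimated saccade parameters.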
Jing Zhu; Ying Wang; Rong La; Jiawei Zhan; Junhong Niu; Shuai Zeng; Xiping Hu Multimodal mild depression recognition based on EEG-EM synchronization acquisition network Journal Article IEEE Access, 7, pp. 28196–28210, 2019. @article{Zhu2019e, title = {Multimodal mild depression recognition based on EEG-EM synchronization acquisition network}, author = {Jing Zhu and Ying Wang and Rong La and Jiawei Zhan and Junhong Niu and Shuai Zeng and Xiping Hu}, doi = {10.1109/ACCESS.2019.2901950}, year = {2019}, date = {2019-01-01}, journal = {IEEE Access}, volume = {7}, pages = {28196--28210}, publisher = {IEEE}, abstract = {In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EM and EEG data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this paper, we used an electroencephalography (EEG)-eye movement (EM) synchronization acquisition network to simultaneously record both EEG and EM physiological signals of mild depression patients and normal controls during free viewing. Then, we consider a multimodal feature fusion method that can best discriminate between mild depression and normal control subjects as a step toward achieving our long-term aim of developing an objective and effective multimodal system that assists doctors during diagnosis and monitoring of mild depression. Based on the multimodal denoising autoencoder, we use two feature fusion strategies (feature fusion and hidden layer fusion) for fusion of the EEG and EM signals to improve the recognition performance of classifiers for mild depression. Our experimental results indicate that the EEG-EM synchronization acquisition network ensures that the recorded EM and EEG data streams are synchronized with millisecond precision, and both fusion methods can improve the mild depression recognition accuracy, thus demonstrating the complementary nature of the modalities.
Compared with the unimodal classification approach that uses only EEG or EM, the feature fusion method slightly improved the recognition accuracy by 1.88%, while the hidden layer fusion method significantly improved the classification rate by up to 7.36%. In particular, the highest classification accuracy achieved in this paper was 83.42%. These results indicate that the multimodal deep learning approaches with input data using a combination of EEG and EM signals are promising in achieving real-time monitoring and identification of mild depression. |
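The two fusion strategies compared above can be pictured schematically. This PyTorch sketch contrasts input-level (feature) fusion with hidden-layer fusion for a binary mild-depression/control decision; the layer sizes, ReLU encoders, and two-class head are assumptions for illustration, and the authors' multimodal denoising-autoencoder training objective is not reproduced here.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Early fusion: concatenate EEG and EM feature vectors, then encode."""
    def __init__(self, d_eeg, d_em, d_hid):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_eeg + d_em, d_hid), nn.ReLU())
        self.clf = nn.Linear(d_hid, 2)  # mild depression vs. control

    def forward(self, eeg, em):
        return self.clf(self.encoder(torch.cat([eeg, em], dim=1)))

class HiddenLayerFusion(nn.Module):
    """Hidden-layer fusion: encode each modality separately, merge the codes."""
    def __init__(self, d_eeg, d_em, d_hid):
        super().__init__()
        self.enc_eeg = nn.Sequential(nn.Linear(d_eeg, d_hid), nn.ReLU())
        self.enc_em = nn.Sequential(nn.Linear(d_em, d_hid), nn.ReLU())
        self.clf = nn.Linear(2 * d_hid, 2)

    def forward(self, eeg, em):
        h = torch.cat([self.enc_eeg(eeg), self.enc_em(em)], dim=1)
        return self.clf(h)
```

The design difference is where the modalities meet: before any encoding (so one encoder must model cross-modal structure directly) versus after each modality has its own learned representation, which is the variant the abstract reports as markedly more accurate.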
Zhuoting Zhu; Yin Hu; Chimei Liao; Ren Huang; Stuart Keel; Yanping Liu; Mingguang He Perceptual learning of visual span improves Chinese reading speed Journal Article Investigative Ophthalmology & Visual Science, 60 (6), pp. 2357–2368, 2019. @article{Zhu2019c, title = {Perceptual learning of visual span improves Chinese reading speed}, author = {Zhuoting Zhu and Yin Hu and Chimei Liao and Ren Huang and Stuart Keel and Yanping Liu and Mingguang He}, year = {2019}, date = {2019-01-01}, journal = {Investigative Ophthalmology & Visual Science}, volume = {60}, number = {6}, pages = {2357--2368}, abstract = {PURPOSE. Evidence has indicated that the size of the visual span (the number of identifiable letters without movement of the eyes) and reading speed can be boosted through perceptual learning in alphabetic scripts. In this study, we investigated whether benefits of perceptual learning could be extended to visual-span size and sentence reading (all characters are presented at the same time) for Chinese characters and explored changes in sensory factors contributing to changes in visual-span size following training. METHODS. We randomly assigned 26 normally sighted subjects to either a control group (n = 13) or a training group (n = 13). Pre- and posttests were administered to evaluate visual-span profiles (VSPs) and reading speed. Training consisted of trigram (sequences of three characters) character-recognition trials over 4 consecutive days. VSPs are plots of recognition accuracy as a function of character position. Visual-span size was quantified as the area under VSPs in bits of information transmitted. A decomposition analysis of VSPs was used to quantify the effects of sensory factors (crowding and mislocation). We compared the size and sensory factors of visual span and reading speed following training. RESULTS. Following training, the visual-span size significantly increased by 11.7 bits, and reading speed increased by 50.8%. The decomposition analysis showed a significant reduction for crowding (−13.1 bits) but a minor increase in the magnitude of mislocation errors (1.46 bits) following training. CONCLUSIONS. These results suggest that perceptual learning expands the visual-span size and further improves Chinese text sentence-reading speed, indicating that visual span may be a common sensory limitation on reading that can be overcome with practice.}, keywords = {}, pubstate = {published}, tppubtype = {article} } PURPOSE. Evidence has indicated that the size of the visual span (the number of identifiable letters without movement of the eyes) and reading speed can be boosted through perceptual learning in alphabetic scripts. In this study, we investigated whether benefits of perceptual learning could be extended to visual-span size and sentence reading (all characters are presented at the same time) for Chinese characters and explored changes in sensory factors contributing to changes in visual-span size following training. METHODS. We randomly assigned 26 normally sighted subjects to either a control group (n = 13) or a training group (n = 13). Pre- and posttests were administered to evaluate visual-span profiles (VSPs) and reading speed. Training consisted of trigram (sequences of three characters) character-recognition trials over 4 consecutive days. VSPs are plots of recognition accuracy as a function of character position.
Visual-span size was quantified as the area under VSPs in bits of information transmitted. A decomposition analysis of VSPs was used to quantify the effects of sensory factors (crowding and mislocation). We compared the size and sensory factors of visual span and reading speed following training. RESULTS. Following training, the visual-span size significantly increased by 11.7 bits, and reading speed increased by 50.8%. The decomposition analysis showed a significant reduction for crowding (-13.1 bits) but a minor increase in the magnitude of mislocation errors (1.46 bits) following training. CONCLUSIONS. These results suggest that perceptual learning expands the visual-span size and further improves Chinese text sentence-reading speed, indicating that visual span may be a common sensory limitation on reading that can be overcome with practice. |
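For readers unfamiliar with the "bits of information transmitted" metric, the sketch below shows one common way visual-span size is computed from a visual-span profile. The linear accuracy-to-bits conversion is the one typically cited for 26-letter alphabets in the visual-span literature; the constants and the example profiles are illustrative assumptions, not the paper's exact values for Chinese characters.

```python
# A minimal sketch, assuming the alphabetic-letter conversion constants.
def position_bits(p_correct: float) -> float:
    """Map recognition accuracy at one character position to bits transmitted."""
    return max(0.0, -0.036996 + 4.6761 * p_correct)

def visual_span_size(vsp: list[float]) -> float:
    """Visual-span size: area under the visual-span profile, in bits."""
    return sum(position_bits(p) for p in vsp)

# Example: made-up profiles peaking at fixation and falling off peripherally.
vsp_pre = [0.45, 0.70, 0.95, 1.00, 0.95, 0.70, 0.45]
vsp_post = [0.65, 0.85, 0.98, 1.00, 0.98, 0.85, 0.65]
print(visual_span_size(vsp_post) - visual_span_size(vsp_pre))  # training gain in bits
```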
Zhuoting Zhu; Yin Hu; Chimei Liao; Stuart Keel; Ren Huang; Yanping Liu; Mingguang He Visual span and cognitive factors affect Chinese reading speed Journal Article Journal of Vision, 19 (14), pp. 1–11, 2019. @article{Zhu2019d, title = {Visual span and cognitive factors affect Chinese reading speed}, author = {Zhuoting Zhu and Yin Hu and Chimei Liao and Stuart Keel and Ren Huang and Yanping Liu and Mingguang He}, doi = {10.1167/19.14.17}, year = {2019}, date = {2019-01-01}, journal = {Journal of Vision}, volume = {19}, number = {14}, pages = {1--11}, abstract = {Visual span, which is the number of recognizable letters seen without moving the eyes, has been proven to impose a sensory limitation for alphabetic reading speed (Chung, 2011; Chung, Legge, & Cheung, 2004; Lee, Kwon, Legge, & Gefroh, 2010; Legge, Ahn, Klitz, & Luebker, 1997; Legge, Hooven, Klitz, Stephen Mansfield, & Tjan, 2002; D. Yu, Cheung, Legge, & Chung, 2010). However, little is known about the effects of visual span on Chinese reading performance. Of note, Chinese text differs greatly from that of the alphabetic writing system. There are no spaces between words, and readers are forced to utilize their lexical knowledge to segment Chinese characters into meaningful words, thus increasing the relative importance of cognitive/linguistic factors in reading performance. Therefore, the aim of the present study is to explore whether visual span and cognitive/linguistic factors have independent effects on Chinese reading speed. Visual span profiles, cognitive/linguistic factors indicated by word frequency, and Chinese sentence-reading performance were collected from 28 native Chinese-speaking subjects. We found that the visual-span size and cognitive/linguistic factors independently contributed to Chinese sentence-reading speed (all ps < 0.05). We concluded that both the visual-span size and cognitive/linguistic factors represented bottlenecks for Chinese sentence-reading speed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual span, which is the number of recognizable letters seen without moving the eyes, has been proven to impose a sensory limitation for alphabetic reading speed (Chung, 2011; Chung, Legge, & Cheung, 2004; Lee, Kwon, Legge, & Gefroh, 2010; Legge, Ahn, Klitz, & Luebker, 1997; Legge, Hooven, Klitz, Stephen Mansfield, & Tjan, 2002; D. Yu, Cheung, Legge, & Chung, 2010). However, little is known about the effects of visual span on Chinese reading performance. Of note, Chinese text differs greatly from that of the alphabetic writing system. There are no spaces between words, and readers are forced to utilize their lexical knowledge to segment Chinese characters into meaningful words, thus increasing the relative importance of cognitive/linguistic factors in reading performance. Therefore, the aim of the present study is to explore whether visual span and cognitive/linguistic factors have independent effects on Chinese reading speed. Visual span profiles, cognitive/linguistic factors indicated by word frequency, and Chinese sentence-reading performance were collected from 28 native Chinese-speaking subjects. We found that the visual-span size and cognitive/linguistic factors independently contributed to Chinese sentence-reading speed (all ps < 0.05). We concluded that both the visual-span size and cognitive/linguistic factors represented bottlenecks for Chinese sentence-reading speed. |
Jiawen Zhu; Kara Dawson; Albert D Ritzhaupt; Pavlo Pasha Antonenko Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention Journal Article Journal of Educational Multimedia and Hypermedia, 29 (3), pp. 265–284, 2020. @article{Zhu2020, title = {Investigating how multimedia and modality design principles influence student learning performance, satisfaction, mental effort, and visual attention}, author = {Jiawen Zhu and Kara Dawson and Albert D Ritzhaupt and Pavlo Pasha Antonenko}, year = {2020}, date = {2020-01-01}, journal = {Journal of Educational Multimedia and Hypermedia}, volume = {29}, number = {3}, pages = {265--284}, abstract = {This study investigated the effects of multimedia and modality design principles using a learning intervention about Australia with a sample of college students and employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principles held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This study investigated the effects of multimedia and modality design principles using a learning intervention about Australia with a sample of college students and employing measures of learning outcomes, visual attention, satisfaction, and mental effort. Seventy-five college students were systematically assigned to one of four conditions: a) text with pictures, b) text without pictures, c) narration with pictures, or d) narration without pictures. No significant differences were found among the four groups in learning performance, satisfaction, or self-reported mental effort, and participants rarely focused their visual attention on the representational pictures provided in the intervention. Neither the multimedia nor the modality principles held true in this study. However, participants in narration environments focused significantly more visual attention on the “Next” button, a navigational aid included on all slides. This study contributes to the research on visual attention and navigational aids in multimedia learning, and it suggests such features may cause distractions, particularly when spoken text is provided without on-screen text. The paper also offers implications for the design of multimedia learning. |
Jing Zhu; Zihan Wang; Tao Gong; Shuai Zeng; Xiaowei Li; Bin Hu; Jianxiu Li; Shuting Sun; Lan Zhang An improved classification model for depression detection using EEG and eye tracking data Journal Article IEEE Transactions on Nanobioscience, 19 (3), pp. 527–537, 2020. @article{Zhu2020a, title = {An improved classification model for depression detection using EEG and eye tracking data}, author = {Jing Zhu and Zihan Wang and Tao Gong and Shuai Zeng and Xiaowei Li and Bin Hu and Jianxiu Li and Shuting Sun and Lan Zhang}, doi = {10.1109/TNB.2020.2990690}, year = {2020}, date = {2020-01-01}, journal = {IEEE Transactions on Nanobioscience}, volume = {19}, number = {3}, pages = {527--537}, abstract = {At present, depression has become a major health burden worldwide. However, the diagnosis of depression faces many problems, such as low patient cooperation, subjective bias and low accuracy. A reliable and objective evaluation method is therefore needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movement (EM) data have been widely used for depression detection because they are easy to record and non-invasive. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic variants of CBEM are discussed. In the proposed model, the EEG or EM dataset is divided into subsets according to the experimental context, and a majority-vote strategy then determines each subject's label. The method was validated on two datasets, one of free-viewing eye tracking and one of resting-state EEG, comprising 36 and 34 subjects, respectively. On these two datasets, CBEM achieved accuracies of 82.5% and 92.65%, respectively, outperforming traditional classification methods. Our findings provide an effective way to improve the accuracy of depression identification, which in the future could support the auxiliary diagnosis of depression.}, keywords = {}, pubstate = {published}, tppubtype = {article} } At present, depression has become a major health burden worldwide. However, the diagnosis of depression faces many problems, such as low patient cooperation, subjective bias and low accuracy. A reliable and objective evaluation method is therefore needed to achieve effective depression detection. Electroencephalogram (EEG) and eye movement (EM) data have been widely used for depression detection because they are easy to record and non-invasive. This research proposes a content-based ensemble method (CBEM) to improve depression detection accuracy; both static and dynamic variants of CBEM are discussed. In the proposed model, the EEG or EM dataset is divided into subsets according to the experimental context, and a majority-vote strategy then determines each subject's label. The method was validated on two datasets, one of free-viewing eye tracking and one of resting-state EEG, comprising 36 and 34 subjects, respectively. On these two datasets, CBEM achieved accuracies of 82.5% and 92.65%, respectively, outperforming traditional classification methods. Our findings provide an effective way to improve the accuracy of depression identification, which in the future could support the auxiliary diagnosis of depression. |
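The core of CBEM as described above, partitioning trials by experimental context, training a classifier per context, and taking a majority vote over one subject's predictions, can be sketched as follows. The logistic-regression classifier and the data layout are illustrative assumptions, not the authors' exact pipeline.

```python
# A minimal sketch of the content-based ensemble idea, under assumed data shapes.
from collections import Counter
from sklearn.linear_model import LogisticRegression

def cbem_predict(train_by_context, test_by_context):
    """train_by_context: dict mapping a context id to (X_train, y_train);
    test_by_context: dict mapping the same context ids to the held-out
    subject's trial features for that context. Returns the majority label."""
    votes = []
    for ctx, (X_train, y_train) in train_by_context.items():
        # One classifier per experimental context.
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        votes.extend(clf.predict(test_by_context[ctx]))
    # Majority vote across all context-specific trial predictions.
    return Counter(votes).most_common(1)[0][0]
```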
Mengyan Zhu; Xiangling Zhuang; Guojie Ma Readers extract semantic information from parafoveal two-character synonyms in Chinese reading Journal Article Reading and Writing, pp. 1–18, 2020. @article{Zhu2020b, title = {Readers extract semantic information from parafoveal two-character synonyms in Chinese reading}, author = {Mengyan Zhu and Xiangling Zhuang and Guojie Ma}, doi = {10.1007/s11145-020-10092-8}, year = {2020}, date = {2020-01-01}, journal = {Reading and Writing}, pages = {1--18}, publisher = {Springer Netherlands}, abstract = {In Chinese reading, the possibility and mechanism of semantic parafoveal processing has long been debated. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading, and the implications for modeling eye movement control are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In Chinese reading, the possibility and mechanism of semantic parafoveal processing has long been debated. To advance the topic, “semantic preview benefit” in Chinese reading was reexamined, with a specific focus on how it is affected by the semantic relatedness between preview and target words at the two-character word level. Eighty critical two-character words were selected as target words. Reading tasks with gaze-contingent boundary paradigms were used to study whether different semantic-relatedness preview conditions influenced parafoveal processing. The data showed that synonyms (the most closely related preview) produced significant preview benefit compared with the semantic-related (non-synonyms) condition, even when plausibility was controlled. This result indicates that the larger extent of semantic preview benefit is mainly caused by the larger semantic relatedness between preview and target words. Moreover, plausibility is not the only cause of semantic preview benefit in Chinese reading. These findings improve the current understanding of the mechanism of parafoveal processing in Chinese reading, and the implications for modeling eye movement control are discussed. |
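The gaze-contingent boundary paradigm used in the study above works by swapping a preview string for the target word the moment the reader's gaze crosses an invisible boundary. A minimal sketch of that trial logic follows; get_gaze_x, draw_sentence, and the pixel coordinates are hypothetical placeholders for eye-tracker and display calls, not any specific toolkit's API.

```python
# A minimal sketch of boundary-paradigm trial logic, with hypothetical I/O hooks.
def run_boundary_trial(pre, post, preview, target, boundary_x, end_x,
                       get_gaze_x, draw_sentence):
    """pre/post: sentence text before and after the target slot.
    The preview occupies the slot until gaze crosses boundary_x."""
    shown = preview
    while True:
        gaze_x = get_gaze_x()                  # latest gaze sample, in pixels
        if shown is preview and gaze_x >= boundary_x:
            shown = target                     # display change, ideally mid-saccade
        draw_sentence(pre + shown + post)
        if gaze_x > end_x:                     # reader has finished the sentence
            return shown is target             # True if the swap occurred
```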
Maryam Ziaei; William von Hippel; Julie D Henry; Stefanie I Becker Are age effects in positivity influenced by the valence of distractors? Journal Article PLoS ONE, 10 (9), pp. e0137604, 2015. @article{Ziaei2015, title = {Are age effects in positivity influenced by the valence of distractors?}, author = {Maryam Ziaei and William von Hippel and Julie D Henry and Stefanie I Becker}, doi = {10.1371/journal.pone.0137604}, year = {2015}, date = {2015-01-01}, journal = {PLoS ONE}, volume = {10}, number = {9}, pages = {e0137604}, abstract = {An age-related 'positivity' effect has been identified, in which older adults show an information-processing bias towards positive emotional items in attention and memory. In the present study, we examined this positivity bias by using a novel paradigm in which emotional and neutral distractors were presented along with emotionally valenced targets. Thirty-five older and 37 younger adults were asked during encoding to attend to emotional targets paired with distractors that were either neutral or opposite in valence to the target. Pupillary responses were recorded during initial encoding as well as a later incidental recognition task. Memory and pupillary responses for negative items were not affected by the valence of distractors, suggesting that positive distractors did not automatically attract older adults' attention while they were encoding negative targets. Additionally, the pupil dilation to negative items mediated the relation between age and positivity in memory. Overall, memory and pupillary responses provide converging support for a cognitive control account of positivity effects in late adulthood and suggest a link between attentional processes and the memory positivity effect.}, keywords = {}, pubstate = {published}, tppubtype = {article} } An age-related 'positivity' effect has been identified, in which older adults show an information-processing bias towards positive emotional items in attention and memory. In the present study, we examined this positivity bias by using a novel paradigm in which emotional and neutral distractors were presented along with emotionally valenced targets. Thirty-five older and 37 younger adults were asked during encoding to attend to emotional targets paired with distractors that were either neutral or opposite in valence to the target. Pupillary responses were recorded during initial encoding as well as a later incidental recognition task. Memory and pupillary responses for negative items were not affected by the valence of distractors, suggesting that positive distractors did not automatically attract older adults' attention while they were encoding negative targets. Additionally, the pupil dilation to negative items mediated the relation between age and positivity in memory. Overall, memory and pupillary responses provide converging support for a cognitive control account of positivity effects in late adulthood and suggest a link between attentional processes and the memory positivity effect. |
Imme C Zillekens; Marie Luise Brandi; Juha M Lahnakoski; Atesh Koul; Valeria Manera; Cristina Becchio; Leonhard Schilbach Increased functional coupling of the left amygdala and medial prefrontal cortex during the perception of communicative point-light stimuli Journal Article Social Cognitive and Affective Neuroscience, 14 (1), pp. 97–107, 2019. @article{Zillekens2019, title = {Increased functional coupling of the left amygdala and medial prefrontal cortex during the perception of communicative point-light stimuli}, author = {Imme C Zillekens and Marie Luise Brandi and Juha M Lahnakoski and Atesh Koul and Valeria Manera and Cristina Becchio and Leonhard Schilbach}, doi = {10.1093/scan/nsy105}, year = {2019}, date = {2019-01-01}, journal = {Social Cognitive and Affective Neuroscience}, volume = {14}, number = {1}, pages = {97--107}, abstract = {Interpersonal predictive coding (IPPC) describes the behavioral phenomenon whereby seeing a communicative rather than an individual action helps to discern a masked second agent. As little is yet known about the neural correlates of IPPC, we conducted a functional magnetic resonance imaging study in a group of 27 healthy participants using point-light displays of moving agents embedded in distractors. We discovered that seeing communicative compared to individual actions was associated with higher activation of right superior frontal gyrus, whereas the reversed contrast elicited increased neural activation in an action observation network that was activated during all trials. Our findings, therefore, potentially indicate the formation of action predictions and a reduced demand for executive control in response to communicative actions. Further, in a regression analysis, we revealed that increased perceptual sensitivity was associated with a deactivation of the left amygdala during the perceptual task. A consecutive psychophysiological interaction analysis showed increased connectivity of the amygdala with medial prefrontal cortex in the context of communicative compared to individual actions. Thus, whereas increased amygdala signaling might interfere with task-relevant processes, increased co-activation of the amygdala and the medial prefrontal cortex in a communicative context might represent the integration of mentalizing computations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Interpersonal predictive coding (IPPC) describes the behavioral phenomenon whereby seeing a communicative rather than an individual action helps to discern a masked second agent. As little is yet known about the neural correlates of IPPC, we conducted a functional magnetic resonance imaging study in a group of 27 healthy participants using point-light displays of moving agents embedded in distractors. We discovered that seeing communicative compared to individual actions was associated with higher activation of right superior frontal gyrus, whereas the reversed contrast elicited increased neural activation in an action observation network that was activated during all trials. Our findings, therefore, potentially indicate the formation of action predictions and a reduced demand for executive control in response to communicative actions. Further, in a regression analysis, we revealed that increased perceptual sensitivity was associated with a deactivation of the left amygdala during the perceptual task.
A consecutive psychophysiological interaction analysis showed increased connectivity of the amygdala with medial prefrontal cortex in the context of communicative compared to individual actions. Thus, whereas increased amygdala signaling might interfere with task-relevant processes, increased co-activation of the amygdala and the medial prefrontal cortex in a communicative context might represent the integration of mentalizing computations. |
Ulrike Zimmer; Margit Höfler; Karl Koschutnig; Anja Ischebeck Neuronal interactions in areas of spatial attention reflect avoidance of disgust, but orienting to danger Journal Article NeuroImage, 134 , pp. 94–104, 2016. @article{Zimmer2016, title = {Neuronal interactions in areas of spatial attention reflect avoidance of disgust, but orienting to danger}, author = {Ulrike Zimmer and Margit Höfler and Karl Koschutnig and Anja Ischebeck}, doi = {10.1016/j.neuroimage.2016.03.050}, year = {2016}, date = {2016-01-01}, journal = {NeuroImage}, volume = {134}, pages = {94--104}, abstract = {For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) which was located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear, which indicated a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds.}, keywords = {}, pubstate = {published}, tppubtype = {article} } For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) which was located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear, which indicated a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds. |
Eckart Zimmermann; Markus Lappe Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades Journal Article Journal of Neuroscience, 29 (35), pp. 11055–11064, 2009. @article{Zimmermann2009, title = {Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades}, author = {Eckart Zimmermann and Markus Lappe}, doi = {10.1523/JNEUROSCI.1604-09.2009}, year = {2009}, date = {2009-01-01}, journal = {Journal of Neuroscience}, volume = {29}, number = {35}, pages = {11055--11064}, abstract = {When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain.}, keywords = {}, pubstate = {published}, tppubtype = {article} } When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades induced in addition also mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters.
After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain. |
Eckart Zimmermann; Markus Lappe Motor signals in visual localization Journal Article Journal of Vision, 10 (6), pp. 1–11, 2010. @article{Zimmermann2010, title = {Motor signals in visual localization}, author = {Eckart Zimmermann and Markus Lappe}, doi = {10.1167/10.6.2}, year = {2010}, date = {2010-01-01}, journal = {Journal of Vision}, volume = {10}, number = {6}, pages = {1--11}, abstract = {We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input. |
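In the two entries above, the amplitude change is induced with an intrasaccadic target step: saccade onset is detected online, typically from eye velocity, and the target is displaced before the eye lands. The sketch below illustrates that logic; the 30 deg/s threshold, the gain value, and the eye_velocity/move_target wrappers are assumptions for illustration, not any specific study's parameters.

```python
# A minimal sketch of an intrasaccadic target step, with hypothetical hardware hooks.
VELOCITY_THRESHOLD = 30.0  # deg/s; a commonly used saccade-onset criterion

def adaptation_step(target_x, start_x, gain, eye_velocity, move_target):
    """Step the target once the saccade is in flight.
    gain < 1 shortens saccades (inward adaptation); gain > 1 lengthens them."""
    while eye_velocity() < VELOCITY_THRESHOLD:
        pass                                    # wait for saccade onset
    adapted_x = start_x + gain * (target_x - start_x)
    move_target(adapted_x)                      # the step completes before landing

# e.g. adaptation_step(10.0, 0.0, 0.7, eye_velocity, move_target) steps a
# 10 deg target back to 7 deg during the movement, driving inward adaptation.
```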
Eckart Zimmermann; David C Burr; Concetta M Morrone Spatiotopic visual maps revealed by saccadic adaptation in humans Journal Article Current Biology, 21 (16), pp. 1380–1384, 2011. @article{Zimmermann2011, title = {Spatiotopic visual maps revealed by saccadic adaptation in humans}, author = {Eckart Zimmermann and David C Burr and Concetta M Morrone}, doi = {10.1016/j.cub.2011.06.014}, year = {2011}, date = {2011-01-01}, journal = {Current Biology}, volume = {21}, number = {16}, pages = {1380--1384}, publisher = {Elsevier Ltd}, abstract = {Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Saccadic adaptation [1] is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system [2, 3], but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error [4-7]. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals. |
Eckart Zimmermann; Concetta M Morrone; David C Burr Visual motion distorts visual and motor space Journal Article Journal of Vision, 12 (2), pp. 10–10, 2012. @article{Zimmermann2012, title = {Visual motion distorts visual and motor space}, author = {Eckart Zimmermann and Concetta M Morrone and David C Burr}, doi = {10.1167/12.2.10}, year = {2012}, date = {2012-01-01}, journal = {Journal of Vision}, volume = {12}, number = {2}, pages = {10--10}, abstract = {Much evidence suggests that visual motion can cause severe distortions in the perception of spatial position. In this study, we show that visual motion also distorts saccadic eye movements. Landing positions of saccades performed to objects presented in the vicinity of visual motion were biased in the direction of motion. The targeting errors for both saccades and perceptual reports were maximum during motion onset and were of very similar magnitude under the two conditions. These results suggest that visual motion affects a representation of spatial position, or spatial map, in a similar fashion for visuomotor action as for perception.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Much evidence suggests that visual motion can cause severe distortions in the perception of spatial position. In this study, we show that visual motion also distorts saccadic eye movements. Landing positions of saccades performed to objects presented in the vicinity of visual motion were biased in the direction of motion. The targeting errors for both saccades and perceptual reports were maximum during motion onset and were of very similar magnitude under the two conditions. These results suggest that visual motion affects a representation of spatial position, or spatial map, in a similar fashion for visuomotor action as for perception. |
Eckart Zimmermann The reference frames in saccade adaptation Journal Article Journal of Neurophysiology, 109 , pp. 1815, 2013. @article{Zimmermann2013a, title = {The reference frames in saccade adaptation}, author = {Eckart Zimmermann}, doi = {10.1152/jn.00743.2012}, year = {2013}, date = {2013-01-01}, journal = {Journal of Neurophysiology}, volume = {109}, pages = {1815}, abstract = {Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Saccade adaptation is a mechanism that adjusts saccade landing positions if they systematically fail to reach their intended target. In the laboratory, saccades can be shortened or lengthened if the saccade target is displaced during execution of the saccade. In this study, saccades were performed from different positions to an adapted saccade target to dissociate adaptation to a spatiotopic position in external space from a combined retinotopic and spatiotopic coding. The presentation duration of the saccade target before saccade execution was systematically varied, during adaptation and during test trials, with a delayed saccade paradigm. Spatiotopic shifts in landing positions depended on a certain preview duration of the target before saccade execution. When saccades were performed immediately to a suddenly appearing target, no spatiotopic adaptation was observed. These results suggest that a spatiotopic representation of the visual target signal builds up as a function of the duration the saccade target is visible before saccade execution. Different coordinate frames might also explain the separate adaptability of reactive and voluntary saccades. Spatiotopic effects were found only in outward adaptation but not in inward adaptation, which is consistent with the idea that outward adaptation takes place at the level of the visual target representation, whereas inward adaptation is achieved at a purely motor level. |
Eckart Zimmermann; Concetta M Morrone; David C Burr Spatial position information accumulates steadily over time Journal Article Journal of Neuroscience, 33 (47), pp. 18396–18401, 2013. @article{Zimmermann2013b, title = {Spatial position information accumulates steadily over time}, author = {Eckart Zimmermann and Concetta M Morrone and David C Burr}, doi = {10.1523/JNEUROSCI.1864-13.2013}, year = {2013}, date = {2013-01-01}, journal = {Journal of Neuroscience}, volume = {33}, number = {47}, pages = {18396--18401}, abstract = {One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked.}, keywords = {}, pubstate = {published}, tppubtype = {article} } One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the "saccadic suppression of displacement" paradigm are a result of the fact that the target has had insufficient time to be encoded in memory, and not a result of the action of special mechanisms conferring saccadic stability. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked. |
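Displacement sensitivity in the "saccadic suppression of displacement" paradigm described above is commonly summarized by fitting a psychometric function to the proportion of "rightward" reports as a function of the true intrasaccadic step; the fitted sigma (the inverse slope) serves as the discrimination threshold. A minimal sketch with made-up data, assuming a cumulative-Gaussian model:

```python
# A minimal psychometric-fit sketch; the step sizes and proportions are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

steps = np.array([-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])    # step size, deg
p_right = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])  # "rightward" reports

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu is response bias, sigma the discrimination threshold.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, steps, p_right, p0=(0.0, 0.5))
print(f"bias = {mu:.2f} deg, threshold (sigma) = {sigma:.2f} deg")
```

On this account, the longer target previews reported above would show up as a smaller fitted sigma, i.e., better displacement discrimination.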
Eckart Zimmermann; S Born; Gereon R Fink; P Cavanagh Masking produces compression of space and time in the absence of eye movements Journal Article Journal of Neurophysiology, 112 (12), pp. 3066–3076, 2014. @article{Zimmermann2014a, title = {Masking produces compression of space and time in the absence of eye movements}, author = {Eckart Zimmermann and S Born and Gereon R Fink and P Cavanagh}, doi = {10.1152/jn.00156.2014}, year = {2014}, date = {2014-01-01}, journal = {Journal of Neurophysiology}, volume = {112}, number = {12}, pages = {3066--3076}, abstract = {Whenever the visual stream is abruptly disturbed by eye movements, blinks, masks, or flashes of light, the visual system needs to retrieve the new locations of current targets and to reconstruct the timing of events to straddle the interruption. This process may introduce position and timing errors. We here report that very similar errors are seen in human subjects across three different paradigms when disturbances are caused by either eye movements, as is well known, or, as we now show, masking. We suggest that the characteristic effects of eye movements on position and time, spatial and temporal compression and saccadic suppression of displacement, are consequences of the interruption and the subsequent reconnection and are seen also when visual input is masked without any eye movements. Our data show that compression and suppression effects are not solely a product of ocular motor activity but instead can be properties of a correspondence process that links the targets of interest across interruptions in visual input, no matter what their source.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Whenever the visual stream is abruptly disturbed by eye movements, blinks, masks, or flashes of light, the visual system needs to retrieve the new locations of current targets and to reconstruct the timing of events to straddle the interruption. This process may introduce position and timing errors. We here report that very similar errors are seen in human subjects across three different paradigms when disturbances are caused by either eye movements, as is well known, or, as we now show, masking. We suggest that the characteristic effects of eye movements on position and time, spatial and temporal compression and saccadic suppression of displacement, are consequences of the interruption and the subsequent reconnection and are seen also when visual input is masked without any eye movements. Our data show that compression and suppression effects are not solely a product of ocular motor activity but instead can be properties of a correspondence process that links the targets of interest across interruptions in visual input, no matter what their source. |
Eckart Zimmermann; Concetta M Morrone; David C Burr The visual component to saccadic compression Journal Article Journal of Vision, 14 (12), pp. 13–, 2014. @article{Zimmermann2014b, title = {The visual component to saccadic compression}, author = {Eckart Zimmermann and Concetta M Morrone and David C Burr}, doi = {10.1167/14.12.13}, year = {2014}, date = {2014-01-01}, journal = {Journal of Vision}, volume = {14}, number = {12}, pages = {13--}, abstract = {Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after. |
All EyeLink publications are listed above, sorted alphabetically by first author. You can search by keyword, such as Visual Search, Smooth Pursuit, or Parkinson's, and you can also search by author name. To browse EyeLink publications in a particular research area, please see the corresponding solutions pages. If you notice an EyeLink publication we have missed, please let us know by email!