Abstract
BACKGROUND
How do listeners manage to recognize words in an unfamiliar language? The task is difficult because the speech signal is physically continuous, with no reliable silent pauses between words. Nevertheless, multiple cues can be exploited to locate word boundaries and segment the acoustic signal. In the present study, word stress was combined with statistical information and placed on different syllables within trisyllabic nonsense words to explore how these cues interact in an online word-segmentation task.
RESULTS
The behavioral results showed that words were segmented better when stress was placed on the final syllable than when it was placed on the first or middle syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which appeared to be sensitive to word stress and its location within words.
CONCLUSION
The results demonstrate that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word stress was identified: stressed syllables elicited larger P2 amplitudes than unstressed ones.