How can the machine understand the accents of people whose mother tongue is not English? How does it handle extreme differences in accent?
The speech recognizer has been trained and validated on many samples of English speech from non-native speakers representing more than 100 different native languages. Differences in accent are therefore accounted for by the acoustic models: the system's representations of words are built to expect a wide variety of accented forms of English. While no human listener is likely to be familiar with more than 100 foreign accents, the speech processor has been trained on more than 126 accents and can therefore handle each of them equally well. If a speaker's non-native pronunciation is so heavily accented that human examiners would also rate it low, that test taker will receive a low Pronunciation score from the machine, but this does not affect other scores such as Grammar or Vocabulary.
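To make the last point concrete, here is a minimal sketch in Python of how per-trait scoring can be kept independent. This is purely illustrative, not the actual scoring engine; all names, score ranges, and the toy log-likelihood mapping are hypothetical. The point it demonstrates is that the pronunciation subscore is derived from the acoustic evidence alone, so a heavy accent lowers it without touching the grammar or vocabulary subscores:

```python
from dataclasses import dataclass


@dataclass
class TraitScores:
    """Hypothetical per-trait subscores on a 0-100 scale."""
    pronunciation: float  # from the acoustic model
    grammar: float        # from analysis of the recognized words
    vocabulary: float     # from analysis of word choice and range


def pronunciation_score(acoustic_log_likelihood: float) -> float:
    """Toy stand-in: map an acoustic-model log-likelihood to 0-100."""
    return max(0.0, min(100.0, 50.0 + 10.0 * acoustic_log_likelihood))


def score_response(acoustic_log_likelihood: float,
                   grammar_score: float,
                   vocabulary_score: float) -> TraitScores:
    # Each subscore is computed independently: heavily accented audio
    # lowers only the pronunciation component, while grammar and
    # vocabulary depend on the recognized words, not how they sounded.
    return TraitScores(
        pronunciation=pronunciation_score(acoustic_log_likelihood),
        grammar=grammar_score,
        vocabulary=vocabulary_score,
    )


# A heavily accented but grammatical, lexically rich response:
print(score_response(acoustic_log_likelihood=-4.0,
                     grammar_score=85.0,
                     vocabulary_score=90.0))
# -> TraitScores(pronunciation=10.0, grammar=85.0, vocabulary=90.0)
```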