A study published today showed something quite remarkable: voices and words leave a definite, readable signature in the brain. The signature is so clear, in fact, that it is possible to consistently determine, from the neural pattern alone, whose voice the brain is listening to, and even what was said, although this initial research only uses very simple vocalisations.
As we evolved with language as an oral/aural phenomenon, it makes good sense to assume that the oral/aural language-processing system is highly developed, and that reading without aural input is perhaps not as well modelled by the brain.
My guess is that the brain has to translate the written word into an oral/aural equivalent. With a foreign language, where the brain has nothing to work with, this is, to my mind, simply asking for trouble: by not teaching a language orally, one is quite probably placing an obstacle in the student's path.
Thus, the linguistic patterns of a foreign language would also appear in the brain, and one could perhaps assume that the brain 'remembers' these patterns and uses them to construct the language and its rules. After all, it is quite possible to learn a language fluently, given enough oral and aural input, without ever having been taught a single grammatical rule, and the brain needs a store of information to work on to achieve this feat. What is fascinating is that we can now actually 'see' this information as it is encoded in the brain, albeit only in very rough outline at this stage.
Scientists from Maastricht University have developed a method to look into the brain of a person and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques the researchers mapped the brain activity associated with the recognition of speech sounds and voices.
In their Science article "'Who' is Saying 'What'? Brain-Based Decoding of Human Voice and Speech," the four authors demonstrate that speech sounds and voices can be identified by means of a unique 'neural fingerprint' in the listener's brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.
Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging techniques (fMRI). With the help of data mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that determine the identity of a speech sound or a voice. The acoustic characteristics of the vocal cord vibrations were found to be reflected in these neural patterns of brain activity.
Just like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change if uttered by somebody else and a speaker's fingerprint remains the same, even if this person says something different.
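The press release does not describe the decoding algorithm itself, but the 'fingerprint' idea can be illustrated with a simple pattern classifier of the kind commonly used in fMRI decoding. The sketch below is a toy simulation, not the authors' method: the voxel counts, trial counts, signature patterns, and noise levels are all invented. It builds a mean ('fingerprint') pattern per vowel from two simulated speakers, then tests whether vowel identity can still be decoded from a third, unseen speaker, mirroring the claim that a speech sound's fingerprint does not change when somebody else utters it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 40  # hypothetical sizes, not taken from the study

# Hypothetical ground truth: each vowel and each speaker contributes its own
# spatial signature across voxels, and a trial is the sum of both plus noise.
vowels, speakers = ["a", "i", "u"], ["s1", "s2", "s3"]
vowel_sig = {v: rng.normal(0, 1, n_voxels) for v in vowels}
speaker_sig = {s: rng.normal(0, 1, n_voxels) for s in speakers}

def simulate_trial(v, s, noise=1.0):
    """One simulated fMRI response pattern for vowel v spoken by speaker s."""
    return vowel_sig[v] + speaker_sig[s] + rng.normal(0, noise, n_voxels)

def centroids(X, y):
    """Mean pattern per class: the 'neural fingerprint' of each label."""
    return {lab: X[y == lab].mean(axis=0) for lab in set(y)}

def predict(X, cents):
    """Nearest-centroid by correlation, a classic pattern-decoding scheme."""
    labs = list(cents)
    C = np.array([cents[lab] for lab in labs])
    # z-score each trial and each centroid, so the dot product tracks correlation
    Xz = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    Cz = (C - C.mean(1, keepdims=True)) / C.std(1, keepdims=True)
    scores = Xz @ Cz.T
    return np.array([labs[i] for i in scores.argmax(axis=1)])

# Train vowel fingerprints on speakers s1 and s2; test on unseen speaker s3.
# If the vowel fingerprint is speaker-invariant, accuracy stays above chance.
train = [(simulate_trial(v, s), v) for v in vowels
         for s in ["s1", "s2"] for _ in range(n_trials)]
test = [(simulate_trial(v, "s3"), v) for v in vowels for _ in range(n_trials)]

Xtr = np.array([x for x, _ in train]); ytr = np.array([y for _, y in train])
Xte = np.array([x for x, _ in test]);  yte = np.array([y for _, y in test])

acc = np.mean(predict(Xte, centroids(Xtr, ytr)) == yte)
print(f"cross-speaker vowel decoding accuracy: {acc:.2f}")  # chance level is 0.33
```

The same construction works the other way around: training speaker fingerprints across different vowels tests whether a voice can be recognised regardless of what is said.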
Moreover, this study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that processing sounds actively involves different regions of the brain according to a certain hierarchy: after simple processing in the auditory cortex, the more complex analysis (speech sounds into words) takes place in specialised regions of the brain. However, the findings from this study imply a less hierarchical processing of speech that is spread out more across the brain.
The research was partly funded by the Netherlands Organisation for Scientific Research (NWO): two of the four authors, Elia Formisano and Milene Bonte, carried out their research with NWO grants (Vidi and Veni). The data mining methods were developed during the PhD research of Federico De Martino (doctoral thesis defended at Maastricht University on 24 October 2008).