Language study offers a new twist on the mind-body connection


New research from Northeastern University psychology professor Iris Berent and her colleagues indicates that language and motor systems are intimately linked, though not in the way that has been widely believed.

Spoken languages express words through sound patterns, some of which are preferred over others. For example, the sound pattern "blog" is preferred over "lbog" in English as well as in many other languages. The researchers wanted to know what explains these preferences: specifically, whether they reflect abstract rules of language in the brain, or whether, upon hearing speech, people try to simulate how those sounds would be produced by the speech motor system.

Their findings support previous research indicating a link between language cognition and the motor system, but the connection is different from what was previously assumed. The motor system does not directly determine language preferences, they found. Rather, abstract linguistic rules guide those preferences, and the rules can in turn trigger motor action. In other words, motor action is a consequence, not the cause, of language preference.

Sound patterns like "blog" are preferred over ones like "lbog" not because they are easier to produce, Berent said. Rather, these syllables are preferred because they conform to linguistic rules, and for that reason they tend to activate the motor system.

Additionally, Berent said these findings could have implications for studying language-related disorders that are linked to the motor system. One such area is dyslexia, which Berent has studied for years.

“This has huge theoretical implications,” said Berent, a cognitive scientist whose research examines the nature of linguistic competence. “The idea that linguistic knowledge is fully embodied in motor action is a hot topic in neuroscience right now. Our study shows that motor action is still very important in language processing, but we show a new twist on the mind-body connection.”

The research was published Monday afternoon in the journal Proceedings of the National Academy of Sciences. Among Berent’s collaborators was Alvaro Pascual-Leone, an internationally renowned neurologist at Beth Israel Deaconess Medical Center in Boston and Harvard Medical School, whose expertise in transcranial magnetic stimulation, or TMS, played a key role in the research. Xu Zhao, PhD’15, a doctoral candidate in Northeastern’s Department of Psychology, and other researchers affiliated with Beth Israel Deaconess Medical Center, Harvard Medical School, Brigham and Women’s Hospital, and the University of Oxford co-authored the article.

Albert Galaburda, co-author of the paper and a leading neurologist at BIDMC, said: “This study helps resolve a long-standing debate in the literature: which aspects of speech depend on experience, and which depend on grammatical rules that are relatively independent of experience, some sort of logical system? Since my primary interest is in language-based learning disabilities, specifically dyslexia, this question can be reframed to ask whether dyslexics have a primary disorder of grammar or a primary disorder of language experience, such as poor perception of the speech reaching their ears as babies.”

The researchers’ findings are based on a study in which they sought to assess English-speaking adults’ sensitivity to syllable structure. Across languages, syllables like “blif” are more common than “lbif,” and previous research from Berent’s lab found that syllables like “blif” are also easier to process, suggesting that these syllables are preferred. The researchers sought to uncover the reason for this preference: do malformed syllables like “lbif” violate abstract rules, or do people have difficulty processing them because these syllables are difficult to produce?

To examine this question, the researchers used TMS, a noninvasive technique that induces focal cortical current through electromagnetic induction to temporarily inhibit specific regions of the brain. The aim was to find out whether disrupting the motor regions of the participants’ lips using TMS would eliminate the preference for “blif”.

In the experiment, participants were presented with an auditory stimulus – either a monosyllable or a disyllable, e.g., “blif” or “belif” – and asked to indicate whether that stimulus comprised one or two syllables. Two hundred milliseconds before they heard the sound, TMS pulses were administered to temporarily disrupt the motor region controlling the lips. The critical comparison was between well-formed syllables (e.g., “blif”) and ill-formed syllables (e.g., “lbif”). The researchers asked whether disrupting the motor system would eliminate the disadvantage of “lbif”: if people disfavor “lbif” because the pattern is hard to articulate, then syllables like “lbif” should be more susceptible to TMS, and once people receive TMS, their dislike of “lbif” should be mitigated.

They found that the TMS pulses interfered with participants’ ability to accurately determine the number of syllables. However, the results ran against the motor-simulation account. Malformed syllables like “lbif” were the least likely to be impaired by TMS, and a subsequent functional MRI experiment revealed that these syllables were also the least likely to engage the brain’s motor area for the lip muscles.

The results show that speech perception automatically engages the articulatory motor system, but language preferences persist even when the speech motor system is disrupted. This suggests that, despite their intimate links, the language and motor systems are distinct.

“Language is designed to optimize motor action, but knowledge of it consists of disembodied and potentially abstract principles,” the researchers conclude.
