Early Language Mapping: How Infants Learn Pronunciation

Why do Americans struggle to differentiate between the “shee” (“west”) and “chee” (“wife”) sounds in Mandarin?

Why do the Japanese struggle with the “l” and “r” sounds in “lake” and “rake”?

University of Washington speech professor Patricia Kuhl has the answer.

Map-Building

Having studied early language development for nearly three decades, Kuhl has a better understanding than most of how and when pronunciation and accents develop.

Before a baby even speaks her first word, a pattern of speaking has formed in the brain, based on her primary caregiver’s speech.

Studying American, Japanese, Swedish, and Russian infants, Kuhl found that children between 6 and 8 months old clearly recognize the vowel and consonant sounds of both native and foreign languages.

That means an American infant can recognize and respond to the difference between “shee” and “chee,” while a Japanese infant will differentiate between “l” and “r” just as easily as an American.

Head-Turn Study

Kuhl used a “head-turn” study to identify whether infants could recognize these sounds.

While distracting an infant with a toy, the speaker would repeat a sound over and over – “la, la, la,” for instance.

The infant would continue watching the toy until she heard a different sound mixed in – “la, la, ra” – at which point the toy would light up.

In anticipation of the reward, two-thirds of both Japanese and American 6- to 8-month-old infants would turn to look at the toy when the sound changed.

That across-the-board sensitivity was lost by the time the children reached one year.

Tested with the same sounds at one year, only a little over half of the Japanese infants would turn to look at the toy, while nearly four-fifths of the Americans would.

The study concluded that this is when native sounds become the baby’s norm.
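
For readers who think in code, the logic of a head-turn trial can be sketched in a few lines. This is a toy simulation, not Kuhl’s actual protocol; the detection rates are rough placeholders taken from the proportions described above.

```python
import random

# Rough detection rates from the findings described above: at 6-8 months
# both groups notice the change about two-thirds of the time; by 12
# months the groups have diverged.
DETECTION_RATE = {
    ("american", 7): 0.66, ("japanese", 7): 0.66,
    ("american", 12): 0.79, ("japanese", 12): 0.55,
}

def head_turn_trial(group: str, age_months: int, repeats: int = 5) -> bool:
    """Repeat a background syllable, slip in a contrast syllable, and
    return True if the simulated infant turns toward the toy."""
    sounds = ["la"] * repeats + ["ra"]          # "la, la, ..., ra"
    for sound in sounds:
        if sound != "la":                        # the sound has changed
            return random.random() < DETECTION_RATE[(group, age_months)]
    return False

# Averaging many simulated trials recovers the reported proportions:
for group, age in DETECTION_RATE:
    turns = sum(head_turn_trial(group, age) for _ in range(10_000))
    print(f"{group:>8} at {age:>2} months: {turns / 10_000:.0%} turned")
```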

Magnet Effect

A Smithsonian article by Edwin Kiester, Jr., throws this map-building into further relief, with Kuhl describing the mapping of the baby’s language brain:

“The baby early begins to draw a kind of map of the sounds he hears. That map continues to develop and strengthen as the sounds are repeated. The sounds not heard, the synapses not used, are bypassed and pruned from the brain’s network. Eventually the sounds and accent of the language become automatic.”

A “magnet effect” further maps the native language, as prototypical sounds are absorbed and interpreted as native, while foreign sounds are discarded as “interference.” 
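
One way to picture the magnet effect is as prototypes pulling nearby percepts toward themselves. The sketch below is a toy, one-dimensional illustration of that idea; the “acoustic axis,” prototype positions, and pull strength are invented for demonstration and are not Kuhl’s model.

```python
# Toy "magnet effect": each incoming sound is pulled toward the nearest
# native-language prototype, so a foreign contrast that straddles a
# single prototype collapses into one perceived category.

PULL = 0.6  # how strongly a prototype attracts nearby percepts (made up)

def perceive(position: float, prototypes: dict) -> tuple:
    """Label an incoming sound and warp it toward the nearest prototype."""
    label, proto = min(prototypes.items(), key=lambda kv: abs(kv[1] - position))
    return label, position + PULL * (proto - position)

# An English-like map keeps /r/ and /l/ apart...
english_map = {"r": 0.2, "l": 0.8}
print(perceive(0.35, english_map))   # ('r', 0.26)
print(perceive(0.65, english_map))   # ('l', 0.74)

# ...while a map with a single prototype in that region hears both
# sounds as the same category: "interference" filtered into the norm.
single_map = {"r/l": 0.5}
print(perceive(0.35, single_map))    # ('r/l', 0.44)
print(perceive(0.65, single_map))    # ('r/l', 0.56)
```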

And what of infants born in bilingual households?

Those infant brains simply draw multiple maps, one per language, a task made easier when each language is spoken with the distinct pitch, tone, and pronunciation of a particular caregiver.

This is why foreign languages are difficult to learn in adulthood: your language brain was mapped long ago, and it’s a struggle to tune into sounds that your brain’s wiring perceives as “interference.”

But this does not mean it’s impossible.

We’ll talk about the possibility next week.

The Myth of Spanish King Ferdinand, the Lisping King & the True Gene-Culture Coevolution of Speech

There is a common myth in Spain that King Ferdinand was born with a lisp.

As the story goes, this speech impediment led to the Spanish pronunciation of “z” and “c” with the soft “th” sound, as Ferdinand’s courtiers imitated his lisp.

This Castilian pronunciation of “z” as “th” differs from the “z” pronounced as “s” in the Spanish-speaking countries of the Western Hemisphere.

In reality, the “s” sound exists in Castilian Spanish; it is simply not applied to “z,” or to “c” when followed by “i” or “e.”
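
To make the rule concrete, here is a toy sketch of how the two systems map those letters to sounds. It is deliberately simplified (real Spanish phonology has many more cases), and the function name is ours, not a standard one.

```python
def z_c_sound(word: str, i: int, dialect: str) -> str:
    """Return the sound of word[i] under the one rule discussed here:
    'z', and 'c' before 'e' or 'i', is "th" in Castilian ("distinción")
    but "s" in seseo dialects. Other letters fall outside the rule."""
    ch = word[i].lower()
    nxt = word[i + 1].lower() if i + 1 < len(word) else ""
    if ch == "z" or (ch == "c" and nxt in ("e", "i")):
        return "th" if dialect == "castilian" else "s"
    return "(rule does not apply)"

for word in ("zapato", "cielo", "cena", "casa"):
    print(word, "->", z_c_sound(word, 0, "castilian"), "/", z_c_sound(word, 0, "seseo"))
# zapato, cielo, and cena take "th" in Castilian and "s" in seseo;
# casa is unaffected: a hard 'c' sounds like "k" everywhere.
```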

It follows that the differences in pronunciation across Spanish-speaking cultures are due not to a lisping king, but to the natural regional differences that develop in living languages.

In the same way that American pronunciation of English varies from British pronunciation, such peculiarities of living languages emerge across many groups, regions, and countries.

While King Ferdinand’s story is nothing but an urban legend, culture and genetics really do work together to create physiological differences related to speech.

Here’s how.

Genes & Culture Interact

Herbert Gintis’s paper, “Gene–culture coevolution and the nature of human sociality,” defines gene-culture coevolution theory as follows:

“Gene–culture coevolution is the application of sociobiology, the general theory of the social organization of biological species, to humans—a species that transmits culture in a manner that leads to quantitative growth across generations.”

Cultural differences have produced changes in brain size, body size, and other aspects of human anatomy across the human species.

Last week, we talked about how genes and culture worked together to alter our diet – specifically, our ability to consume milk products – and how that ability varies across cultures according to their cultural history.

In the same way, gene-culture coevolution has symbiotically shaped human speech and communication.

Speech & Communication

Gintis goes on to explain how gene-culture coevolution is readily apparent in the physiological evolution of human speech and facial communication.

He writes that genetic alterations that improve speech are propagated due to the increasing importance human society places on communication. 

In early humans, speech production was facilitated by the evolution of dedicated regions in the motor cortex, along with adaptations of the muscles and nerves of the tongue, larynx, and mouth that help produce speech.

Other physical attributes that have adapted over time in humans to improve speech include a low larynx in the throat, a shorter oral cavity, and an enlarged hypoglossal canal (the channel for the nerve that controls the tongue), all of which help produce the sounds of speech.

Wernicke’s and Broca’s areas in the cerebral cortex are either absent or very small in other primates; in humans they are large, enabling the comprehension and production of speech.

Human facial musculature is also more highly developed, allowing the eyes and lips to impart nonverbal communication.

Considering the development of these speech-enabling attributes in humans, you can see that genes and culture have worked closely together in the evolution of the human species.

Next week, we’ll talk about how these physiological aspects of speech differ across cultures.

Sociolinguistics: How Do Languages Change Across Cultures?

Cross-cultural barriers.

That’s what you’re facing when ethnocentricity enters into international communication.

You’ll run into every communication barrier imaginable; the variables include:

  • Language itself
  • Nonverbal communication norms
  • Authority ranks
  • Technological environment
  • Social environment
  • Natural environment

Understanding the cultures with which you are working and studying up on these variables will help you combat your own innate ethnocentricity, allowing cross-cultural communication to go far more smoothly.

Let’s take a look at how these misunderstandings arise.

Linguistic Misunderstandings

It goes without saying that language is paramount to communication.

But when you work cross-culturally, you may not speak the same language, which means you and your counterpart will be relying on translators to assist communication.

Hiring a good translator can make or break communication, especially considering that, even without a language barrier per se, linguistic misunderstandings can still occur.

Take American versus British English, for instance.

Both cultures speak English, with minor differences in vocabulary, so you might assume communication would be cut and dried. But the culturally grounded differences in vocabulary, phrasing, and accent have the potential to throw a wrench into communication.

Sociolinguistics

Enter sociolinguistics.

Sociolinguistic differences create rifts in cross-cultural communication via the social patterning of language, which sometimes distinguishes class, inflates stereotypes, or highlights other national prejudices.

In fact, the differences between American and British English stem from class distinction itself.

In the 16th and 17th centuries, the British exported the English language to America.

Those who settled in America pronounced the ‘r’ in words, something known as “rhotic speech.”

Meanwhile, in the UK, the upper classes began softening their ‘r’s to distinguish themselves from the commoners. But the distinction didn’t stay exclusive for long: the masses naturally followed, making non-rhotic speech the British norm and creating a profound difference in pronunciation between British and American English.

The change in spelling and vocabulary was more intentional.

With no standardized spelling at the time, dictionaries were needed to codify how words were written and pronounced.

Those in the UK were created by scholars in London, while those in the US were compiled by the lexicographer Noah Webster.

According to some, Webster changed the way American words were spelled (dropping the ‘u’ in ‘colour,’ for instance) in order to establish cultural independence from the motherland, thus creating further differences in the English language across the two cultures.
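
A few of Webster’s best-known reforms are regular enough to sketch as rules. The snippet below is illustrative only: real British/American spelling differences are messier, and these toy patterns would mangle plenty of words outside this short list.

```python
import re

# Toy versions of three Webster-style spelling reforms.
BRITISH_TO_AMERICAN = [
    (re.compile(r"our\b"), "or"),    # colour -> color, honour -> honor
    (re.compile(r"tre\b"), "ter"),   # centre -> center, theatre -> theater
    (re.compile(r"ise\b"), "ize"),   # organise -> organize
]

def americanize(word: str) -> str:
    """Apply the toy reform rules to one word (no exceptions handled)."""
    for pattern, replacement in BRITISH_TO_AMERICAN:
        word = pattern.sub(replacement, word)
    return word

for w in ("colour", "centre", "organise", "wrench"):
    print(f"{w} -> {americanize(w)}")
# colour -> color, centre -> center, organise -> organize, wrench -> wrench
```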

Minor Details are of Major Importance

Minor details are crucial when it comes to business negotiations, and the fine print can be blurred by minor differences in language.

The more minor the detail, the more difficult it is to correct.

For instance, you can spot a major translation error from a mile away. Although correcting such errors may consume a lot of time, look unprofessional, and put stress on negotiations, at least they’re easy to catch.

However, accents, dialects, and cultural language choices can strain international negotiations even between two cultures that are, more or less, linguistically on the same page.

We’ll talk more about this next week.