
Finding the seat of language?


Researchers look into Broca’s brain

A team of Harvard and University of
California, San Diego (UCSD), researchers report having pinpointed an area of
the brain where three essential components of language — word identification,
grammar, and word pronunciation — are processed.

Ned T. Sahin, a postdoctoral fellow in
both Harvard’s Psychology Department and UCSD’s Radiology Department, and
colleagues at both schools used a technique called intracranial
electrophysiology (ICE) to gain access, with unprecedented precision, to
Broca’s area, a region of the cerebral cortex long suspected to be the seat of
language.

Their results are reported in the journal
Science.

In the ICE method, “direct recordings of
brain signals are made using electrodes placed inside the human brain, allowing
extremely high spatial, temporal, and physiological resolution, concurrently,”
says Sahin. For years, neurobiologists had lacked the techniques to pinpoint
the areas of the brain responsible for language processing.

Because there are no animal models for
human language processing and because it would be ethically unacceptable to
insert electrodes into the brains of healthy human subjects, scientists have up
to now been stymied in their attempts to locate the precise area of the brain
responsible for processing the various components of language.

Sahin and his colleagues have done their
ICE studies using patient volunteers who were undergoing neurosurgery for
epilepsy.

“What captures my interest is being
able to record what sets of brain cells are saying to each other while the
person [in whose brain the cells reside] is saying something to me. It’s
amazing,” Sahin says. He points out, however, that it is not yet possible
to interpret what the brain cells are saying. Sahin likens the difficulty to
that of a French-speaking child listening to a conversation in Greek. “He knows
something important is being communicated, but he just can’t unlock it
yet.”

Sahin and his team listened to signals
from the “forest of neurons” required to compute language as they
came “yelling and screaming down” electrodes that passed through or
near Broca’s area of the brain. Each electrode recorded input from a collection
of about 10,000 cells. Careful analysis of the brain signals allows the
researchers to determine which signals come from the local area as opposed to
echoes from signals originating elsewhere.
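
The article does not spell out how that separation is done, but one textbook approach to the same problem on depth electrodes is bipolar re-referencing: subtracting neighboring contacts so that far-field activity common to both cancels, leaving what is generated locally. The sketch below illustrates only that general idea on synthetic data; the contact count, sampling rate, and signal frequencies are invented for the example and are not the study’s recordings or analysis pipeline.

```python
# Illustrative sketch only: bipolar re-referencing on synthetic depth-electrode
# data. All numbers here are assumptions for the example, not study parameters.
import numpy as np

rng = np.random.default_rng(0)
n_contacts, n_samples, fs = 8, 2000, 1000  # 8 contacts, 2 s at 1 kHz (assumed)
t = np.arange(n_samples) / fs

# A far-field "echo" from a distant source appears on every contact.
far_field = np.sin(2 * np.pi * 10 * t)

# Locally generated activity shows up on only one contact (here, contact 3).
local = np.zeros((n_contacts, n_samples))
local[3] = 0.5 * np.sin(2 * np.pi * 40 * t)

# Simulated recordings: shared echo + local activity + sensor noise.
recordings = far_field + local + 0.05 * rng.standard_normal((n_contacts, n_samples))

# Bipolar re-reference: differences of adjacent contacts cancel the shared
# far-field component and keep activity arising between the two contacts.
bipolar = np.diff(recordings, axis=0)
print(bipolar.shape)  # (7, 2000): one derived channel per adjacent pair
```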

With the electrodes in place, the
researchers asked patients to read words on a laptop computer screen while
lying in their hospital beds, connected to the recording computers. A word
like “come” appears first on the screen and is then used to complete a
sentence like “Yesterday they _____.” In this case, the participant would
need to supply the word “came.” “Subjects do not speak out loud, but rather
think the word to themselves, then press a button to signal completion,”
Sahin says.

Sahin’s task requires the patients to
recognize both frequent and infrequent words, to find the appropriate tense of
a verb or plural/singular form of a noun, and to change or retain the
pronunciation of the base form of the word (as presented). Patients repeat the
exercises in different combinations. “If the word is ‘house,’ for
instance, and the sentence requires the plural form, a patient’s brain must
compute the abstract concept we call plural, and additionally must prepare and
articulate the added syllable for the ‘-es’ ending,” Sahin explains. The
idea, he says, is to get a person either to repeat the words they see verbatim
or to transform them so they fit properly in context.
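
As a rough illustration of that design (and not the team’s actual stimulus software), the sketch below lays out a few hypothetical trials contrasting verbatim repetition with grammatical transformation. The sentence frames, condition labels, and items are invented for the example.

```python
# Hypothetical trial list illustrating the repeat-vs-transform design described
# above. Frames, labels, and items are assumptions, not the study's stimuli.
from dataclasses import dataclass

@dataclass
class Trial:
    cue_word: str       # word shown on the laptop screen, e.g. "come" or "house"
    frame: str          # sentence frame the subject completes silently
    condition: str      # "repeat" (verbatim) or "inflect" (transform for context)
    expected_form: str  # form the subject should think to themselves

trials = [
    Trial("come",  "Every day they _____.", "repeat",  "come"),
    Trial("come",  "Yesterday they _____.", "inflect", "came"),    # past tense
    Trial("house", "That is a _____.",      "repeat",  "house"),
    Trial("house", "Those are two _____.",  "inflect", "houses"),  # plural
]

for trial in trials:
    # In the experiment the subject thinks the completed word silently and then
    # presses a button, so the overt behavior recorded is just the button press.
    print(f"[{trial.condition:>7}] show '{trial.cue_word}' -> '{trial.frame}' "
          f"(covert response: '{trial.expected_form}')")
```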

Within a fifth of a second, Sahin and
colleagues reported, information about the word’s identity arrives in
Broca’s area.

The experiment has yielded evidence that
language processing takes place in a small part of Broca’s area as a sequence
of three distinct stages, each ending before the next one begins. All three
are necessary to complete even a simple task such as looking at a word and
uttering it, though there may be other stages beyond these three. “We
found a tightly timed sequence of brain activity associated with aspects of
identifying the word, grammatically transforming it into the right form for the
context, and then preparing to pronounce the final sound form,” Sahin
explains.

“What surprised me, I think,” said Eric Halgren, a professor of radiology at UCSD and the senior author on the study, “was that there were three separate components, very tightly timed, that were clearly different aspects of
processing a word, modulating, and producing it. It was surprising that
there was that level of temporal and spatial organization.”