"Our ability to communicate is fundamental to being human," says Greg Hickok. "Imagine not being able to speak or write, not being able to understand a conversation or the evening news, not being able to text, email or tweet. Communication is the foundation of our relationships and society; aphasia can take it all away." Steve Zylius / UCI

Each year, nearly 800,000 people in the U.S. experience a stroke – that’s one person every 40 seconds. Stroke is the fifth-leading cause of death, killing almost 130,000 Americans annually – that’s one person every four minutes. But far more survive. In fact, the Centers for Disease Control and Prevention estimates that there are more than 7 million stroke survivors nationwide, making treatment of stroke-related disabilities essential.

Enter Greg Hickok, University of California, Irvine professor of cognitive sciences. Over the past 15 years, he’s received $16 million in funding from the National Institutes of Health – $4 million in the last year alone – to support research on how abnormalities in a brain area tied to stroke-induced aphasia affect speech and language.

“About 1 million people in the U.S. suffer from aphasia, caused most often by stroke,” Hickok says. “A stroke can cause damage to networks in the brain that enable language, which – from a scientific standpoint – is the system that translates thought into speech and speech into thought.”

The result can be devastating.

“Our ability to communicate is fundamental to being human,” he says. “Imagine not being able to speak or write, not being able to understand a conversation or the evening news, not being able to text, email or tweet. Communication is the foundation of our relationships and society; aphasia can take it all away.”

In March, Hickok and researchers at the University of South Carolina and Johns Hopkins University received a Clinical Research Center Grant from the NIH to study the nature of various forms of aphasia, the prognosis for recovery and how best to treat the disorder.

The work relies on Hickok’s dual-stream model, which posits that speech is processed in the brain along two different neural pathways. One, called the “ventral pathway,” relates acoustic speech information to meaning and is used to understand speech; the other, called the “dorsal pathway,” relates acoustic speech information to action and is used to produce speech, he explains.

“The existence of a sensorimotor stream is easy to imagine for a visuomotor task like reaching for a cup, where we use visual information about its shape and location to guide our reach,” he says. “It’s less obvious in language, but studies have shown that in the same way, a word’s sound guides our speech production.”

Hickok, director of UCI’s Center for Language Science, first saw this in action at the neural level while using fMRI to study brain processes linked to speech production. He noticed that, in addition to the expected motor regions, auditory areas of the brain “lit up,” or were activated, when subjects named pictures – even if they only thought about and didn’t actually vocalize the words.

“Stroke-based research found that these activations reflected the critical involvement of auditory areas in speaking. When these regions are damaged, patients tend to struggle to come up with words, and when they do speak, they make a lot of errors,” Hickok says.

He has since been using fMRI and stroke-based methods to zero in on the planum temporale and, in particular, the Sylvian parietal-temporal region of the brain – which, he discovered, is where regulation of auditory-motor processes occurs.

Researchers from the University of South Carolina will use Hickok’s dual-stream model to test whether measures of proportional damage to the two pathways improve aphasia diagnosis and predictions of treatment response beyond what biographical and cognitive/linguistic factors alone can provide.

The collaborative study is directly tied to clinical practice; at its end, the researchers will know more about why some patients respond better than others to aphasia treatment. They’ll also be using treatment approaches routinely employed in clinical practice, allowing for immediate translation of the findings into patient management.

At the same time, Hickok is working with the University of Texas Health Science Center at Houston on a project using electrocorticography – direct cortical recordings taken in neurosurgical patients – to characterize the organization and dynamics of the dorsal pathway in great detail.

“fMRI and stroke-based methods can help us map the location of regions, but they tell us little about the millisecond-by-millisecond dynamics of how the brain actually carries out a given task,” he says. “ECoG, in contrast, records brain activity with excellent temporal and spatial resolution. This allows us to untangle how complex cortical networks interact with each other moment by moment to give rise to behavior.”

Hickok also received renewal funding from the NIH this year to continue his five-year, multisite fMRI study of the subdivisions of the brain’s planum temporale, including the Sylvian parietal-temporal region. The previous work yielded 40 publications on the region’s functional organization in healthy young people. Now Hickok and his team – UCI faculty members Kourosh Saberi, John Middlebrooks and Fan-Gang Zeng – will examine stroke survivors and hearing-impaired patients to see what happens when various portions of this system are damaged.

“We’re focusing on speech production, audiovisual integration and spatial hearing,” he says. “This work will both refine our knowledge of these circuits and the planum temporale and help us understand the sources of speech and language difficulties following stroke.”

Based on these findings, he says, neural prostheses – brain implants – could one day be used to compensate for lost function in some aphasia cases. The idea may seem far-fetched, but given the advances he and his colleagues have made in aphasia research over the past decade and a half, and given that neural prostheses are becoming a reality for some neurological disorders, the possibility is welcome news to the millions who live with stroke-induced language deficits.