Scientists are developing mind-to-machine technology that could enable people who cannot speak to communicate using their own brain waves and a computer screen.
Recently, neuroscientists at the Mayo Clinic campus in Jacksonville, Fla., demonstrated that patients, by focusing on a matrix of letters, can use their brain waves to project letters onto a monitor - with the goal of eventually typing out words and sentences. For example, when a patient concentrates on the letter "q," a "q" appears on the screen.
The technique requires a craniotomy - that is, a surgical incision into the skull - to place electrodes directly onto the surface of the brain. The implanted devices then record electrical activity produced by the firing of nerve cells.
The research is preliminary. However, it potentially could help millions of people with disorders that result in speech loss, such as amyotrophic lateral sclerosis - Lou Gehrig's disease - a progressive deterioration of voluntary muscle movement; spinal cord injuries that require a breathing tube; and "locked-in syndrome," in which patients are awake and aware but paralyzed everywhere except the eyes. Even certain stroke patients could benefit, the researchers said.
"We want to emphasize that this really is at the beginning," said Jerry Shih, MD, the Mayo scientist who, with Dean Krusienski, PhD., of the University of North Florida, studied the technology in six epilepsy patients. Still, "I think the potential applications are wide-ranging," he added. "We want to make this a practical type of device that will have an impact on their function, and quality of life."
While researchers have used the technology in the past to see whether such signals could control prosthetic devices, such as arms, this is believed to be the first research that examines its potential for language.
The researchers studied the technique in epilepsy patients with existing electrodes that had been implanted earlier to monitor their seizures. "These people already had electrodes in their brains, so we didn't have to subject them or anyone else to invasive brain surgery," Shih said.
The scientists wanted to see whether the process was more effective when electrodes were implanted directly onto the surface of the brain - electrocorticography (ECoG) - compared with those placed only on the scalp, known as electroencephalography (EEG). The ECoG technique proved faster and more accurate.
"There was a big difference in the quality of information," Shih said. "With EEG, the electrical signals are significantly distorted as they pass through the skin, the scalp fat, the bony skull - all those layers. There just wasn't as clear a signal, compared to ECoG. Imagine someone with Lou Gehrig's disease trying for five minutes to type out a word, and finding errors in that word. The accuracy wasn't optimal."
In the study, patients sat in front of a monitor that was hooked to a computer running the researchers' software, which was designed to interpret electrical signals coming from the electrodes. The patients looked at the screen, which contained a 6-by-6 matrix with a single letter inside each square. Each time the square containing a certain letter flashed while the patient focused on it, the computer recorded the brain's response to the flashing letter. The patients then focused on specific letters, and the computer software recorded that information.
The computer then calibrated the system to the individual patient's specific brain-wave responses; afterward, when the patient focused on a letter, that letter appeared on the screen.
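The flash-and-score procedure described above can be illustrated with a short simulation. This is only a hedged sketch of the general idea, not the Mayo team's actual software: the `simulated_response` function, the noise level, and the repetition count are all invented stand-ins for real electrode recordings and their decoding.

```python
import random

# The 6-by-6 letter matrix described in the study (contents assumed).
MATRIX = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
          list("STUVWX"), list("YZ1234"), list("567890")]

def simulated_response(row, col, target, noise=0.3):
    """Stand-in for a recorded brain response to one flash: stronger
    when the flashed square holds the letter the patient attends to."""
    evoked = 1.0 if MATRIX[row][col] == target else 0.0
    return evoked + random.gauss(0.0, noise)

def spell_letter(target, repetitions=10):
    """Flash every square `repetitions` times, accumulate the recorded
    responses, and output the letter whose square scored highest."""
    scores = [[0.0] * 6 for _ in range(6)]
    for _ in range(repetitions):
        for r in range(6):
            for c in range(6):
                scores[r][c] += simulated_response(r, c, target)
    best_r, best_c = max(((r, c) for r in range(6) for c in range(6)),
                         key=lambda rc: scores[rc[0]][rc[1]])
    return MATRIX[best_r][best_c]

print(spell_letter("Q"))  # with enough repetitions, almost always "Q"
```

Averaging over repeated flashes is what makes the noisy single-flash responses usable; a real system would classify evoked potentials from the electrodes rather than this synthetic signal.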
If perfected, the technology could help people "with any neurologic disease that impacts a person's ability to effectively communicate, as long as they can attend to the task," Shih said, adding: "It won't work, for example, in people who are demented."
Even certain stroke patients - those whose stroke occurred in the brain stem - could benefit, Shih said. "Those patients can't talk because the pathway to the speech area has been interrupted by the stroke, but the actual center that produces language - the cerebral cortex - is not involved," he said.
It might even be possible to apply the technology to stroke victims whose language center has been injured. Other areas of the brain may be able to compensate, Shih said. "We actually found that these types of signals can be recorded not only from the language center, but from other portions of the brain," he said. "So this still might work even for people whose language area has been destroyed."