For the first time ever, neuroengineers at Columbia University have created a system that successfully takes signals from the brain and translates them into speech.
The results were published this week in the journal Scientific Reports. The study was led by senior author Nima Mesgarani, Ph.D., a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute.
Mesgarani is also an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science. Co-author Ashesh Dinesh Mehta, MD, Ph.D., is a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute.
Engineers at Columbia University developed a system that uses artificial intelligence to monitor brain activity and reconstruct the words a person hears. By combining this artificial intelligence with powerful speech synthesizers, the system translates brain signals into recognizable speech with unprecedented clarity.
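To make that pipeline concrete, here is a minimal, hypothetical sketch in Python: a small neural network is trained to map placeholder neural features to speech spectrogram frames, and the predicted spectrogram is then turned back into audio with a Griffin-Lim synthesizer. The electrode counts, features, network size, and synthesizer are all stand-ins chosen for illustration; they are not the study’s actual data or methods.

```python
# Hypothetical sketch of the reported idea:
# neural recordings -> learned mapping -> speech spectrogram -> synthesized audio.
import numpy as np
from sklearn.neural_network import MLPRegressor
import librosa

# Placeholder training data: each row pairs a window of neural features
# (e.g., activity from auditory-cortex electrodes) with the magnitude
# spectrogram frame of the speech the subject was hearing.
n_frames, n_electrodes, n_freq_bins = 2000, 128, 257
rng = np.random.default_rng(0)
X_neural = rng.random((n_frames, n_electrodes))        # stand-in recordings
Y_spectrogram = rng.random((n_frames, n_freq_bins))    # stand-in target frames

# A simple feed-forward network standing in for a deep decoder.
decoder = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=50)
decoder.fit(X_neural, Y_spectrogram)

# "Inference": decode new brain activity into a spectrogram, then hand it
# to a synthesizer. Griffin-Lim is only a stand-in for the far more capable
# vocoder-style synthesizers described in such work.
predicted = decoder.predict(X_neural[:200]).T          # (freq_bins, frames)
waveform = librosa.griffinlim(np.maximum(predicted, 0.0))
```

The design point the sketch illustrates is that the decoder never generates words symbolically; it predicts an acoustic representation, and a separate speech synthesizer turns that representation into audible sound.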
“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani.
Siri was originally developed by Siri Inc., a spinoff of SRI International (formerly Stanford Research Institute), and grew out of the DARPA-funded CALO project, which SRI described as the largest artificial intelligence project ever launched. Apple acquired Siri in April 2010.
After decades of research, neurologists were able to decipher patterns of activity that occur when people speak. Similarly, recognizable patterns also occur when people listen. But most surprisingly, these activity patterns also emerge when people imagine speaking.
After years of working on this problem, researchers have now successfully mapped out and decoded these patterns, using artificial intelligence and speech synthesizers to convert thought into speech.
These experts say that, in the future, our thoughts could be translated directly into speech.
The neuroengineers involved in the research say that this stunning breakthrough could lead to the development of computers that can communicate directly with the human brain.
Further, it holds enormous promise for people who are unable to speak. Scores of people each year lose speech function due to strokes or diseases such as amyotrophic lateral sclerosis (ALS).
When any kind of new technology is introduced, it’s also important to think about how it might be used beyond its stated, or even its alleged, purpose.
Parts of this technology were originally developed and/or funded by DARPA, the Defense Advanced Research Projects Agency, an agency of the United States Department of Defense responsible for developing emerging technologies for military use. The first question to consider, then, is: what are the military applications of such technology?
The most obvious is interrogation. If scientists can translate thoughts into words, then this technology could, theoretically, be used to obtain secrets someone is trying to hide. Interrogators could potentially take someone who refuses to talk, tap into their brain, and extract their thoughts, translating them into speech.
We also have to consider that this element of the technology may have been developed for the military first and foremost, and is only now being released to the private sector for other applications. This has almost always been the case with technology: new technologies are first developed for military use and then allowed to trickle down to other applications.
Examples include GPS navigation, digital cameras, night vision, drones, the Internet, computers, superglue, duct tape, microwave ovens, and much more.