Speech now flows from the brain in real time

Some smart cookies have built a brain-computer interface that can synthesize speech in real time.
The neuroprosthesis, described in a paper published this week in Nature Neuroscience, aims to let patients with severe paralysis and anarthria – the loss of the ability to speak – talk again by transforming their brain signals into synthesized words.
“Our streaming approach brings the same rapid speech-decoding capacity of devices such as Alexa and Siri to neuroprostheses,” he said in a statement.
“Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more natural, fluent speech synthesis.”
The project improves on work the team reported in 2023 by cutting the delay in decoding and converting thought into speech; at that time it took about eight seconds to produce a sentence.
As the video below shows, the new process runs roughly eight times faster, at around a second per sentence, and works in near real time.
It works by reading the patient’s electrical brain signals after the intent to speak has formed, but before that thought produces any response from the vocal muscles.
“We are essentially intercepting the signals where the thought is translated into articulation, in the middle of that motor control,” he said.
“So what we’re decoding comes after a thought has happened, after we’ve decided what to say, after we’ve decided which words to use and how to move our vocal-tract muscles.”
The neuroprosthesis works by using a deep-learning neural network model to pass 80 ms chunks of electrocorticogram (ECoG) data through a neural decoder, converting the brain signals into sound. The researchers used a recording of the patient’s voice from before the injury to make the model’s output sound more like natural speech.
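To make the chunked, streaming idea concrete, here is a minimal Python sketch of that kind of loop: a stand-in decoder consumes 80 ms windows of ECoG data and a stand-in synthesizer emits the matching slice of audio as soon as each window is decoded. The names (NeuralDecoder, synthesize_chunk, ecog_chunks), the electrode count, and the sampling rates are all assumptions for illustration; this is not the paper’s model or the code the team released.

import numpy as np

SAMPLE_RATE_HZ = 1000          # assumed ECoG sampling rate for this sketch
CHUNK_MS = 80                  # the 80 ms window mentioned in the article
CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MS // 1000
N_ELECTRODES = 253             # placeholder electrode count
AUDIO_RATE_HZ = 16000
AUDIO_SAMPLES_PER_CHUNK = AUDIO_RATE_HZ * CHUNK_MS // 1000

class NeuralDecoder:
    """Stand-in for the trained neural decoder: maps one ECoG chunk to an
    acoustic-feature vector. Here it is just a fixed random projection."""
    def __init__(self, n_electrodes: int, n_features: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.standard_normal((n_electrodes * CHUNK_SAMPLES, n_features)) * 0.01

    def decode(self, ecog_chunk: np.ndarray) -> np.ndarray:
        return ecog_chunk.reshape(-1) @ self.weights

def synthesize_chunk(features: np.ndarray) -> np.ndarray:
    """Stand-in vocoder: turns decoded features into 80 ms of audio samples.
    The real system conditions its output on the patient's pre-injury voice."""
    t = np.linspace(0.0, CHUNK_MS / 1000.0, AUDIO_SAMPLES_PER_CHUNK, endpoint=False)
    freq = 100.0 + 50.0 * float(np.tanh(features.mean()))   # toy feature-to-pitch mapping
    return 0.1 * np.sin(2 * np.pi * freq * t)

def ecog_chunks(n_chunks: int):
    """Fake ECoG source yielding one 80 ms chunk at a time (electrodes x samples)."""
    rng = np.random.default_rng(1)
    for _ in range(n_chunks):
        yield rng.standard_normal((N_ELECTRODES, CHUNK_SAMPLES))

decoder = NeuralDecoder(N_ELECTRODES)
audio_out = []
for chunk in ecog_chunks(n_chunks=25):        # about two seconds of signal
    audio_out.append(synthesize_chunk(decoder.decode(chunk)))
audio = np.concatenate(audio_out)             # audio grows chunk by chunk, i.e. it streams
print(f"synthesized {audio.size / AUDIO_RATE_HZ:.2f} s of audio from 80 ms windows")

The point of the sketch is the shape of the loop: audio accumulates one 80 ms window at a time rather than waiting for a whole sentence, which is what makes near real-time output possible.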
Although this particular neuroprosthesis requires a direct, surgically implanted electrical connection to the brain, the researchers believe the approach can be generalized to other interfaces, including implanted microelectrode arrays (MEAs) and non-invasive surface electromyography (sEMG).
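As a rough illustration of that generalization claim, here is another hypothetical Python sketch, with invented class names and channel counts, that separates the signal source from the decoding loop so the same streaming pipeline could in principle be fed by ECoG electrodes, an implanted MEA, or an sEMG wrist sensor.

from typing import Iterator, Protocol
import numpy as np

class SignalSource(Protocol):
    def chunks(self) -> Iterator[np.ndarray]:
        """Yield fixed-length windows of neural or muscle activity (channels x samples)."""
        ...

class EcogSource:
    # Cortical-surface electrodes, as in the study; channel count is a placeholder.
    def chunks(self):
        rng = np.random.default_rng(0)
        while True:
            yield rng.standard_normal((253, 80))

class SemgWristSource:
    # Non-invasive surface EMG from a wrist sensor: far fewer channels, no surgery.
    def chunks(self):
        rng = np.random.default_rng(1)
        while True:
            yield rng.standard_normal((16, 80))

def stream_speech(source: SignalSource, decode, synthesize, n_chunks: int):
    # The streaming loop itself does not care where the signal comes from.
    for i, chunk in enumerate(source.chunks()):
        if i >= n_chunks:
            break
        yield synthesize(decode(chunk))

# Toy usage with stand-in decode/synthesize functions.
for audio in stream_speech(SemgWristSource(),
                           decode=lambda c: c.mean(axis=0),
                           synthesize=lambda f: float(f.mean()),
                           n_chunks=3):
    print(f"decoded one 80 ms window -> {audio:.4f}")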
It builds on research funded by Facebook, which the social media business abandoned four years ago in favor of pursuing sEMG wrist sensors. Edward Chang, head of neurological surgery at UCSF, who oversaw the Facebook-funded project, is the senior researcher on this latest study.
Code for the streaming brain2speech decoder has been posted to GitHub, should anyone want to reproduce the researchers’ results. ®