A severely paralyzed man, his voice silenced for years, is able to communicate using technology that deciphers electrical impulses generated by his brain when he attempts to speak, researchers reported Wednesday.

The advance, announced by the University of California at San Francisco, is believed to mark the first time anyone has restored the power to communicate in words and short sentences to someone who had lost it because of neurological damage.

The 38-year-old man, who chose to remain anonymous but is dubbed BRAVO-1 in the study, suffered a brain stem stroke 15 years ago that severed the neural connection between his brain and his vocal cords. He is paralyzed from the neck down and has been communicating by painstakingly tapping letters on a keyboard with a pointer attached to the bill of a baseball cap.

Now, merely by trying to utter words, he has 50 at his disposal and can create short sentences that primarily concern his well-being and care. A computer decodes his brain activity and displays the sentences on a screen with a median accuracy of about 75 percent, at a rate of more than 15 words per minute. Average conversational speech occurs at about 150 words per minute.

Christian Herff, an assistant professor of neural engineering at Maastricht University in the Netherlands who was not involved in the new work, called the progress described in the study “gigantic.” Previous research had demonstrated the same technique in test subjects who still were able to speak.

“It’s actually quite a big deal,” Herff said. “This is the first study that really does it in a patient who is not able to speak.”

Edward F. Chang, chairman of UCSF’s Department of Neurological Surgery and leader of the research team, said the advance would not have been possible even five years ago. Since then, progress in artificial intelligence and the decoding of neural signals led to the result published Wednesday in the New England Journal of Medicine. The researchers described the technology as a “neuroprosthesis.”

Chang said in an interview that he has been working in this area for 10 years, motivated by the patients he saw who had lost the ability to speak. Thousands of people suffer that fate each year as a result of strokes, trauma and diseases such as amyotrophic lateral sclerosis — ALS — and cerebral palsy.

“I just see every day how devastating it is for our patients who have lost the ability to speak after a stroke or a brain injury,” Chang said. “It’s part of what makes us human. When you’ve lost it, it’s really devastating.”

Chang implanted a grid of electrodes over the patient’s sensorimotor cortex, the region of the brain that controls the production of speech. A wire carries the electrical signals from the electrodes to a port permanently attached to the top of his head, which can be connected by a cable to a computer.

During 48 sessions totaling 22 hours, the scientists recorded the brain signals BRAVO-1 produced as he attempted to say 50 words flashed on a screen. They then used “deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity,” according to their paper. They employed other models to predict the most likely next words in the sentences the man was trying to say.
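To make that two-step idea concrete, here is a minimal sketch, not the UCSF team’s code: it assumes a toy setup in which each attempted word yields a vector of cortical features, a stand-in classifier scores it against a small vocabulary, and a simple bigram language model re-weights the whole sequence. The vocabulary list, the `classify_word` and `decode_sentence` functions, and the random demo data are all invented for illustration.

```python
# Illustrative sketch only: classify each attempted word, then combine the
# per-word probabilities with a language-model prior over the sequence.
import numpy as np

# Stand-in for the study's 50-word vocabulary (hypothetical subset).
VOCAB = ["i", "am", "thirsty", "need", "my", "family", "nurse", "good"]

rng = np.random.default_rng(0)

def classify_word(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Return a probability over the vocabulary for one attempted word.
    'weights' stands in for a trained neural-network classifier."""
    logits = weights @ features
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def decode_sentence(word_probs: list, bigram: np.ndarray) -> list:
    """Viterbi-style search combining classifier probabilities for each
    word position with a bigram language-model prior."""
    V, n = len(VOCAB), len(word_probs)
    log_bigram = np.log(bigram + 1e-9)
    score = np.log(word_probs[0] + 1e-9)   # best log-score ending in each word
    back = np.zeros((n, V), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + log_bigram  # previous word -> next word
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + np.log(word_probs[t] + 1e-9)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

# Toy demo: random "cortical features" for a three-word attempt and a
# uniform bigram prior; a real system learns both from recorded data.
weights = rng.normal(size=(len(VOCAB), 16))
word_probs = [classify_word(rng.normal(size=16), weights) for _ in range(3)]
bigram = np.full((len(VOCAB), len(VOCAB)), 1.0 / len(VOCAB))
print(decode_sentence(word_probs, bigram))
```

With a uniform prior the search simply trusts the classifier; the point of the language model is that learned word-to-word probabilities can overrule a shaky single-word guess when it would produce an unlikely sentence.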

Chang said mere chance would have resulted in a 2 percent accuracy rate with a vocabulary of 50 words. The researchers were able to produce correct words and sentences as much as 93 percent of the time.

Most research in this corner of the brain-computer interface field has been conducted on patients with epilepsy who volunteer after having electrodes implanted to diagnose the source of their seizures. Chang and other scientists believed a person with anarthria — the inability to speak — still would be able to generate the same brain activity, but it wasn’t certain until his team succeeded.

A decade ago, researchers showed that sounds, or phonemes — rather than full words — could be deciphered, but with much less accuracy than was achieved in Chang’s effort. In May, Stanford University researchers published work showing that a paralyzed man could write whole sentences with similar technology by imagining himself writing the letters.

Voice-recognition software that is ubiquitous on cellphones, computers and elsewhere was developed with many more hours of repetition and refinement than Chang’s group was able to put in with a severely disabled patient, other experts said. Expanding BRAVO-1’s vocabulary and determining whether the technology works for others will require much more data for the algorithms to decode, they said. Improving accuracy is another goal.

“While this was impressive, there still is substantial room for improvement in terms of the accuracy of single-word decoding and sentence decoding,” said Marc W. Slutzky, a professor of neurology at Northwestern University’s Feinberg School of Medicine. Another major step forward would be fully implantable devices that communicate with decoding devices wirelessly, he said.

Herff said others may want to try routing the deciphered language through a voice synthesizer rather than onto a screen, to allow for intonation and expression that make speech such an important human trait.

“This is really just the beginning,” Chang said. “We’re not saying we’ve accomplished anything. . . . It’s really just the start.”