A new algorithm developed by Ohio State researchers has led to a breakthrough in helping hearing-impaired people better distinguish speech from background noise.

By discarding noisy data and amplifying speech data, the algorithm has improved speech understanding for hearing-impaired listeners, said Yuxuan Wang, a computer science and engineering Ph.D. student who worked on the study. The algorithm tries to predict where speech dominates the sound and where noise dominates, he said.

“At the noise-dominant places, we basically discard the signal noise. We only retain the places where speech dominates,” Wang said. “Once we can do this, we can basically restore the intelligible hearing.”
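The keep-or-discard idea Wang describes can be sketched as a binary mask over short time-frequency units: keep a unit if speech power exceeds noise power, zero it out otherwise. The function names, list-based layout and 0 dB threshold below are illustrative assumptions, not the researchers' actual system; in a real system the speech and noise estimates would come from a trained model, since the clean speech is unknown at test time.

```python
import math

def binary_mask(speech_power, noise_power, threshold_db=0.0):
    """Mark each time-frequency unit as speech-dominated (1) or
    noise-dominated (0), based on estimated per-unit power."""
    mask = []
    for s, n in zip(speech_power, noise_power):
        snr_db = 10.0 * math.log10(s / (n + 1e-12))
        mask.append(1 if snr_db > threshold_db else 0)  # 1 = keep, 0 = discard
    return mask

def apply_mask(noisy_units, mask):
    """Zero out the noise-dominated units; keep the speech-dominated ones."""
    return [x * m for x, m in zip(noisy_units, mask)]
```

For example, given units where speech power is [4.0, 1.0] and noise power is [1.0, 4.0], the mask is [1, 0]: the first unit is kept and the second is discarded.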

At its current stage, though, there is no usable prototype.

“Right now, this technology requires a supercomputer to run,” said Sarah Yoho, a speech and hearing science Ph.D. student who worked on the project.

To test the algorithm, 12 hearing-impaired people were put in a sound booth where they listened to noisy speech — speech with background noise — through headphones, Yoho said. Afterward, the same speech was played again after being processed by the algorithm, which had removed the background noise.

“The headphones were essentially doing what a hearing aid would do if it had this technology in it,” Yoho said.

The participants performed poorly at recognizing speech during the noisy portion of the test because they couldn’t pick out the words, she said. But after the sounds were processed through the algorithm, people were able to understand what was being said.

“It’s pretty amazing. In our first study, we had subjects going from something like 10 percent in noise, so they’re only getting 10 percent of the words — maybe one word every couple of sentences — to 85 percent of the time getting all the words,” Yoho said. “These gigantic improvements were really, really amazing.”

A control group of 12 normal-hearing undergraduates was also used in the test.

Wang said one person didn’t recognize any words without the speech-processing algorithm but identified about 75 percent with it.

“It’s like day and night,” he said.

Additionally, the hearing information can be processed in real time, Wang said.

The National Institutes of Health recently provided a $1.8 million grant to the researchers to help continue the research and develop a device that could be built into a hearing aid, Yoho said.

The grant was awarded because the project marked the first time an algorithm had shown promising speech-intelligibility improvements in hearing-impaired listeners, she said.

DeLiang “Leon” Wang, a professor of computer science and engineering who helped develop the algorithm, said the technology solves one of the biggest issues hearing-impaired people face.

“The speech separation problem is considered the ‘holy grail’ of all the (hearing-impaired) problems,” he said.

He said his lab has been working on this project for about 12 years.

The researchers implemented a machine-learning algorithm to learn which data to keep and which to throw away, Leon Wang said.

“If you throw away too much, you don’t improve people’s ability to hear. If you throw away too little, you also don’t improve,” he said. “Knowing where to throw away and where to keep — that’s the essence of this technology.”
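The "knowing where to throw away and where to keep" trade-off Leon Wang describes can be illustrated with a toy learning step: given example units labeled speech-dominated (1) or noise-dominated (0), search for the decision threshold that separates them best. This brute-force threshold search is a deliberately simplified stand-in for the lab's actual machine-learning classifier; the feature values and labels below are made up for illustration.

```python
def learn_threshold(features, labels):
    """Pick the cutoff on a per-unit feature (e.g. an energy ratio)
    that best separates speech-dominated units (label 1) from
    noise-dominated units (label 0).

    Too low a cutoff keeps everything (throws away too little);
    too high a cutoff discards everything (throws away too much).
    """
    best_t, best_acc = None, -1.0
    for t in sorted(set(features)):
        preds = [1 if f > t else 0 for f in features]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

With features [0.2, 0.4, 0.8, 0.9] and labels [0, 0, 1, 1], the search settles on 0.4: units with a feature above 0.4 are kept, the rest discarded, classifying all four examples correctly.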

Going forward, the researchers will work on refining their technology, which Leon Wang said will include focusing on distinguishing between certain letters that sound similar.

“We’ve just made the first crack at this very, very tough knot,” he said. “That’s the breakthrough … now we’ll see to what extent we can push this algorithm and to what extent we can make the algorithm implementable in a device.”