
Leap Ears: AI storms the headset room

They folded in a microphone, took away the wires, added noise cancellation and some admittedly puzzling controls.

(Clockwise from top left) hearing aids from Orka, Starkey, Oticon, Phonak, Signia, Widex and Apple.

Where do earphones go from here? The segment had been fairly quiet.

There are plans for noise cancelers that let parts of the audio world back in, selectively; and for earbuds that can double as hearing tests and part-time hearing aids. AI-led programs aim to sift through environmental sounds, “like a human brain would,” and decide which elements to emphasize or suppress.


The big change happening today involves selectively reversing noise cancellation.

Experimental new hearing aids use deep learning models and neural networks (machine learning algorithms that try to mimic the brain) to let listeners move seamlessly between the real world around them and the virtual worlds they immerse themselves in through their devices.

“Just like with virtual reality and augmented reality, a lot happens with audio. The goal for many companies is to build augmented audio products,” says technology analyst Kashyap Kompella.

This would mean, for example, that a walker could choose to hear birdsong, but nothing else, over their music; or register an ambulance siren sharply above all other sounds.

The idea is to “teach noise cancellation systems who and what we want to hear,” says Bandhav Veluri, a PhD student in computer science at the University of Washington.

He and other researchers at Washington and Carnegie Mellon universities have been working in this area of “semantic hearing.”

“Neural networks are good at categorizing data,” says Veluri. “Whether it’s cats meowing or people talking, the networks can recognize sound clusters.” This ability can be used to teach them which clusters to suppress and what types of sounds to let in.

Listeners might want to adjust the settings: someone with a pet might want all cat sounds to come in; someone else living near a hospital might want all sirens turned off.
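
For the technically curious, here is a minimal sketch of how such a class-based filter could be wired up. It is not the researchers’ system: the classifier below is a stand-in that returns random scores, and the class names, gains and thresholds are illustrative assumptions.

```python
# Toy "semantic hearing" filter: score each short audio frame against a set of
# sound classes, then pass or mute the frame according to the user's allow-list.
# The classifier is a stand-in; a real system would run a trained neural network.
import numpy as np

SOUND_CLASSES = ["speech", "birdsong", "siren", "cat", "traffic", "music"]
user_gains = {"birdsong": 1.0, "siren": 1.0}  # classes not listed default to muted


def classify_frame(frame: np.ndarray) -> dict:
    """Stand-in classifier: one score per sound class for a single audio frame."""
    rng = np.random.default_rng(abs(hash(frame.tobytes())) % (2**32))
    scores = rng.random(len(SOUND_CLASSES))
    return dict(zip(SOUND_CLASSES, scores / scores.sum()))


def semantic_filter(frame: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Keep a frame only if it is dominated by a class the user wants to hear."""
    scores = classify_frame(frame)
    gain = max((user_gains.get(cls, 0.0)
                for cls, score in scores.items() if score >= threshold),
               default=0.0)
    return frame * max(gain, 0.05)  # faint floor so the world never vanishes entirely


# Process one second of 16 kHz audio in 20 ms frames.
audio = np.random.randn(16_000).astype(np.float32)
filtered = np.concatenate([semantic_filter(f) for f in audio.reshape(-1, 320)])
```

In a real device the classification and gain would be applied per separated sound source rather than per frame, but the user-facing controls would look much like the allow-list above.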

We can expect such features to appear on our devices within one or two years, says Kompella.

Look now…

In a related project, Veluri and his fellow researchers also want to use movement as a signal.

An experimental system called Look Once to Hear allows the listener to turn their head toward a person to hear their voice above other ambient sounds. (There are a number of restaurants we could mention where this would be so helpful.)

“If we want to hear a specific person with existing techniques, we need a voice sample,” says Veluri. “The program then maps the voice characteristics in real time, using the fixed set of voice samples in memory, to determine which voice to tune to.”

A prototype of the Look Once to Hear headphones. (Courtesy of Bandhav Veluri)

With the experimental program, the direction of the wearer’s head provides that signal instead, and the system can lock on to speakers it has never encountered before.
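
A rough, hypothetical sketch of that two-step idea: capture a short enrollment snippet while the wearer is facing the speaker, turn it into a voice fingerprint, then keep only incoming audio that matches. The fingerprint function below is a crude stand-in (band-averaged spectra); the actual system relies on learned speaker embeddings and real-time source separation.

```python
# Sketch of the "look once, then hear" idea: enroll a target voice from a brief
# glance-aligned snippet, then gate the incoming stream on similarity to it.
# Both the fingerprint and the gating are simplified stand-ins.
import numpy as np


def voice_fingerprint(snippet: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Crude speaker fingerprint: average magnitude spectrum folded into bands."""
    spectrum = np.abs(np.fft.rfft(snippet))
    bands = np.array([b.mean() for b in np.array_split(spectrum, n_bands)])
    return bands / (np.linalg.norm(bands) + 1e-9)


def enroll_while_facing(binaural_frame: np.ndarray) -> np.ndarray:
    """Enrollment: facing the speaker, their voice reaches both ears nearly in
    phase, so summing the two channels emphasizes it over off-axis sounds."""
    left, right = binaural_frame
    return voice_fingerprint(left + right)


def extract_target(frame: np.ndarray, target: np.ndarray, gate: float = 0.8) -> np.ndarray:
    """Pass a frame only if its fingerprint is close to the enrolled speaker's."""
    similarity = float(voice_fingerprint(frame) @ target)
    return frame if similarity >= gate else frame * 0.05


# Usage: enroll from a two-second look, then filter the stream in half-second frames.
sr = 16_000
target_print = enroll_while_facing(np.random.randn(2, 2 * sr))
stream = np.random.randn(10 * sr).reshape(-1, sr // 2)
heard = np.concatenate([extract_target(f, target_print) for f in stream])
```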

What happens in a crowded room? The next step will be to fine-tune the system for such scenarios, Veluri says.

Researchers at the University of Washington are trying to teach the system how to essentially monitor a conversation. It could use turn-taking patterns to recognize which voices are part of the chat, Veluri says.
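
One plausible, purely illustrative way to encode that turn-taking heuristic: given diarized speech segments with speaker labels and timestamps, treat anyone whose turns directly alternate with the wearer’s, within a short gap, as part of the conversation. The thresholds and the example timeline below are made up for the sketch.

```python
# Toy turn-taking heuristic: speakers whose turns directly follow or precede the
# wearer's turns (within a small gap, allowing brief overlap) count as partners.
from dataclasses import dataclass


@dataclass
class Segment:
    speaker: str
    start: float  # seconds
    end: float


def conversation_partners(segments: list[Segment], target: str,
                          max_gap: float = 2.0, max_overlap: float = 0.5) -> set[str]:
    """Return speakers who take turns with the target speaker."""
    ordered = sorted(segments, key=lambda s: s.start)
    partners: set[str] = set()
    for prev, nxt in zip(ordered, ordered[1:]):
        pair = {prev.speaker, nxt.speaker}
        gap = nxt.start - prev.end
        if target in pair and len(pair) == 2 and -max_overlap <= gap <= max_gap:
            partners.update(pair - {target})
    return partners


timeline = [
    Segment("you", 0.0, 2.0), Segment("friend", 2.3, 4.0),
    Segment("stranger_at_next_table", 4.0, 30.0),
    Segment("you", 4.2, 5.5), Segment("friend", 5.8, 7.0),
]
print(conversation_partners(timeline, "you"))  # {'friend'}; the stranger never trades turns with you
```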

Stop…

What is the farthest sound you can hear right now? Why weren’t you aware of it a second ago?

Volume is just one of the signals the brain uses when listening. It continually prioritizes, drawing up a hierarchy of sounds.

Could headphones possibly ‘listen’ in this way, the way humans do?

In February, researchers at Ohio State University developed a machine learning model that prioritizes input sounds based on “judgment.”

The goal is for the model to make reasonable assumptions in real time about which sounds are likely to be important. The first findings were published this year in the journal IEEE/ACM Transactions on Audio, Speech and Language Processing.
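
The sketch below is only a toy illustration of the general idea, not the Ohio State model: each detected sound gets an importance score built from several cues, of which loudness is just one, and the device emphasizes the top-ranked sounds. The features, weights and example values are assumptions.

```python
# Toy importance-based prioritization: rank detected sounds on several cues and
# emphasize the highest-ranked ones, instead of simply boosting whatever is loudest.
from dataclasses import dataclass


@dataclass
class SoundEvent:
    label: str
    loudness: float    # 0..1, relative level
    novelty: float     # 0..1, how unexpected the sound is in this scene
    speechlike: float  # 0..1, similarity to human speech


def importance(e: SoundEvent) -> float:
    """Loudness is only one cue among several."""
    return 0.3 * e.loudness + 0.4 * e.novelty + 0.3 * e.speechlike


def prioritize(events: list[SoundEvent], keep: int = 2) -> list[str]:
    """Labels of the sounds the device should emphasize."""
    return [e.label for e in sorted(events, key=importance, reverse=True)[:keep]]


scene = [
    SoundEvent("air conditioner", loudness=0.9, novelty=0.1, speechlike=0.0),
    SoundEvent("colleague speaking", loudness=0.4, novelty=0.5, speechlike=0.9),
    SoundEvent("phone notification", loudness=0.3, novelty=0.8, speechlike=0.0),
]
print(prioritize(scene))  # ['colleague speaking', 'phone notification']; the loud AC loses out
```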

What makes this so challenging, of course, is that listening is extremely subjective. “It depends on your hearing abilities and your hearing experiences,” said Donald Williamson, associate professor of computer science and engineering, in a statement.

It’s even possible that no one hears your world exactly as you do.

Listen…

Elsewhere, AI chips are changing the way hearing aids work. Deep neural networks sift through layers of noise to represent individual sounds more clearly.

The More range of hearing aids from the Danish company Oticon is said to be the first to integrate such a network (the company has named it the BrainHearing system). Prices start at $1,530 (Rs 1.3 lakh) for a single earpiece.

Companies such as Orka, Widex, Starkey, Signia and Phonak, meanwhile, use AI and machine learning so that their hearing aids can also monitor heart rate, blood pressure and step counts, and issue fall alerts. Prices range from about Rs 25,000 to Rs 3 lakh.

Some mainstream products are also entering rudimentary hearing aid territory.

Apple’s AirPods now offer a Live Listen feature that turns the iPhone into a remote microphone: place the phone near the source of the sound you want to hear, and that audio is streamed to the earbuds above the background noise. This can serve as a hearing aid of sorts for people with mild hearing loss.

Some AirPods come with a hearing test feature that can loosely determine hearing levels through an audio test. Some are also known for their fall detection, with sensors that can send an alert to an emergency contact when a fall is detected.

The sensors also play a crucial role here. They help determine, for example, that the wearer has fallen with the AirPod in their ear, rather than the earbud simply being dropped and bouncing off the floor (as happens all too often).