Long delay radio echoes probe

Ruidong Zhang built the EchoSpeech prototype on an inexpensive, off-the-shelf pair of glasses. On his right, two tiny microphones peek out from underneath the lens. On his left, two speakers hang down a bit more. In future versions, this equipment could be totally hidden inside the frames.

His team built the new lip-reading tech on a pair of eyeglasses. It uses acoustics, or sound, to recognize silent speech. Zhang presented this work April 19 at the ACM Conference on Human Factors in Computing Systems in Hamburg, Germany.

Today, voice commands aren’t private, says Pattie Maes. She’s an expert in human-computer interactions and artificial intelligence (AI). She works at the Massachusetts Institute of Technology in Cambridge. Developing “silent, hands-free and eyes-free approaches” could make digital interactions more accessible while keeping them confidential, she says.

Maes wasn’t involved in the new work, but she has developed other types of silent speech interfaces. She’s eager to see how this one compares in areas such as usability, privacy and accuracy. “I am excited to see this novel acoustic approach,” she says.

“Imagine the sonar system that whales or submarines use,” says Zhang. They send a sound into their environment and listen for echoes. From those echoes, they locate objects in their surroundings. “Our approach is similar, but not exactly the same,” Zhang explains. “We’re not just interested in locating something.”
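Zhang’s sonar analogy rests on one piece of physics: an emitted sound reflects off an object, and the round-trip delay of the echo encodes the object’s distance. Below is a minimal sketch of that principle only, not the EchoSpeech system itself; the sample rate, speed of sound, test signals and the echo_distance helper are all illustrative assumptions, not details from Zhang’s work.

    import numpy as np

    # Sketch of the sonar principle Zhang describes: emit a sound, listen
    # for the echo, and infer distance from the round-trip delay.
    # Illustrative values only; NOT the EchoSpeech algorithm.
    SAMPLE_RATE = 48_000        # samples per second (assumed)
    SPEED_OF_SOUND = 343.0      # meters per second in air at about 20 C

    def echo_distance(emitted: np.ndarray, recorded: np.ndarray) -> float:
        """Estimate distance to a reflector from the echo delay.

        Cross-correlates the emitted pulse with the microphone recording;
        the lag of the correlation peak is the round-trip travel time.
        """
        corr = np.correlate(recorded, emitted, mode="valid")
        delay_samples = int(np.argmax(np.abs(corr)))
        round_trip_s = delay_samples / SAMPLE_RATE
        # Sound travels to the object and back, so halve the path length.
        return SPEED_OF_SOUND * round_trip_s / 2.0

    # Toy usage: an 8 kHz pulse whose echo arrives 134 samples
    # (about 2.8 ms) later, i.e. a reflector roughly 0.48 m away.
    pulse = np.sin(2 * np.pi * 8_000 * np.arange(0, 0.001, 1 / SAMPLE_RATE))
    recording = np.zeros(4_800)
    recording[134:134 + pulse.size] += 0.3 * pulse   # attenuated echo
    print(f"estimated distance: {echo_distance(pulse, recording):.2f} m")

As Zhang’s “not just interested in locating something” caveat signals, the real system goes beyond ranging a single reflector; the sketch above shows only the echo-delay physics the analogy rests on.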