MIT, Microsoft, and Adobe have been working together to create this little wonder: an algorithm that turns everyday objects into microphones.
Researchers from MIT, Microsoft, and Adobe have created an algorithm that can intelligibly reconstruct an audio signal from the minute vibrations it produces in nearby objects. In one experiment, the team recovered speech from the vibrations of a potato chip bag filmed from 15 feet away through soundproof glass.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, MIT grad student and first author of the research paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”
In addition to the potato chip bag, the team was able to recover useful audio from aluminium foil, the surface of a glass of water, and the leaves of a potted plant. Success depends on the quality of the video, and in some experiments the researchers used a high-speed camera capturing 2,000 to 6,000 frames per second. However, the team was able to obtain useful data even from an ordinary digital camera at 60 frames per second, partly because the rolling shutter in consumer cameras scans each frame line by line and so registers vibrations faster than the nominal frame rate. The results weren't perfect, but the 60-frames-per-second audio is good enough to identify the gender of a speaker, the number of speakers in a room, and possibly even their identities.
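To make the frame-rate point concrete, here is a minimal sketch in Python of the basic idea, not the team's actual method: their pipeline tracks sub-pixel, phase-based motion, while this toy version (the function name and the crude mean-intensity motion measure are illustrative assumptions) simply turns frame-to-frame intensity changes into a waveform sampled at the camera's frame rate.

```python
import numpy as np

def recover_audio_sketch(frames, fps):
    """Toy visual-microphone sketch (not the MIT pipeline).

    frames: array of shape (num_frames, height, width), grayscale video.
    fps:    camera frame rate; by Nyquist, frequencies above fps / 2
            cannot be recovered directly, which is why the team used
            2,000-6,000 fps cameras for full speech.
    """
    # Crude motion measure: mean absolute intensity change between
    # consecutive frames. The real method is far more sensitive,
    # tracking sub-pixel phase shifts across the whole image.
    motion = np.abs(np.diff(frames.astype(np.float64), axis=0)).mean(axis=(1, 2))

    # Remove the DC offset and normalize to [-1, 1] so the signal
    # can be played back as audio sampled at `fps` Hz.
    motion -= motion.mean()
    peak = np.max(np.abs(motion))
    return motion / peak if peak > 0 else motion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frames = rng.random((200, 64, 64))   # stand-in for real video
    audio = recover_audio_sketch(fake_frames, fps=2000)
    print(audio.shape)  # (199,) samples at 2,000 Hz
```

At 2,000 frames per second that Nyquist ceiling is about 1,000 Hz, which covers much of the energy in human speech; at 60 frames per second it is only 30 Hz, which is why the consumer-camera results lean on the rolling shutter.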
While the spycraft applications are clear-cut, the algorithm's inventors hope it will enable a new kind of imaging. The team is especially interested in what an object's vibrations reveal about its material properties and its response to bursts of sound. The previously invisible – a baby's breathing, the pulse in your wrist – suddenly becomes visible, making for an entirely new kind of video capture.
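As a rough illustration of that imaging idea (a sketch under assumptions, not the team's published method; the helper name and the toy signal are invented for the example), an object's dominant resonant frequencies can be read off as peaks in the spectrum of its recovered motion signal, and those resonances are what hint at properties like stiffness and damping.

```python
import numpy as np

def resonant_peaks(motion, fps, top_k=3):
    """Find the strongest vibration frequencies in a recovered
    motion signal sampled at `fps` Hz (illustrative sketch only)."""
    spectrum = np.abs(np.fft.rfft(motion - motion.mean()))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
    # The strongest spectral peaks approximate the object's dominant
    # resonances, which depend on its shape, stiffness, and damping.
    strongest = np.argsort(spectrum)[-top_k:][::-1]
    return [(freqs[i], spectrum[i]) for i in strongest]

# Example: a toy 440 Hz vibration sampled at 2,000 fps shows up
# as the top peak despite added noise.
fps = 2000
t = np.arange(4000) / fps
rng = np.random.default_rng(1)
motion = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
print(resonant_peaks(motion, fps))
```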
“I’m sure there will be applications that nobody will expect,” says Davis. “I think the hallmark of good science is when you do something just because it’s cool and then somebody turns around and uses it for something you never imagined. It’s really nice to have this type of creative stuff.”
The team will present its findings in a paper at this year's SIGGRAPH computer graphics conference.
Source: MIT
Published: Aug 4, 2014 01:53 pm