A team led by Cheng Zhang, assistant professor of information science, and François Guimbretière, professor of information science, designed the system, named EarIO. It transmits facial movements to a smartphone in real time and is compatible with commercially available headsets for hands-free, cordless video conferencing.
Devices that track facial movements using a camera are “large, heavy and energy-hungry, which is a big issue for wearables,” said Zhang. “Also importantly, they capture a lot of private information.”
The team described their earable in "EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements."
The EarIO works like a ship sending out pulses of sonar. A speaker on each side of the earphone sends acoustic signals to the sides of the face, and a microphone picks up the echoes. As wearers talk, smile or raise their eyebrows, the skin moves and stretches, changing the echo profiles. A deep learning algorithm developed by the researchers continually processes the data, translating the shifting echoes into complete facial expressions.
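To make the sensing loop concrete, here is a minimal sketch in Python of how a chirp probe and cross-correlation can turn a microphone recording into an echo profile. Everything in it is illustrative: the sample rate, sweep band, echo delay and the simulated recording are assumptions for the sketch, not parameters from the EarIO paper, which describes its own signal design and deep learning model.

```python
import numpy as np

FS = 48_000               # assumed sample rate (Hz), illustrative only
CHIRP_MS = 10             # assumed probe duration (ms)
F0, F1 = 16_000, 20_000   # assumed near-ultrasonic sweep band (Hz)

def make_chirp(fs=FS, dur_ms=CHIRP_MS, f0=F0, f1=F1):
    """Linear frequency sweep used as the acoustic probe signal."""
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    k = (f1 - f0) / t[-1]  # sweep rate (Hz per second)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def echo_profile(recording, probe):
    """Cross-correlate the mic signal with the probe; correlation
    peaks correspond to reflections at different path lengths."""
    return np.abs(np.correlate(recording, probe, mode="valid"))

# Simulate one sensing frame: a single weak reflection off the skin
rng = np.random.default_rng(0)
probe = make_chirp()
frame = np.zeros(2 * len(probe))
delay = 30                                      # echo delay in samples (illustrative)
frame[delay:delay + len(probe)] += 0.2 * probe  # attenuated echo off the cheek
frame += 0.01 * rng.standard_normal(len(frame)) # sensor noise

profile = echo_profile(frame, probe)
print("echo peak at sample offset", int(np.argmax(profile)))  # ~30

# In the full system, sequences of these profiles would be fed to the
# trained deep learning model, which outputs facial-expression parameters.
```

As the skin moves, the echo delays and amplitudes shift, so each facial expression leaves a distinct pattern in the sequence of profiles for the model to learn.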
By collecting sound instead of data-heavy images, the earable can communicate with a smartphone over a Bluetooth connection, keeping the user's information private. With images, the device would need to connect to a Wi-Fi network and send data back and forth to the cloud, potentially making it vulnerable to hackers.
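A rough back-of-envelope comparison shows why sound is so much lighter than images. The figures below are generic assumptions (16-bit mono audio at 48 kHz, uncompressed 640x480 RGB video at 30 fps), not measurements of the EarIO device.

```python
# Back-of-envelope bandwidth comparison (illustrative assumptions only).
audio_bytes_per_sec = 48_000 * 2          # 48 kHz mono, 16-bit samples
video_bytes_per_sec = 640 * 480 * 3 * 30  # 640x480 RGB frames at 30 fps

print(f"acoustic stream:  {audio_bytes_per_sec / 1e3:.0f} KB/s")   # ~96 KB/s
print(f"raw video stream: {video_bytes_per_sec / 1e6:.1f} MB/s")   # ~27.6 MB/s
print(f"ratio: ~{video_bytes_per_sec / audio_bytes_per_sec:.0f}x") # ~288x
```

Even uncompressed, the acoustic stream is hundreds of times smaller than raw video, which is what makes a cloud-free, Bluetooth-only design practical.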
“People may not realize how smart wearables are – what that information says about you, and what companies can do with that information,” Guimbretière said. With images of the face, someone could also infer emotions and actions. “The goal of this project is to be sure that all the information, which is very valuable to your privacy, is always under your control and computed locally.”
Media note: Video of the earphone device can be viewed and downloaded here: https://cornell.box.com/v/cornellearable
For more information, see this Cornell Chronicle story.
-30-