National Science Foundation Convergence Accelerator awards $1 million grant to team

A Rochester Institute of Technology researcher is part of a team that has been awarded a National Science Foundation grant to use artificial intelligence to better understand the role of facial expressions in signed and spoken languages.

As part of the project, researchers will develop an application to teach American Sign Language learners about associated facial expressions and head gestures. They will also develop an application that anonymizes a signer's face in video when privacy is a concern.

The nearly $1 million grant is part of the NSF Convergence Accelerator, a program that supports use-inspired, team-based, and multidisciplinary research to produce solutions to national-scale societal challenges.

The project, called "Data and AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, American Sign Language (ASL) Education and Linguistic Research," is co-led by a multidisciplinary team of researchers at three universities. The team includes Matt Huenerfauth, professor and expert in computing accessibility research at RIT; Dimitris Metaxas, Distinguished Professor of computer science at Rutgers University; Mariapaola D'Imperio, Distinguished Professor of linguistics at Rutgers University; and Carol Neidle, professor of linguistics and French at Boston University.

"This is an exciting opportunity to learn how to transition fundamental research to applied technology development," said Huenerfauth, who is also director of RIT's School of Information (iSchool). "The way in which this program integrates outreach to end users of technology early in the research and development process aligns with best practices in the fields of human-computer interaction and user experience, which emphasize a user-centered approach to design."

The project looks at facial expressions and head gestures, which are essential components of both spoken and signed languages, including ASL. In the U.S., ASL is the primary means of communication for more than 500,000 people and the third-most-studied "foreign" language.

Throughout the nine-month project, the researchers aim to create tools that facilitate and accelerate research into the role of facial expressions in both signed and spoken language.

The first application researchers are building will help ASL second-language learners produce the facial expressions and head gestures that convey grammatical information in the language. Experts said this is one of the most challenging aspects of learning ASL as a second language.

They will also develop an application that de-identifies the signer in video communications while preserving the essential linguistic information expressed non-manually. This will enable ASL users to have private conversations about sensitive topics. To do this, researchers will build four-dimensional (3D shape tracked over time) face-tracking algorithms that separate facial geometry, which reveals identity, from facial movement and expression, which carries the linguistic content.
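The release does not describe the team's implementation, but the core idea, factoring a tracked face into identity and expression components and then swapping out the identity, can be illustrated with a toy linear blendshape model. The sketch below is a minimal, hypothetical illustration in Python; the basis matrices, coefficient sizes, and function names are all invented for the example and are not the project's actual algorithms.

```python
import numpy as np

# Toy linear face model: each video frame's face mesh is a mean shape
# plus identity (geometry) and expression (movement) deformations.
# Real 4D face tracking would fit such a model to every frame; here we
# only simulate the separation-and-swap step that enables de-identification.
N_VERTS = 100                                          # vertices in the toy mesh
rng = np.random.default_rng(0)
mean_shape = np.zeros((N_VERTS, 3))
identity_basis = rng.normal(size=(10, N_VERTS, 3))     # identity directions (hypothetical)
expression_basis = rng.normal(size=(20, N_VERTS, 3))   # expression directions (hypothetical)

def reconstruct(id_coeffs, expr_coeffs):
    """Rebuild one frame's mesh from identity and expression coefficients."""
    return (mean_shape
            + np.tensordot(id_coeffs, identity_basis, axes=1)
            + np.tensordot(expr_coeffs, expression_basis, axes=1))

def deidentify(frames):
    """Replace the signer's identity geometry with a generic identity,
    keeping each frame's expression coefficients (the linguistic signal) intact."""
    generic_id = np.zeros(10)          # e.g., the average face
    return [reconstruct(generic_id, expr) for _, expr in frames]

# A "tracked video": per frame, (identity coeffs, expression coeffs).
# Identity stays constant across frames; expressions vary frame to frame.
signer_id = rng.normal(size=10)
video = [(signer_id, rng.normal(size=20)) for _ in range(5)]
anonymized = deidentify(video)
print(len(anonymized), anonymized[0].shape)   # 5 frames of (100, 3) meshes
```

Because only the identity coefficients are replaced, the grammatically meaningful eyebrow, mouth, and head movements encoded in the expression coefficients survive the anonymization, which is the property the quote below emphasizes.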

“This is a real problem for ASL users who seek private communication in their own language,” said Neidle. “Obscuring the face is not an option for hiding the signer’s identity, since critical linguistic information expressed non-manually would be lost.”

At RIT, Huenerfauth is a director of the Center for Accessibility and Inclusion Research (CAIR), where he and a team of student and faculty researchers are working to change the culture of development in an effort to make technology accessible for all. Students doing research in the CAIR Lab will be involved in the NSF Convergence Accelerator project.

Huenerfauth said that RIT will play an important role in the human-computer interaction aspects of the project.

“We’ll be conducting interviews and initial tests of prototypes (demos) of the software being considered, to gather requirements and preferences from users of the technology,” said Huenerfauth. “People using the technology would include students learning ASL, as well as fluent ASL signers who are deaf and hard-of-hearing.”

###

Source: https://www.eurekalert.org/pub_releases/2020-10/riot-nsf101620.php
