Marketing and legal experts caution that seeing is not believing with everything online

Two West Virginia University experts with extensive knowledge of deepfakes and AI-assisted technologies are sounding the alarm about their prevalence in daily life, at a time when headlines about potential AI-generated photos and videos persist and the public keeps asking what’s real and what’s not.

Deepfakes — digitally manipulated synthetic images or recordings made to show a person doing something they didn’t do, or to make a person sound like they said something they didn’t say — can be created by automated tools or by people with even a small amount of tech savvy.

Laurel Cook, an associate professor of marketing in the WVU John Chambers College of Business and Economics, and WVU College of Law lecturer and legal scholar Amy Cyphert weigh in on just how complicated the subject is.

Quotes:

“When navigating content, particularly online, it proves advantageous to adopt a perspective that treats deepfakes and other tech-altered information as akin to spam — similar to our approach with junk mail or pop-up ads.

“Common engagement-baiting approaches include using emotional appeals, especially those that incite anger, splitting the audience and being outlandish, such as using purposely incorrect pronunciation. Additionally, this type of content may include the theft or distortion of publicly available information. Such tactics are often combined with deepfake content designed to provoke an immediate response, usually in the form of word-of-mouth, which search engines and social media algorithms richly reward. This outcome may also negatively affect the bottom line for those who publish original content. When these tactics are used, healthy consumption of such content should include fact-checking and/or triangulating sources.

“Regularly exercising discernment is also an important part of this discussion. Currently, deepfakes and other tech-enabled content are error-prone in the visual representation of faces. Deepfakes can be perceived as credible overall, but small details in the eyes, skin and other parts of the face reveal flawed transformations. Looking at other static visual elements, like shadows and hair, and dynamic visual elements, like lip and eye movement, will help viewers determine whether content is natural and credible. By adopting these practices, we attenuate the effects of deceptive tactics and contribute to a more informed and resilient online community.” — Laurel Cook, associate professor, marketing, WVU John Chambers College of Business and Economics

“Deepfakes are very concerning because we tend to trust video and the sound of people’s voices as evidence sufficient to believe what we’re hearing or seeing to be real. That’s just not the case anymore. I’ve been worried about this for years and I’m really worried about it in terms of election interference.

“There is evidence AI and deepfakes will play a role in the upcoming presidential election. We already saw it happen during the New Hampshire primary, when a robocall mimicking President Biden’s voice urged voters to stay home and not go to the polls.

“Many people are trying to take action. Some states have already passed bills to regulate the use of AI and the creation of deepfakes. But, despite our best efforts, there will still be problems that regulation does not address. There are also potential constitutional issues to work through when we’re considering new laws on deepfakes. There are important First Amendment questions — is this art, is this parody? — and there is a subjective nature to those answers.

“Then you have to figure out who the law will target. Will we prosecute the person who made the deepfake? Can we find them, and are they subject to our jurisdiction? Do you also prosecute the people who helped spread the deepfake? What if they did not know it was a deepfake? Even with all these thorny questions, I think some regulation, if carefully crafted, could be better than none. Maybe we can’t stop people from making harmful deepfakes, but we can stop those deepfakes from going viral. It might require working with social media companies, though locking these things down can sometimes fuel their virality and public interest. It’s also about getting rid of the bad without getting rid of the good. This gets really complicated.

“Education on the topic is important, and there are some attempts at technological fixes, like watermarks that humans can’t necessarily see but that could serve as a kind of detector to establish an image or recording’s authenticity. You also can’t really teach people not to believe anything they see or hear, because in a world where anything can be a deepfake, we might accept nothing as truth, which is problematic in itself. I don’t think the only answer can be to teach people to be skeptical, because then they might be skeptical about everything, including reality.” — Amy Cyphert, lecturer in law, WVU College of Law, and director, WVU ASPIRE Office