A video shared by President Donald Trump, edited to make it appear that presidential candidate Joe Biden was endorsing Trump’s re-election during a campaign rally Saturday, was labeled manipulated content by Twitter, a first for the social media company. Facebook, however, did nothing to flag the video as false content.
Biden had stumbled over some words, and the video stopped short of including his correction. It is the latest cheap fake to stir controversy in recent weeks.
University of Michigan School of Information Professor Clifford Lampe explains cheap fakes and the difficulty in getting the platforms to police them.
Deepfake, cheap fake, dumb fake—terms for misinformation. Can we start with definitions of these?
Lampe: We’ve certainly heard a lot of these terms recently. A deepfake is a special class of false information: you use machine learning or artificial intelligence to map a person’s facial features and then overlay one face onto another. So I could, for instance, essentially steal your face. The computer uses advanced computational tools to create very realistic fake content that makes it appear a person said something they wouldn’t normally say. That’s very different from a cheap fake or dumb fake, which are two names for the same thing. Those are where you just use common editing practices to make a video that’s misleading.
When it comes to deepfakes, we have some people working on technology to help spot them. What is happening to monitor cheap fakes?
Lampe: The groups that are most effective at monitoring these are the fact-checking organizations: FactCheck.org, PolitiFact and Snopes, all of which are working overtime to try to sort out what is real versus not real in the current media environment. A lot of this also falls to amateur detectives: if something looks too good to be true, or looks like it can’t possibly be true, they’ll go back, find the original and do side-by-side comparisons themselves.
But at this point, I think we’re entirely too dependent on these kinds of amateur sleuths in the media environment to determine what’s fake and what’s not. The platforms, Facebook and Twitter and groups like that, have mostly washed their hands of this. They’re not willing to take a strong role.
Some of these groups, like Snopes, offer solutions that depend on me going to their sites, and the platforms aren’t taking the fakes down. So it’s really pretty ineffective, isn’t it?
Lampe: Pretty much. In fact, a lot of my conservative friends, as an example, have given up on Snopes. They find that a lot of the fact-checking by those sites has a liberal-leaning bias. And a lot of people don’t seek out fact-checking, because the goal of these fake news stories and deepfake images and videos isn’t to inform. It’s to persuade and to create an emotional state.
One of the most important parts of disinformation, and misinformation more broadly, is that it’s not necessarily about the information itself.
This sort of thing is not new. It’s historical. The media environment always has been kind of a partisan mess.
There were all sorts of fake stories about Thomas Jefferson and his time in France, about him being too close to the French aristocracy and wanting to have a French king as part of the American system. His opponent John Adams said Jefferson had sold out to the British and was in league with the devil. Any story you can imagine was out there, which makes our current environment actually look kind of tame. The other time we’ve seen this level of hyperpartisan divide in the United States was right before the Civil War. The elections right before the war were also rife with lots and lots of really strong hyperpartisan news stories.
But with the rise of social media, it appears we’re back in an environment of hyperpartisan media production.
As you say, we came to this idea that news coverage should be objective. Can we get to that place where people say enough is enough? Do you think it will police itself eventually?
Lampe: I don’t think it will police itself, partially because it does attack identity and emotion. Social media were invented in an environment where they were intended to foster interpersonal relationships.
If you look at all the reaction buttons on Facebook, they’re Like and Love and Sad and laughing, and all of these are very emotional responses. It was not designed to be a civic debate platform, and it obviously is not a civil debate platform. Same with Twitter.
We don’t have any mechanisms in there to slow down thinking. Instead, we do exactly what you shouldn’t do when it comes to political deliberation: we trigger those emotional responses. Depending on the groups you follow, your own kind of homophilous social network, and the memes you get shown, it’s very easy to create an environment where you’re not exposed to alternative viewpoints, or, if you are, it is to a caricature of alternative viewpoints presented by people who think like you.
Let’s bring this down to the individual. What can I do to make myself savvy?
Lampe: I think it’s tempting to think that this is a special crew of people who are particularly susceptible to cheap fakes or deepfakes or false content. More broadly on the internet, research consistently shows us that everybody at some point shares bad information and gets activated by emotional content as opposed to rational content. If you feel especially happy or angry or anything else when you’re sharing content, at least double-check it: why are you having such a strong emotional reaction to the content you’re sharing?
Other things you can do are, of course, the common, tried-and-true media literacy practices. One is to check your sources: Is it a reputable source?
If it’s too good to be true, or if it confirms a bias or belief you hold and want to share with people, you should be a little suspicious of that as well. So always ask again: Why is this story being shared? And can I confirm it across multiple sources?
And then I think the other thing you can do is read the better news articles and better news sources on the side of the partisan divide that you don’t particularly subscribe to; have a broad ecology of news.