Artificial intelligence is quickly becoming part of our social media world on our cellphones and computers. Text, images, audio and video are becoming easier for anyone to create using new generative AI tools.
As AI-generated materials become more pervasive, it’s getting harder to tell the difference between what is real and what has been manipulated.
“It’s one of the challenges over the next decade,” said Kristian Hammond, a professor of computer science who focuses on artificial intelligence at Northwestern University.
AI-generated content is making its way into movies, TV shows and social media on Facebook, TikTok, Snapchat and other platforms.
AI has been used to change images of former President Donald Trump and Pope Francis. The winner of a prestigious international photo competition this year used AI to create a fake photo.
Victor Lee, who specializes in AI as an associate professor in the Graduate School of Education at Stanford University, said people need to exercise caution when looking at AI-generated materials.
Whether it’s text, video, images or audio, generative AI is producing material that looks like actual news or a real person but isn’t true, Lee said.
AI is also being used to create songs that sound like popular musical artists and to replicate images of actors.
Recently, an anonymous person on TikTok used artificial intelligence to create a song with a beat, lyrics and voices that fooled many people into believing it was a recording by pop stars Drake and The Weeknd.
Among the demands of television and film actors and writers currently on strike in the U.S. are protections against the use of AI, which has advanced to the point of replicating faces, bodies and voices in movies and on TV.
“I think the Avatar movies have been so successful because people were able to identify with the animation of the simulated characters,” said Bernie Luskin, director of the Luskin community college leadership initiative at the University of California, Los Angeles.
Luskin, who does research on media psychology, thinks that as the use of AI becomes a worldwide phenomenon, it will affect people psychologically and influence their behavior.
“It’s definitely going to have a dramatic impact on social media,” he said. “As AI becomes more common, it will become increasingly deceptive, and abusers will abuse it.”
On a positive note, Hammond said AI will open up new artistic possibilities.
“We’re going to have a new view of what it means to be creative,” he said, “and there will be a different kind of appreciation because the AI systems are generating things in partnership with a human.”
A major concern, however, is that people are already being duped by AI, and as the technology becomes even more sophisticated, it will be even more difficult to discern its imprint.
Krishnan Vasudevan, an assistant professor of visual communication at the University of Maryland, worries that people may become desensitized to AI-generated materials and stop caring whether they are real.
“They’ll be wanting visuals that reinforce their viewpoints, and they’ll use the tool as a way to discredit or make fun of political opponents,” he said.
Experts say norms, regulations and guardrails must be considered to keep AI in line.
“Does AI receive credit as a co-author?” Lee asked.
“I think there will be legal battles about using somebody’s voice or likeness,” Vasudevan said.
“We have to start looking hard at exactly what is going out there,” said Hammond. “For example, there should be regulations that say your image should not be associated with anything pornographic.”
Lee said artificial intelligence will create big changes the public will get used to, much like the Internet and social media have done.
“The Internet is not inherently a good or bad thing, but it changed society,” he said. “AI is also not good or bad, and it is going to do something similar.”