When X/Twitter users asked the artificial intelligence chatbot Grok to “put her in red lingerie,” it quickly obliged. The chatbot took a photo of pop star Sabrina Carpenter that had been posted by a pop culture page, removed her winter coat and showed her in lingerie — an image Carpenter never posed for.
According to an article by Rolling Stone, Grok creates at least one nonconsensual sexualized image every minute. The incident illustrates how easily AI tools can generate or manipulate images convincingly. As the technology has advanced, it has produced a form of synthetic media known as deepfakes.
Researchers Saiyad Mahammadakram Hanif and Vivek Dave of Parul University in India conducted a study on deepfake technology. They explained that deepfakes — a combination of “deep learning” and “fake content” — are hyper-realistic videos or images carefully manipulated to portray an individual saying or doing something.
Hanif and Dave also explained in their research that the technology relies on neural networks, a type of AI system designed to work similarly to the human brain. A neural network is made up of many small processing units, called nodes or neurons, that are connected together and pass information to one another. The networks then analyze large amounts of data to figure out how to copy a person's looks, body movements, quirks and voice.
Eli Adams, a cybersecurity student at Ogeechee Technical College, said random access memory, or RAM, plays a major role in how fast AI systems can create content.
“RAM is essentially the deciding factor of how fast your PC runs because it takes temporary memory,” Adams said. “The more RAM you have, the faster your computer can run, the better the AI is going to look.”
The technology has been around for years, but according to an article from the Massachusetts Institute of Technology, the term “deepfake” was first coined by a Reddit user in 2017.
Dr. Angela Misri, a journalism professor at Toronto Metropolitan University, does not think the media and newsrooms are fully prepared to identify manipulated media.
“Newsrooms are working on shoe-string budgets with diminishing budgets and numbers of actual subscribers,” Misri said. “This is just an extra hurdle journalists must somehow incorporate into their workflows.”
A few major organizations, such as the Investigative Bureau in Canada, the Reuters fact check team and BBC Verify, have dedicated resources to identifying fake content. However, Misri said this cannot be extended to smaller newsrooms, once again due to budget constraints.
One of the biggest problems with deepfakes is that they can spread misinformation that could potentially damage reputations or create confusion.
“We need to focus on situations where deepfakes are causing harm, like nudifying AI tools or politically motivated fakery meant to incite the public,” Misri said.
Misri also compared the current rise in deepfake usage to the early days of the internet and the development of credibility issues.
“We went from everyone consulting the same books in the library . . . to an ever-growing list of content we will never live long enough to access,” Misri said. “One person’s lived opinion equals fact if they have enough followers nowadays.”
The challenges with AI and deepfakes go beyond journalism and the media. Law enforcement agencies are also grappling with the evolving technology.
According to the United Nations Educational, Scientific, and Cultural Organization, fraud experts on average encountered voice deepfakes in 37% of their investigations and video deepfakes in 29% in 2024.
Rick Kelley, the sheriff of White County, Georgia, sees this trend firsthand.
“Over the last five years, the amount of scams and fraud cases have gone through the roof,” Kelley said. “There’s less of the typical property crimes like theft and burglary, and more identity fraud cases.”
Kelley also said that these crimes are difficult to investigate because it usually takes law enforcement up to three years to catch up with technological advances.
Though the rise of deepfakes and AI can cause embarrassing and invasive experiences for individuals, the technology is also reshaping policymaking and journalistic fact-checking.
Misri said the core responsibility of journalists and media remains the same: a dedication to providing the public with the truth. She quoted journalist Jonathan Foster to reiterate her point.
“If someone says it’s raining and another person says it’s dry, it’s not your job to quote them both,” Misri said. “Your job is to look out the f*** window and find out which is true.”
From journalists verifying sources and content to law enforcement investigations, this technology is changing how we see reality. Adams said advances in deepfake technology will likely continue over the next couple of years. That leaves journalists and investigators with the ongoing responsibility to question digital media and find the truth.
