‘Cheapfakes’, not deepfakes, spread election lies in India

June 2, 2024 - 9:07 AM
Supporters of India's Prime Minister Narendra Modi wear masks of his face, as they attend an election campaign rally in Meerut, India, March 31, 2024. Modi is now seeking a third consecutive term in general elections. If, as widely expected, Modi wins the polls, which conclude on June 1 with vote-counting set for June 4, he will be only the second person after Indian independence hero and first prime minister Jawaharlal Nehru to serve three consecutive terms. (Reuters/Anushree Fadnavis)
  • AI-driven deepfakes still rare on Indian social media
  • Other forms of misleading videos, images more common
  • Fact-checkers say platforms’ new policies fall short

— India’s election was in full swing when hundreds of social media users shared a video that appeared to show Home Minister Amit Shah saying the ruling party wanted to scrap a quota system aimed at undoing centuries of caste discrimination.

The controversial comments caused a brief furore before fact-checkers stepped in and declared the video a fake that had been made using old footage that was doctored with the help of basic editing tools – a so-called cheapfake.

In the run-up to the ongoing election, the results of which are due on June 4, politicians and digital rights groups voiced concern that voters could be swayed by misinformation contained in AI-driven “deepfake” videos.

But fact-checkers say most of the falsified pictures and videos posted online during the six-week election have not been made using artificial intelligence (AI), instead relying on relatively cheap and simple techniques such as footage editing or mislabelling to present content in a misleading context.

“Maybe 1% of the content we have seen is AI-generated,” said Kiran Garimella, an assistant professor at Rutgers University who researches WhatsApp in India. “From what we can tell, it’s still a very small percentage of misinformation.”

Whether cheapfakes or deepfakes, the result can be equally convincing, fact-checkers say, putting the onus on social media companies to do more to root out all forms of misinformation being spread on their platforms.

“You can resurrect dead leaders using AI but people realize it’s propaganda… However, if you mislabel a video or clip it out of context, people are more likely to believe it,” said Pratik Sinha from Alt News, an Indian nonprofit fact-checking website.

“Rather than getting into the binary of deepfakes and cheapfakes, there is a need for finding a way to tackle misinformation more effectively,” Sinha told the Thomson Reuters Foundation.

Both Meta Platforms Inc, which owns Facebook and Instagram, and X, formerly Twitter, introduced new policies to crack down on different forms of misinformation in a big year for global elections, but fact-checking groups say the results have been disappointing.

Updated guidelines

Responding to criticism from its oversight board, Meta updated its guidelines in April to add prominent labels to all forms of misinformation. Meta’s earlier policy only applied to content altered or created using AI.

“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” Monika Bickert, the company’s vice president of content policy, wrote in a blog post last month.

Under the new approach, which took effect before the Indian election started on April 19, fact-checkers working with Meta review and rate posts – including ads, articles, photos, videos, reels and audio – on its social media networks, applying one of six labels to provide more information to users.

They can use the labels False, Partly False, Altered, Missing Context, Satire and True.

Sinha questioned the policy’s effectiveness in tackling false and misleading digitally manipulated posts over the election period.

“I’m not sure how effective Meta’s labelling has been,” he said, calling for the company to release data on its fact-checking programs.

An analysis by the Thomson Reuters Foundation found many fact-checked videos on Facebook had not been labelled correctly, or carried no warning label at all.

In one video doctored through editing, Prime Minister Narendra Modi appears to ask supporters to vote for a rival party. Rather than being labelled Altered, it is labelled Partly False – meaning it contains “some factual inaccuracies”.

X’s introduction in April of a new feature for Indian users to combat misinformation has also fallen short, said Karen Rebelo, deputy editor at fact-checking website Boom Live.

According to X, its Community Notes feature is designed to combat misinformation by inviting users from diverse backgrounds to contribute as note authors to set the record straight.

But Rebelo said different note authors often contradict each other, creating further confusion as no clear consensus arises on the veracity of the post in question.

“A lot of misinformation has notes on it but it’s not surfacing because other contributors don’t agree with it. X needs to find a way to work this out because otherwise it defeats the purpose of community notes,” she said.

The Thomson Reuters Foundation found a cheapfake video of Mallikarjun Kharge, president of the opposition Congress, could be found on X and had no notes on it despite being debunked by fact-checking websites.

In the mislabelled footage, viewed 43,000 times, Kharge appears to say his party would distribute Hindus’ wealth to minority Muslims.

Broader threat

Even when doctored videos have been labelled as fakes by social media platforms, they often still spread unabated on messaging apps such as WhatsApp, Garimella said.

“Forty percent of the viral content being forwarded has already been fact-checked many times, but that hasn’t stopped it from spreading because there is no moderation as such on the messaging app,” Garimella said.

“That tells us that people perhaps aren’t aware (it is fake),” he said, warning that without tough controls by platforms that would likely continue.

Ahead of the election, Meta, which owns WhatsApp, launched a fact-checking helpline on the app with the Misinformation Combat Alliance (MCA) to combat AI-generated misinformation in India.

Most content flagged to the helpline had been manipulated using simple methods, not AI-driven tools, said Pamposh Raina, head of the MCA’s Deepfake Analysis Unit.

But the alarm about deepfakes may have distracted platforms from the broader threat of misinformation, Sinha said.

“We’ve hardly seen any deepfake videos that spread misinformation … But (social media) put its money and resources into debunking deepfakes. It should have researched the market better,” he said.

—Reporting by Adnan Bhat; Editing by Helen Popper.