Meta approved political ads inciting violence in India: Report

According to the report by India Civil Watch International and Ekō, an advertisement calling for the execution of an opposition leader was among those approved by the parent company of Instagram and Facebook.

By Storyboard18 | May 22, 2024, 2:01 pm
(Representative Image: Dima Solomin via Unsplash)

Meta, the parent company of Instagram and Facebook, approved a series of AI-manipulated political advertisements that spread misinformation and incited religious violence during the ongoing Lok Sabha elections in India, The Guardian reported, citing a report by India Civil Watch International (ICWI) and Ekō.

A Meta spokesperson told Storyboard18, "As part of our ads review process—which includes both automated and human reviews—we have several layers of analysis and detection, both before and after an ad goes live. Because the authors immediately deleted the ads in question we cannot comment on the claims made."

The report revealed that Facebook allowed anti-Muslim content on its platform, including slurs, "Hindu supremacist" language and disinformation about political leaders. One of the ads called for the execution of an opposition leader.

In November last year, Meta barred political campaigns and advertisers in other regulated industries in the US from using its new generative AI advertising products, denying them access to tools that lawmakers have warned could turbo-charge the spread of election misinformation.

However, according to the report, the ads approved by Meta during the Lok Sabha elections were placed in English, Hindi, Bengali, Gujarati and Kannada, and each featured manipulated images created using common AI tools such as Stable Diffusion, Midjourney and DALL-E. Between May 8 and 13, the research organisations received publishing approval for 14 self-submitted, highly inflammatory advertisements, meaning that Meta's automated checks failed to pick up on the posts' disturbing content.

The investigation, spanning the third and fourth phases of India's seven-phase election, targeted 189 contentious constituencies during the election's 48-hour "silence period." This period mandates a halt on all election-related advertising, in order to grant citizens some breathing room to make a considered voting decision. However, researchers found that Meta failed to enforce these restrictions, allowing the dissemination of harmful political advertising, mainly by the ruling BJP.

Political parties are investing substantial sums in advertising campaigns on social media platforms, utilising hyper-targeted digital tools. In fact, a significant chunk of Meta and Google’s revenues comes from political ads.

Another study conducted by ICWI, Ekō and The London Story, titled "Slanders, Lies and Incitement: India's million-dollar election meme network", found that two dozen "shadow advertisers" spent $1 million over the first three months of this year to amplify scores of memes, short videos and cartoons dehumanising minorities and opposition parties. This shadow advertising accounted for over 22 percent of all political ads; 36 of these ads had reached 65 million impressions each.

According to another report, published last year by the ADL, an anti-hate organisation, Meta has in the past accepted large sums of money for ads on hateful topics, including antisemitism and transphobia. In some cases, it accepted money for ads that violated its own hate speech policy.
