Big Tech is under the scanner yet again, this time for allegedly spreading misinformation. According to a report by Ekō and India Civil Watch International (ICWI), during the ongoing general election, WhatsApp, Instagram, and other social media platforms carried political ads containing misinformation, religious hate speech, and insidious content, increasing the risk of electoral manipulation.
As per the report, between May 8th and May 13th, Meta approved 14 highly inflammatory ads. The ads were placed in English, Hindi, Bengali, Gujarati, and Kannada.
“These ads called for violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India’s political landscape, and incited violence through Hindu supremacist narratives. One approved ad also contained messaging mimicking that of a recently doctored video of Home Minister Amit Shah threatening to remove affirmative action policies for oppressed caste groups, which has led to notices and arrests of BJP opposition party functionaries,” said the report.
Accompanying each ad text were manipulated images generated by AI image tools, demonstrating how quickly and easily this new technology can be deployed to amplify harmful content. Meta’s systems did not block the researchers from posting political and incendiary ads during the election ‘silence period’. Setting up the Facebook accounts was extremely simple, and the researchers were able to place the ads from outside India.
The report also said that five ads were rejected for violating Meta’s Community Standards policies on hate speech and on violence and incitement. An additional three ads were submitted and rejected on the grounds that they might qualify as social issue, electoral, or political ads, but not on the basis of hate speech, incitement to violence, or disinformation.
Meta requires accounts running political ads to get authorised first by confirming their identity and creating a disclaimer that lists who is paying for the ads.
However, a recent investigation by Ekō, ICWI, and Foundation The London Story revealed that far-right networks aligned with the BJP are not only failing to comply with the Election Commission’s regulations outlined in India’s Model Code of Conduct, but are also exploiting loopholes to avoid restrictions around these disclaimers, indicating a breach of Meta’s ad transparency policy.
In April 2024, Storyboard18 reported similar violations, when an audit by the Mozilla Foundation and CheckFirst revealed that Meta, TikTok, LinkedIn, Alphabet, X, and several others have shortcomings in their ad libraries, hindering transparency during the election season.
As per that report, not one of the ad transparency tools created by 11 of the world’s largest tech companies to aid watchdogs in monitoring advertising is operating as effectively as needed, leaving voters worldwide vulnerable to misinformation and manipulation.
The Mozilla Foundation and CheckFirst ran a stress test on the platforms, assessing whether the available ad repositories were ready for action. The results fell short of expectations.
Major findings of the audit included missing ads (ads visible in the user interface could not be found in the ad repository), accessibility issues, a lack of filtering and sorting options, and missing repositories for paid influencer or branded content.