MeitY’s CERT-In trials anti-deepfake tech to combat AI-driven scams

Recently, Sunil Bharti Mittal, Chairman of Bharti Enterprises, shared a shocking account of an AI-driven scam that targeted his company.

By Imran Fazal | October 31, 2024, 9:02 am
In December 2023, CERT-In issued an advisory informing citizens about the rising use of AI and deepfake technology by scammers.

The Indian Computer Emergency Response Team (CERT-In), part of the Ministry of Electronics and Information Technology (MeitY), is currently testing anti-deepfake technology to combat the malicious use of Artificial Intelligence (AI) in scamming unsuspecting individuals. Recently, Sunil Bharti Mittal, Chairman of Bharti Enterprises, shared a shocking account of an AI-driven scam that targeted his company.

At the NVIDIA AI Summit in Mumbai, a source familiar with CERT-In’s efforts confirmed that the agency is actively testing the anti-deepfake technology. “The technology will not only detect deepfakes but will also support the prosecution of such offenders in a court of law,” the source said.

Referring to the Airtel Chairman’s experience, the source added, “The software can detect fake audio as well as video, which will aid in combating fake news that stirs public alarm. MeitY immediately takes down such videos in collaboration with social media intermediaries.”

One recent case involved SP Oswal, Chairman of Vardhman Group, who lost ₹7 crore after scammers, posing as government officials and using fake documents along with virtual settings, persuaded him to transfer funds.

The Election Commission of India has also expressed serious concern about deepfakes, with Chief Election Commissioner Rajiv Kumar warning of strict action against those who use Artificial Intelligence and deepfake content to spread misinformation about elections.

In December 2023, CERT-In issued an advisory informing citizens about the rising use of AI and deepfake technology by scammers. According to the advisory, scammers harvest videos, photos, and other personal details from social media profiles and other websites, then create convincing deepfakes that mimic the voices and faces of victims’ friends or family. These deepfakes are used in schemes such as fake kidnapping or ransom calls, creating a sense of urgency to coerce victims into sending money, often as gift cards or cryptocurrency.

In November 2023, Union Minister for Electronics and Information Technology Ashwini Vaishnaw announced that the government would begin drafting regulations specifically for deepfakes. After consultations with multiple stakeholders and platforms, Vaishnaw stated, “We will start drafting the regulations today, and very soon, we will have specific regulations for deepfakes.”

These forthcoming regulations are anticipated to include penalties for creating or sharing deepfakes and may also establish guidelines to help users identify deepfake content.

Additionally, Google has partnered with the Election Commission of India (ECI) to share crucial voting information via Google Search and YouTube. Google also announced its support for Shakti, the India Election Fact-Checking Collective, and joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member. C2PA is a global standards organization dedicated to certifying the authenticity of digital content.

Meta, in collaboration with the Misinformation Combat Alliance (MCA), launched a dedicated WhatsApp helpline to tackle AI-driven misinformation, particularly deepfakes. Furthering its consumer education efforts, Meta also rolled out initiatives like the ‘Know What’s Real’ campaign on WhatsApp and Instagram to educate users on recognizing and reporting suspicious content.
