Artificial intelligence, heralded for its transformative impact on productivity and operational efficiency, is now being leveraged by cybercriminals to build increasingly sophisticated scams. The same tools praised for streamlining processes and reducing human error are, in the wrong hands, raising alarms across the cybersecurity community.
A particularly concerning case emerged recently from Jamtara district in Jharkhand, where a group of fraudsters exploited AI to develop counterfeit mobile applications mimicking government schemes. These fraudulent apps, including ones impersonating PM Kisan Yojna and PM Fasal Bima Yojna, ultimately led to losses totaling Rs 11 crore. Authorities report that the cybercriminals used AI tools, including ChatGPT, to create malicious software disguised as legitimate apps.
Ritesh Bhatia, a cybercrime investigator and cybersecurity consultant, highlighted how AI has made it easier for scammers to craft fake applications with minimal technical expertise. “Tools like ChatGPT and GitHub Copilot have enabled cybercriminals to generate malicious code with startling efficiency,” Bhatia explained.
The proliferation of fake apps is not confined to unofficial marketplaces. Fraudulent apps have found their way onto official platforms like the Google Play Store and Apple’s App Store, occasionally slipping past review processes, and they spread even more freely through third-party sites with weaker security vetting. Once downloaded, the malicious software can compromise users’ phones, steal sensitive information such as banking details and one-time passwords (OTPs), and facilitate unauthorized transactions.
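On Android, the permission abuse behind OTP theft is visible to ordinary tooling, so a cautious user or developer can audit for it. The Kotlin sketch below is our own illustration, not drawn from any of the reports cited in this article: it lists installed apps that request the SMS-receiving permission, the channel OTP-stealing malware typically relies on. On Android 11 and later, the QUERY_ALL_PACKAGES permission is needed for the list to be complete.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.util.Log

// Illustrative sketch: flag installed apps that request RECEIVE_SMS,
// the permission OTP-stealing malware typically abuses. Holding the
// permission is not proof of malice -- messaging apps legitimately
// need it -- but unfamiliar apps on this list deserve scrutiny.
// Note: on Android 11+ the caller needs QUERY_ALL_PACKAGES to see
// every installed package.
fun auditSmsPermissions(context: Context) {
    val pm = context.packageManager
    val packages = pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
    for (pkg in packages) {
        val requested = pkg.requestedPermissions ?: continue
        if (Manifest.permission.RECEIVE_SMS in requested) {
            Log.w("OtpAudit", "${pkg.packageName} can receive SMS messages")
        }
    }
}
```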
The extent of this threat has been starkly illustrated by recent figures: in 2022, Google banned more than 173,000 developer accounts linked to the distribution of harmful apps. In response, Google has introduced a “Government” badge on the Play Store to help users identify authentic government apps; official apps such as DigiLocker and mAadhaar now display this certification.
However, cyber experts warn that despite these efforts, the rapidly evolving tactics of fraudsters remain a serious challenge. A report from Zscaler ThreatLabz reveals that India has surpassed the United States and Canada as the most targeted country for mobile malware attacks, with over 200 malicious apps identified on the Play Store in 2024. These apps have been downloaded more than 8 million times, highlighting the scale of the problem.
Traditional security measures, such as VPNs and firewalls, are increasingly ineffective against these fast-moving threats, experts say. “AI amplifies the threat by automating code generation, creating realistic interfaces, and even personalizing phishing attacks,” said Pratik Shah, Managing Director for India and SAARC at F5. As attacks grow more sophisticated, Bhatia points out that fraudsters now use platforms like WhatsApp, Telegram, and social media ads to spread links to these fake apps, further complicating the fight against them.
In light of these growing concerns, calls for stronger regulatory measures are intensifying. Zerodha co-founder Nikhil Kamath recently emphasized the urgency of the issue, suggesting multi-layered app store policies and real-time AI detection to help users distinguish legitimate services from fraudulent ones. The Reserve Bank of India (RBI) has already introduced an exclusive “bank.in” internet domain for Indian banks to bolster cybersecurity and combat payment fraud.
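The value of an exclusive domain is that it reduces trust to a mechanical check: a banking page whose hostname does not sit under bank.in can be treated as suspect. A minimal Kotlin sketch of that check follows; the helper name and example URLs are illustrative, not RBI’s.

```kotlin
import java.net.URI

// Illustrative sketch: verify that a URL's hostname belongs to the
// exclusive "bank.in" domain reserved for Indian banks. A hostname
// that merely contains "bank.in" (e.g. "bank.in.phish.example")
// must fail the check.
fun isOnBankInDomain(url: String): Boolean {
    val host = runCatching { URI(url).host }.getOrNull() ?: return false
    return host == "bank.in" || host.endsWith(".bank.in")
}

fun main() {
    // Hypothetical URLs for demonstration only.
    println(isOnBankInDomain("https://example.bank.in/login"))   // true
    println(isOnBankInDomain("https://bank.in.phish.example"))   // false
}
```

The same suffix test can sit inside a browser extension, an SMS-link scanner, or a corporate proxy, which is precisely the kind of layered, automated defense the experts quoted below are calling for.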
As cyber threats become more advanced, experts like Suvabrata Sinha, CISO-in-residence at Zscaler India, emphasize the need for a proactive, multi-layered approach to cybersecurity. “Fighting AI-powered attacks requires a defense strategy that uses AI against itself,” Sinha said. “Advanced threat intelligence and real-time monitoring are critical to safeguarding both individual users and critical infrastructure in an increasingly interconnected world.” Experts agree that it will take a concerted effort from government, tech companies, and users to stay ahead of the evolving threat.