Addressing copyright concerns and the rise of deepfakes, MeitY Secretary S Krishnan spoke on the policy frameworks tackling these challenges at the Storyboard18 DNPA Conclave 2025 in New Delhi on February 27.
There are two key aspects to consider, according to Krishnan. The first is deepfakes: India’s stance on AI balances innovation with safeguards against potential harms. The government is closely studying regulations in other jurisdictions, including the European Union, to determine appropriate measures. India already has legal frameworks under the IT Act and the Bharatiya Nyaya Sanhita (BNS) that address misrepresentation and misinformation.
The Indian Constitution’s Article 19(2) imposes reasonable restrictions on freedom of expression, including on grounds of the security of the state, public order, and relations with friendly states. Provisions such as Section 69A of the IT Act allow intervention in specific cases. In Krishnan’s view, current laws provide sufficient protections.
While discussing the regulation and policy approach, he said, “The advisory committee issued guidelines on March 15, 2024, addressing some concerns. If needed, the government may introduce stronger legislation, but this will be an extensive process involving stakeholder discussions. Our emphasis is on enabling AI-driven innovation while ensuring accountability through clear liability provisions. Global AI summits, such as those in the UK, Seoul, and Paris, highlight a shift toward AI safety and governance. India aligns with these discussions while prioritizing innovation.”
AI regulation also needs effective monitoring. In India, there are around 4,500 licensed news channels and 1,460 registered publications, but platforms like YouTube host over 150 million channels. Monitoring this vast content landscape is challenging: for every violator caught, hundreds go unnoticed and reappear under new names. Making AI regulation truly effective is the need of the hour.
Krishnan acknowledged that this is a valid concern. The challenge, he said, lies in enforcement, not just regulation. He suggested several approaches that can help: AI-driven monitoring enables automated detection of deepfakes and misinformation at scale, while public-private partnerships foster collaboration between platforms, regulatory bodies, and tech developers. Stronger accountability mechanisms, including clear deterrents and penalties, help prevent repeated violations. Additionally, raising user awareness through education empowers individuals to identify and report AI-generated misinformation. To ensure both innovation and accountability, regulations must evolve in step with technological advancements.
Krishnan emphasized that good-quality content naturally drives out bad-quality content, highlighting the importance of relying on curated, verified information that aligns with one’s needs.
In a marketplace of ideas, the goal is not necessarily to be perfect but to be effective. In a free society, content should be diverse and thought-provoking. Over the years, one thing has remained clear: it is instructive to observe what gets published in democratic nations like the United States, the United Kingdom, and much of Europe. Certain other jurisdictions produce different types of content, and the variations are revealing. Ultimately, the responsibility for ensuring high-quality content lies with news publishers.
Beyond quality, fair compensation is essential for the effort that goes into curating content. This is not a negative stance; rather, it recognizes the value of journalism and media. Certain partnerships, such as those in the UK, have reinforced this point, underscoring that content is both a subject matter and an industry requiring careful consideration, Krishnan explained.
Another challenge concerns restrictions on content. Some restrictions are necessary to uphold legitimacy and ethical journalism; however, when they exceed legitimate boundaries, they must be questioned. Over the past three years, enforcement mechanisms have proven effective, sometimes even exceeding expectations.
The sheer volume of content makes it impossible to scrutinize everything. Instead, attention naturally gravitates toward certain pieces, and concerns arise when necessary. Enforcement mechanisms often rely on complaints—if no one raises an issue, it may go unnoticed. That said, when media functions properly, it allows us to address concerns effectively. Ultimately, fostering high-quality journalism is both socially desirable and necessary for a well-informed society.