Elon Musk’s X Corp. (formerly Twitter) has recently challenged the Central government before the High Court of Karnataka over its use of Section 79(3)(b) of the Information Technology Act, 2000 to issue takedown orders, a route the platform says bypasses the website blocking powers under Section 69A. X argues that the government is misusing the provision to issue content moderation and removal orders.
In its petition, filed on March 5, 2025, the social media platform describes ‘Sahyog’ as creating a parallel procedure for blocking content at scale and calls it an ‘illegal censorship portal’.
Section 69A is the principal provision governing online content blocking in India. It empowers the Centre to block content hosted on any digital platform on grounds that mirror the reasonable restrictions under Article 19(2) of the Constitution.
Additionally, Section 79 of the IT Act provides “safe harbour” protection to intermediaries (such as X), shielding them from liability for third-party content; however, Section 79(3)(b) states that intermediaries can be held liable if they fail to remove unlawful content after receiving actual knowledge or government notification.
In 2023, the Ministry of Electronics and Information Technology (MeitY) issued a directive allowing ministries, state governments, and the police to issue blocking orders under Section 79(3)(b). The following year, in 2024, the Ministry launched ‘Sahyog’, a portal that enables these authorities to issue and upload blocking orders, with the stated aim of creating a safer online space for Indian citizens.
While ‘Sahyog’ was launched with the stated aim of making the internet safer, experts tell Storyboard18 that the platform is being turned into a censorship apparatus.
X’s petition raises some extremely serious and valid concerns about the increasing opacity of the government’s digital regulation. Labelling the ‘Sahyog’ portal a censorship tool isn’t far-fetched when you closely examine how it operates and how it has been brought into the system without a clear legal mandate, experts remarked.
Sonam Chandwani, managing partner, KS Legal & Associates, points out that the fact that the portal seems to have been quietly introduced, with little to no public consultation or legislative backing, makes it problematic from both constitutional and procedural standpoints.
The portal essentially centralises the process of content takedown but without a clear legislative framework or statutory safeguards like those under Section 69A of the IT Act. Section 69A at least provides a limited structure; it requires a reasoned order, examination by a committee, and in some cases, a right to be heard. But ‘Sahyog’ functions as a digital backend for censorship where even these basic procedural safeguards appear to be absent or bypassed. This goes against the principles of natural justice and violates Articles 14 and 19(1)(a) of the Constitution, especially the latter, which guarantees freedom of speech and expression.
“The core issue here is that ‘Sahyog’ seems to be facilitating blocking orders without any visibility to the public or even the intermediaries sometimes.”
There is no official data publicly available on how many URLs or accounts have been blocked using ‘Sahyog’, and that very lack of disclosure reflects the opacity of the process.
“If intermediaries are compelled to act on takedown notices routed via ‘Sahyog’, without the usual procedural checks, it amounts to coercive enforcement that bypasses due process. Worse, it creates a chilling effect where platforms might over-comply to avoid government scrutiny, leading to self-censorship,” explains Chandwani.
In essence, according to her, this system creates an executive-controlled, backend censorship infrastructure with minimal legal accountability, which is deeply problematic in a democracy.
“Sahyog’s functioning, as alleged, gives affected users and intermediaries no chance to contest blocking directions, does not publish orders, and has no oversight mechanism.”
While the portal is stated to streamline content blocking and information requests between the government, law enforcement agencies and intermediaries, it is not part of the process stipulated under the IT Act and the rules framed under it for such actions, points out Ujval Mohan, manager – public policy, The Quantum Hub (TQH).
As a result, it is a separate and ad-hoc initiative that has recently been spotlighted in the context of judicial proceedings. It remains unclear whether content-blocking orders on this portal will follow the Section 69A process, he adds.
The X Files: Feud Over Grok
The whole issue stems from X’s AI chatbot, Grok 3, which has been under scrutiny for using Hindi slang and posting responses critical of the government. While the Centre has reportedly contacted the company, the underlying question is whether AI-generated content qualifies as ‘third-party’ content for safe harbour protection under Section 79.
Rohit Kumar, founding partner of the public policy firm TQH, shares that Grok appears to be designed with wit, humor, and a rebellious streak. It also offers an ‘unhinged’ mode for premium users, which can produce unpredictable and even outrageous responses.
However, the way Grok has been integrated into X introduces complexities for content moderation.
When users tag Grok in their public timelines, its responses are also public, potentially disseminating unfiltered and harmful content directly on X. Additionally, Grok may choose to respond to user queries by searching X public posts to provide “up-to-date information and insights.” This could mean that the harms posed by Grok-generated content can be very similar to those posed by user-generated content — misinformation, hateful speech, etc. These harms, according to Kumar, may get amplified if users believe AI responses to be credible – which many unfortunately do – potentially reinforcing or creating systemic biases.
“While it is true that AI responses ultimately depend on training models and datasets, context and use cases matter. The question of whether AI developers should be held accountable for their models is a difficult one. Developers and deployers must conduct due diligence and implement red-teaming efforts to prevent harmful outputs. However, if we become too eager to ascribe liability, we risk severely inhibiting innovation—no developer in their right mind would want to face the inevitable wave of lawsuits,” he explains.
The regulatory solution lies in striking a balance: mandating due diligence and process checks while ensuring better platform design to minimize misuse.
“The biggest issue in the Grok case is not its output but its integration with X, which allows direct publishing onto a social media platform where content can spread unchecked, potentially leading to real-world harm, such as a riot.”
This case spotlights important questions about how different government authorities should interact with social media networks, especially when a dedicated ministry like MeitY plays that central role.
“…blocking powers in general have to balance the legitimate governance interest in preventing the spread of harmful content with the constitutional guarantees of free expression and accountability,” adds Mohan.
Regulating AI is a complex challenge that regulators worldwide are grappling with.
While factors like innovation and foreign policy will ultimately shape AI governance, the Grok controversy is likely to influence India’s approach through the lens of online safety. Kumar highlights that AI-generated content must be factored into assessments of online risks, including spreading misinformation and creating hateful content.
Policing chatbots is challenging, and policymakers must carefully balance freedom of speech with concerns about narrative control and online safety.
“Over-regulation risks enabling censorship, while under-regulation can lead to real-world harms.”
Going forward, Indian law should land on a clear, balanced and practical process for content blocking where it is indeed necessary.