Tech giants Meta Platforms, Snapchat, and TikTok have urged the Australian government to reconsider its decision to exempt YouTube from the country’s new social media law that bans users under the age of 16, reports Reuters.
The legislation, among the world's strictest regulations governing minors on social media, was passed in November and requires platforms to block underage users or face fines of up to A$49.5 million ($31 million).
While most major social media platforms must comply, YouTube is set to be excluded from the ban because it is classified as an educational tool. It is also the only platform children will be permitted to use under parental supervision through family accounts. Rival tech firms, however, argue that this exemption contradicts the law's intent.
Meta, the parent company of Facebook and Instagram, criticized the decision, stating that YouTube offers the same features that prompted the ban, including algorithmic content recommendations and social interactions, as well as exposure to potentially harmful content, the report added.
“YouTube’s exemption is at odds with the purported reasons for the law and we call on the government to ensure equal application of the law across all social media services,” Meta said in a blog post on Wednesday.
TikTok echoed the sentiment, calling the exemption "illogical, anticompetitive, and short-sighted" in a submission to the government, and urged authorities to enforce consistent regulations across all platforms.
Similarly, Snapchat emphasized the need for fairness, arguing that “no specific company should receive preferential treatment” and that “all services should be held to the same standard.”
Experts in mental health and extremism have also raised concerns about YouTube's role in exposing children to addictive and harmful content. Critics argue that the platform hosts content similar to that found on the platforms covered by the ban and should not be treated differently.
YouTube, for its part, has defended its moderation efforts, saying it is tightening its content policies and expanding its automated detection systems to flag harmful material.