Even as hoax bomb threats to airlines continue, the Indian government’s advisory to social media platforms to voluntarily remove “misinformation” related to national security will test free speech and the legal immunity that intermediaries enjoy in such situations.
The Ministry of Electronics and Information Technology (Meity)’s advisory on 25 October urged intermediaries “to promptly remove such misinformation that affects public order and security of the state.” A copy of it was reviewed by Mint. At the heart of the concerns is the “judgement call” the advisory urges social media intermediaries to make in removing such content. Social media platforms have never before had to define on their own what constitutes national security.
Safe harbour protection, which provides legal immunity to social media intermediaries such as Meta Platforms’ Instagram and Google’s YouTube, may also come under threat. Meity’s advisory underlined that the protection, under Section 79 of the Information Technology Act, 2000, “shall not apply, if such intermediaries do not follow the due diligence obligations as prescribed under the Act… or abetted or aided in the commission of unlawful acts.”
Social media platforms, the advisory said, “shall be liable for consequential action as provided under any law”.
The advisory was issued amid a series of hoax bomb threats to Indian airlines, which have received 300 such warnings in three weeks.
Meta Platforms and Google’s YouTube did not immediately respond to queries on their stance on the matter.
Isha Suri, research lead for public policy at policy think tank Centre for Internet and Society (CIS), said the government’s intent with this advisory, and the role it can play, is unclear.
"Such an order can have a harmful impact on free speech because private entities will over-comply to err on the side of caution. More importantly, the greater ambiguity is around what the Centre is trying to achieve here—private entities cannot have the sovereign power of deciding whether a particular post made on its platform could lead to a potential threat to public order and the State—which is very wide in its scope,” Suri said. “There is also no clarity about how private entities are meant to ascertain the veracity of a particular post.”
A senior executive at one of the biggest intermediary platforms, who spoke on the condition of anonymity since the advisory and the issue are both under scrutiny, said, “We have an established enforcement mechanism where we take down content under national security interest as per government interest. But the reverse mechanism does not exist right now as a coordinated effort—even as our content moderation team undertakes multiple checks in sub-clauses such as hate speech, violence, misinformation and more. While the bombing threats cover a bit of all, we’re yet to understand the correct effective mechanism to enforce checks on such an issue.”
The advisory fundamentally differs from previous ones issued on the use of artificial intelligence in that it does not prescribe penalties for non-compliance. It could, however, lay the groundwork for a framework to tackle issues such as bomb threats.
The advisory is best read “in spirit of a reminder of an intermediary’s key responsibilities when it comes to content moderation and compliance”, said Kazim Rizvi, founder of policy think tank The Dialogue. “The key factor here is for intermediaries to be vigilant on any instance of misinformation or related issues on their platforms—the advisory itself does not impose threats to safe harbour protection.”
Yet, it could set a precedent for any upcoming reporting framework for Big Tech in India.
“Under rule 3(1)(d) of IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, issues of national security involve a special clause of compliance,” said Ameen Jauhar, a technology policy lawyer. “This provision requires an order from a competent agency of the government following which social media intermediaries must timely remove or disable access to information that threatens national security.”
“However, per the advisory, if social media platforms are to proactively or voluntarily remove information as hoax or unlawful, that would be an unprecedented shifting of sovereign powers and functions of investigation to a private entity—which is highly suspect and potentially dangerous,” Jauhar said.