
Time to be Accountable

The IT Intermediaries Guidelines (Amendment) Rules, 2018, will push for “traceability” of content, but intermediaries are averse to this, citing privacy and freedom of expression concerns. By Na Vijayashankar

In December 2018, the central government proposed an amendment to the Intermediary Guidelines under Section 79 of the Information Technology Act, 2000 (ITA 2000). This was neither a new Act nor a new rule; it was only a proposed amendment to an existing rule, placed for public comments.

However, it was challenged as unconstitutional by some activists and referred to the Supreme Court. The government is now expected to present a new version of the rule in the Supreme Court, and the industry lobby is already mounting pressure on the Centre to bend the rules to its advantage.

Section 79 and the rules framed under it are meant to bring accountability to intermediaries to prevent certain crimes, such as defamation, spreading hatred and disharmony, and inciting violence, committed through information posted on websites, blogs and messaging platforms. The role of intermediaries in fuelling such crimes, and in assisting law enforcement agencies to detect and bring the perpetrators to book, is undisputed. However, these business entities are averse to accepting any responsibility for preventing their platforms from being used to spread fake news that disturbs the community or to serve as a tool for anti-social elements.

An internet intermediary, incidentally, provides services that enable people to use the internet. Intermediaries include network operators; network infrastructure providers such as Cisco, Huawei and Ericsson; internet access providers; internet service providers; hosting providers; and social networks such as Facebook, Twitter and LinkedIn.

The use of fake videos and Artificial Intelligence (AI)-based content for posting malicious material has made the problem more acute since the amendment was first proposed. Two of the most contentious aspects of the proposed amendments are that the intermediary is required to trace the originator of a message that flows through his platform and that he should deploy technology-based automated tools for proactively identifying, removing or disabling public access to unlawful information.

Objections have been raised on the grounds that the intended measures are “technically infeasible”, infringe on “privacy” and put restrictions on “freedom of expression”. Given the propensity of courts to react favourably whenever activists quote Articles 21 and 19 of the Constitution, the industry lobby expects a climbdown from the government. After all, the government had buckled under its pressure when it diluted data sovereignty principles in the draft Personal Data Protection Bill by dropping “data localization”.

The challenge before the Court is now two-fold. The first is to recognise that the excuses based on technical infeasibility are false and that such measures are already being used by the industry to comply with other international laws such as the General Data Protection Regulation (GDPR). The second is that “national security” is as much a duty of the government and a fundamental right of citizens as the protection of the privacy or freedom of expression of certain other individuals. The law should not allow disruption in the lives of innocent persons while protecting the rights to privacy and freedom of expression of some activists.

At present, most large intermediaries do scan the messages that pass through their services to identify the nature of the content so that appropriate advertisements can be displayed when the receiver reads the message. Most leading companies, including Facebook, also use AI to read messages and profile users. Hosted content is likewise moderated and scanned for malicious code as part of information security measures. Hence, the claim that it is impossible to perform a reasonably effective check and flag objectionable content is not acceptable, particularly in the case of large intermediaries like Google and Facebook. As regards the proactive removal of “unlawful” content, this involves the judgment of intermediaries. However, if they are ready to proactively identify potentially objectionable content, the government can always suggest a mechanism for reviewing the tagged content and having it moderated.

Most data managing companies undertake a similar “discovery” exercise when complying with laws such as the GDPR. There is no reason why they should not apply similar “data discovery” tools to identify offensive content and flag it for manual supervision. The technology is available and is being used by the very companies that are resisting the government’s request. The Court should reject such claims; their bluff needs to be called out.
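To illustrate, here is a minimal sketch in Python of such a flag-for-review workflow. The watch-list, names and matching rule are assumptions made purely for illustration; production systems rely on trained classifiers rather than keyword lists.

```python
# Minimal sketch of a "data discovery" flagging pipeline, as described above.
# The watch-list and keyword matching are illustrative assumptions only;
# real platforms use trained classifiers, not keyword lists.

FLAG_TERMS = ("incite", "riot", "attack")  # hypothetical watch-list


def flag_for_review(post_id: str, text: str):
    """Return a moderation ticket if the post matches the watch-list, else None."""
    hits = [term for term in FLAG_TERMS if term in text.lower()]
    if not hits:
        return None
    # Nothing is removed automatically; the post is only queued for human
    # review, matching the review-and-moderate mechanism suggested above.
    return {"post_id": post_id, "matched_terms": hits, "status": "pending_review"}


print(flag_for_review("p123", "A call to incite a riot tonight"))
# {'post_id': 'p123', 'matched_terms': ['incite', 'riot'], 'status': 'pending_review'}
```

The point of the sketch is the workflow, not the matching: flagged content goes to manual supervision, never to automatic removal.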

We may also note that the Personal Data Protection Bill, which is expected to become law soon, has brought in a provision whereby social media intermediaries have to provide users an option to get themselves “verified”, with the verification visibly displayed on the account.

In other words, it will be mandatory for social media companies to identify the owner of a message and thereby make him accountable. In the case of WhatsApp, it must be mentioned that what is required is not “reading of the message”, which is objected to on “privacy” grounds as the information may be encrypted, but only identifying the origin of a message. This can be achieved technically by tweaking the header information of the message to incorporate a checksum identity of the message, which can be recognised at the server whenever the message is forwarded.
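A minimal sketch of this header-based traceability idea follows. The field names and flow are assumptions for illustration; this is not WhatsApp’s actual protocol, only a demonstration that origin tracing does not require reading the message.

```python
import hashlib

# Sketch of header-based traceability: a checksum of the (still encrypted)
# message body travels with every forward, so the server can match forwards
# to the first upload without ever decrypting the content.
# All names here are hypothetical, not any platform's real protocol.


def message_fingerprint(ciphertext: bytes) -> str:
    """Checksum identity of the encrypted message body."""
    return hashlib.sha256(ciphertext).hexdigest()


origin_registry = {}  # fingerprint -> first sender the server saw


def relay_message(sender: str, ciphertext: bytes) -> dict:
    """Server-side relay: record the first sender without reading the content."""
    fp = message_fingerprint(ciphertext)
    origin_registry.setdefault(fp, sender)  # only the first upload sets the origin
    return {"header": {"fingerprint": fp}, "body": ciphertext}


# Forwards carry the same encrypted body, hence the same fingerprint,
# so the server can name the originator without decrypting anything.
original = relay_message("user_a", b"...opaque encrypted bytes...")
forwarded = relay_message("user_b", original["body"])
print(origin_registry[forwarded["header"]["fingerprint"]])  # user_a
```

Note the design choice: the server never needs the plaintext; it only compares fingerprints of ciphertext it already relays, which is why this approach is argued to be compatible with end-to-end encryption.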

In view of the above, the technical infeasibility objection to tracing the origin of a message is unsustainable in the current age of AI-driven technology. These are false excuses.

However, while issuing the new guidelines, the government may have to recognise that the Supreme Court has expressed some views on Section 79 in Google India Private Limited vs Visakha Industries, and the proposed amendment has to be compatible with the views expressed therein. This case involved a complaint of defamation and the non-removal of the content by Google when demanded. It also opened a discussion on the concept of “due diligence” under Section 79 as it stood in ITA 2000 and as amended in 2008, with effect from October 27, 2009.

The final outcome of the judgment turned more on the applicability of the law with reference to the date of the incident. But in the course of the judgment, some important principles of international jurisdiction and the scope of “due diligence” emerged; these are relevant in analysing the proposed intermediary guidelines. It may be noted that the original version of Section 79 required “due diligence” to be exercised to “prevent the commission of offence”. Due diligence under the old Section 79 had not been elaborated through any notified rules and was hence an open-ended responsibility.

In the case of the amended Section 79, which is applicable now, the law requires that “the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf”. The duty therefore extends beyond “prevention” at the point the data enters the intermediary’s control, to monitoring throughout the data’s lifecycle.

Additionally, the concept of “due diligence” was detailed in the intermediary guidelines notified on April 11, 2011, which it is now proposed to replace with an amended version. The Court recognised that the amended Section 79 provided protection from liability not only in respect of offences under ITA 2000 but under other laws as well, which the industry welcomed as an expansion of the safe harbour provisions.

At the same time, we need to observe that the scope of Section 79 has expanded significantly in terms of how the government may exercise its regulatory powers and also the level of control that the intermediary is expected to implement as part of the compliance requirements.

In view of the vindication of the current version of Section 79 in the Visakha judgment and the unsustainability of the technical infeasibility objections raised by the intermediaries, they seem to have no option but to accept the accountability that the amended guidelines prescribe. The challenge mounted in the Supreme Court may, therefore, end up only with a clarification of the procedures related to content removal.

However, the Court could suggest a standard measure to ensure that, in the period between the victim noticing the harm and bringing it to the knowledge of the intermediary and a court coming to a decision, the victim gets some interim relief that is fair to both parties. Hence, if an intermediary receives a notice for removal, it should, pending an order from a court, exercise caution to prevent continuation of the alleged damage. Ignoring knowledge of alleged damage would be neither legally wise nor ethically justifiable.

In such cases, the content may remain online, but it should be flagged as “reported objectionable vide notice received from ….”, with a hyperlink to a copy of the notice. The flag may be removed after a reasonable period, such as 90 days, if no court order is received.
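A minimal sketch of how such an interim flag could behave follows, assuming hypothetical field names and the 90-day window suggested above:

```python
from datetime import date, timedelta

# Sketch of the interim flag proposed above: content stays up but carries a
# visible notice, which lapses after 90 days if no court order arrives.
# The function and field names are illustrative assumptions only.

FLAG_VALIDITY = timedelta(days=90)


def flag_banner(notice_url: str, notice_date: date, today: date,
                court_order_received: bool):
    """Return the banner to show alongside the content, or None if no flag applies."""
    if court_order_received:
        return None  # the court's decision now governs the content
    if today - notice_date > FLAG_VALIDITY:
        return None  # the flag lapses after the reasonable period
    return ("Reported objectionable vide notice received "
            f"(copy of notice: {notice_url})")


print(flag_banner("https://example.org/notice-copy", date(2020, 1, 1),
                  date(2020, 2, 1), court_order_received=False))
```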

This measure will ensure that the delay in obtaining court orders does not continue to harm the victim to the same extent as it otherwise would. If such a measure is not available, every complainant will seek relief in the form of an interim order to block the content.

If such a request is agreed to by the trial court, the content remains blocked until the case is settled, which may take years. It would be good if the suggested procedure of dispute management is included as part of the intermediary guidelines.

Lead Illustration: Anthony Lawrence

The writer is a cyber law and techno-legal information security consultant based in Bengaluru
