Online Abuse: Tackling Vengeful Trolls


Above: Union minister Sushma Swaraj was trolled for taking measures to ensure that interfaith couples get their passports/Photo: UNI

As social media platforms fail to self-regulate, high-profile victims of abuse respond in varying ways, depending on the political support they receive against their harassers

By Venkatasubramanian

The incessant online trolling of External Affairs Minister Sushma Swaraj over the transfer of a passport officer who had allegedly humiliated an interfaith couple brought to the fore the absence of a legal regime to deal with cyber hate speech. Swaraj was targeted simply for her public display of concern in ensuring that the couple got their passports.

Swaraj did not complain to the police. Instead, after her initial gentle appeal to her Twitter followers to express dissent decently proved counter-productive, she blocked an online abuser who had stalked and challenged her. She also held an online poll to mobilise opinion against such abuse, but found to her dismay that 43 percent of respondents approved of it.

That the trolls are nurtured by her own party, and by the climate of intolerance prevailing in the country, is a source of concern. None of her ministerial colleagues came to her defence immediately, and the continued silence of Prime Minister Narendra Modi, himself an avid social media user, fuelled speculation about whether the campaign against her enjoyed proxy support.

Online hate speech was criminalised by Section 66A of the Information Technology Act, 2000, a provision inserted by a 2008 amendment. In 2015, the Supreme Court struck it down as unconstitutional in the Shreya Singhal case, finding it overbroad and vague and an unreasonable restriction on freedom of expression.

The centre, according to a report, is looking at amending Sections 153A and 505 of the Indian Penal Code (IPC) to include provisions specified under Section 66A. Section 153A punishes acts which promote enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and acts prejudicial to the maintenance of harmony. Section 505 punishes the making of statements conducing to public mischief.

In December 2015, the parliamentary standing committee on home affairs recommended changes to the IT Act in its 189th report. The report proposes that online hate speech and spoofing be dealt with separately through two new sections under the Act. Specifically, it recommends amending the IT Act to criminalise online content that promotes ill-will, hatred and enmity amongst communities, races, religions, etc., on the lines of Sections 153A and 153B of the IPC.

The report also advocates stricter penalties than those prescribed under Sections 153A and 153B. It recommends that a person who claims to have merely “innocently forwarded” such information be charged with the same offence as its originator.

Section 69A of the IT Act empowers the centre to direct the blocking of access to online information. Section 79 of the IT Act exempts intermediaries from liability for content, subject to certain conditions. This section, read with the Information Technology (Intermediaries Guidelines) Rules, 2011, creates a mechanism to ensure that online intermediaries take down “unlawful” content.

The Supreme Court has held that intermediaries are required to take down content only upon receiving actual knowledge in the form of a court order, or on being notified by the appropriate government or its agency, and not on the basis of user complaints.

Google’s Transparency Report indicates that it received 466 content removal requests from January to June 2017. The highest number, 116 (25 percent), related to defamation, and 10 (two percent) to hate speech. Facebook’s Government Requests Report indicates that 1,228 pieces of content were restricted between January and June 2017, the majority of it alleged to violate local laws relating to defamation of religion and hate speech.

Under the Intermediaries Guidelines issued by the government, if an intermediary fails to disable access to prohibited information upon “actual knowledge”, it loses the safe harbour protection. (This protection shields websites from legal liability for unlawful content posted by their users, so long as they promptly remove such content once notified.) The intermediary is then open to prosecution under the various laws criminalising hate speech.

Section 79 of the IT Act provides this “safe harbour” protection to online intermediaries. Intermediaries are absolved only if they function as platforms and not as speakers, that is, if they do not “initiate, select the receiver or modify” the information being transmitted. Additionally, intermediaries are required to observe “due diligence”, the standards for which are specified in the Intermediaries Guidelines.

In 2016, Facebook, Twitter, YouTube and Microsoft signed the European Union’s Code of Conduct on Countering Illegal Hate Speech Online. Under this code, these social media companies made “public commitments” to curtail hate speech on their platforms.

In 2017, Germany passed an anti-hate speech law, the Network Enforcement Act (NetzDG), which imposes fines on social media companies that fail to take down hate speech within a stipulated period. Companies affected by the law, like Facebook, have criticised it, arguing that platforms should not be tasked with state responsibilities. Conversely, the platforms have been criticised for not taking cultural sensitivities into account and for failing to “reflect the interests of individuals at risk” in their regulation policies.

Twitter’s general policy prohibits “hateful conduct” on its platform. This includes speech directed against a user on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. Examples of hateful conduct also include “behaviour that incites fear about a protected group” and repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone.

Twitter accepts reports of violations from users, though it states that some reports need to be verified by the person targeted. Depending on the severity of the violation and the user’s previous violations, Twitter may require the user to remove the content before being allowed to tweet again, or may suspend the account altogether. In December 2017, the Twitter Rules were updated to state that abuse or threats directed through a user’s profile information could lead to permanent suspension of accounts.

But how far have these intermediaries and platforms fulfilled their responsibilities and mandates? On July 5, the Mumbai police arrested a 36-year-old man in Gujarat for allegedly issuing a rape threat on Twitter against the 10-year-old daughter of Congress spokesperson Priyanka Chaturvedi. Following a directive from the Union home ministry, the police registered a case against the Twitter user under Section 509 of the IPC. Relevant sections of the IT Act and the Protection of Children from Sexual Offences Act were also invoked after Chaturvedi filed a complaint. It all began when a fake quote attributed to her on the Mandsaur rape case went viral, stating that she supported the accused in the heinous crime.

Whether the police action in this case will deter online trolls remains to be seen.

The author acknowledges the report, Hate Speech Laws in India, published by the Centre for Communication Governance, National Law University, Delhi, in April 2018, which informed the writing of this article.