Thursday, February 20, 2025

The Privacy Paradox: AI, Surveillance, and the Legal Dilemma of the Digital Age

As AI accelerates at an unprecedented pace, fundamental questions about privacy, data ownership and legal accountability remain unanswered. With global powers divided on regulation, the fight to protect individual rights in the age of algorithms has never been more urgent.

By Dilip Bobb

French President Emmanuel Macron used artificial intelligence to showcase his lighter side, featuring in AI-generated videos the day before co-chairing the AI Summit in Paris last week. In a series of clips posted on Instagram, Macron was seen dancing, starring in a spy comedy film and even rapping. Goofy, entertaining and irreverent, it was, nonetheless, an example of deepfakes, one of the darker aspects of the AI revolution, often used to scam people. Also on display was the disagreement over the use, or misuse, of AI, with both America and the UK refusing to sign the declaration on “inclusive and sustainable” artificial intelligence at the Summit, a major setback for a concerted approach to developing and regulating the technology. This comes amid an unpredictable global scramble to develop AI. US President Donald Trump had already revoked former President Joe Biden’s executive order on AI guardrails and is replacing it with his own policy, designed to maintain America’s global leadership by reducing regulatory barriers.

We are in a hugely transformative digital age which AI has taken to another, uncertain level, raising crucial legal and intellectual property issues. Who owns AI-generated works or inventions that impinge on individual privacy rights? Who should be liable when creativity and innovation generated by AI intrude upon others’ rights or other legal provisions? AI is built on large language models which generate responses to a user’s prompt. How does one hold a machine responsible for invasion of privacy, bias or misuse? At the heart of the debate on AI is the lack of algorithmic transparency, which is at the forefront of legal discussions on AI and its impact on privacy. The question of the legal responsibility of autonomous machines rests on the argument that “autonomous machines cannot be granted the status of legal beings”. One prime example is a wrongful medical diagnosis by an AI software programme. AI algorithms are now so all-pervasive that even Siri, Alexa and Amazon know exactly what time you wake up, what your favourite food is and even the brand of toothpaste you use. This data trawling and data acquisition is at the heart of the raging debate over the digital age and its abuse of privacy rights.

The Delhi High Court recently asked Google to immediately take down videos referring to the health and well-being of Aaradhya Bachchan, daughter of Abhishek Bachchan and Aishwarya Rai Bachchan. These videos wrongfully claimed that Aaradhya was in critical health, and though a complaint was filed with Google, they were not immediately taken down. Referring to the publishers/uploaders of the videos as morbidly perverse persons, the Court directed Google to take the videos down immediately. It also asked Google to take down any other videos of a like nature brought to its attention by the petitioner.

Like most intermediaries, Google argued in the case that it had no control over the videos and that unless videos fall within particular categories such as rape or obscenity, it does not proactively take them down. The Court stated that such a response was unacceptable and granted relief to the petitioner. The case was primarily argued on Rules 3 and 4 of the 2021 Intermediary Guidelines, which require intermediaries like Google to take down content expeditiously on complaints relating to harm to children, privacy, copyright infringement and defamation, among others.

The General Data Protection Regulation and its compliance requirements for data retention state that an individual’s “personal data must not be retained by any entity for longer than necessary for the purposes for which the personal data is processed”. The principle has been violated in many cases, such as the recent Apple iOS update where deleted photos of users resurfaced, indicating retention of data in contravention of a user’s actions. Data breaches can be devastating, causing financial loss, reputational damage and possible legal suits over violation of one’s privacy and personality rights.

Several high-profile cases in recent times illustrate the severity of the problem: the Domino’s leak, where data of 18 million customers was released; the Covid-19 information breach, where the test data of 81 crore Indians was made public; the Air India data breach, in which unauthorised access to user credentials led to full access to the payment methods and GST invoices of the company’s users; and the BigBasket leak, where a hacker released 20 million user records in the public domain.

There is, in India, no standalone enactment for data protection, and the courts have largely depended on the Information Technology Act (2000) and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules (2011) to deal with data privacy issues. A comprehensive legal framework and its implementation are yet to be structured through the Digital Personal Data Protection Act (DPDPA), passed on August 11, 2023, and the proposed Digital India Act, intended to replace the Information Technology Act, 2000, and enhance protection of an individual’s personal data.

With the breakneck development of AI, the issue of personal data protection has become even more acute. The rise of AI-enabled social media and its influence on the judiciary demand a more comprehensive legal framework to govern its use and mitigate potential harms. Currently, India has no specific regulations addressing the intersection of social media and judicial proceedings. However, existing laws such as the Contempt of Courts Act, 1971, and the Information Technology Act, 2000, provide some degree of oversight.

The judiciary faces several challenges in navigating the landscape of algorithms and machine-generated content. Judges and legal practitioners must balance the benefits of technology with the need to maintain judicial impartiality. According to recent observations of the Supreme Court, judges should largely avoid social media accounts and refrain from expressing opinions on judgments online, as the Court views the judicial profession as requiring a “hermit-like” lifestyle of dedicated focus on their work, meaning judges should not be active on platforms like Facebook.

The Supreme Court case of Karmanya Singh Sareen & Anr vs Union of India & Ors reflects the legal battle over WhatsApp’s data policy and the protection of user data and privacy rights. Facebook had approached the Supreme Court seeking the transfer of a set of cases regarding Aadhaar-social media linking, asking it to tag and transfer to itself four cases pending before three High Courts: two before the Madras High Court, one before the Bombay High Court and one before the Madhya Pradesh High Court. Facebook contended that all four cases involved substantially similar reliefs and questions of law, and turned on the interpretation of central legislation such as the Aadhaar Act, 2016. It was concerned that the High Courts might deliver conflicting judgments, hindering its ability to operate a uniform platform across India and leaving the fundamental right to privacy available unequally across the Union. The cases were duly transferred to the Supreme Court, but the core issue remains: the right to privacy in the age of Big Data.

The right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 of the Constitution. On August 24, 2017, the Supreme Court of India delivered its landmark privacy verdict. In Justice KS Puttaswamy (Retd) and Anr vs Union of India and Ors, the Supreme Court held that the right to privacy is a fundamental right protected under Article 21 and Part III of the Indian Constitution. The Puttaswamy verdict stated that the right to privacy attaches to the person, covering all information about that person and the choices that they make.

In cases before the Indian courts, the defence taken by tech giants has always boiled down to one technological phrase: end-to-end encryption. This is ostensibly meant to ensure privacy, but in an age where your smartphone is a virtual diary of life, relationships, preferences and even political ideology, it is basically legal subterfuge. In 2018, the Home Ministry authorised ten central agencies to intercept, monitor and decrypt any information generated by or stored on any computer on the grounds of “national security”. This clearly runs counter to the verdict in the Puttaswamy case.

In today’s AI-enhanced digital age, the right to privacy is jeopardised by the growing dependence of individuals on the internet, computers, tablets and the smartphone. Right now, millions of users are turning to AI from various sources—Amazon, Google, Meta, Apple, Microsoft, OpenAI and now the Chinese upstart, DeepSeek—to access information, create images and write essays. This heightens the risk arising from the increased interaction of individuals with technology, which tech companies exploit to gather, archive and mine information for the purpose of profiling individuals. The utilisation of “electronic tracks” by social networking platforms to gather data from users for personalisation or targeted advertising poses an enormous threat to individual privacy.

Concerns regarding user data were dramatically substantiated by the Cambridge Analytica scandal of 2018, in which the data and records of millions of people were harvested from Facebook, infringing the “right to privacy” of the users.

The Supreme Court is currently reviewing, in Karmanya Singh Sareen & Anr vs Union of India & Ors, the privacy policies that WhatsApp notified in 2016 and 2021, after its takeover by Facebook. The lawsuit seeks to uphold Indian residents’ data and “right to privacy”. Under WhatsApp’s 2016 privacy policy, any consumer data shared with the app would also be transmitted to Facebook, the parent organisation. The amended 2021 policy stipulated that consumers could not opt out of sharing data with Facebook if they intended to keep using the app; otherwise, their profile would be terminated. The present situation constitutes an imminent risk to an individual’s “right to privacy”.

In fact, former Chief Justice DY Chandrachud had raised concerns about the vast amount of personal data being collected by companies and the potential for misuse of this information, and called for “robust regulations to govern data collection and usage, ensuring individuals have control over their personal information”. Later, speaking on the topic “Upholding civil liberties in the digital age: Privacy, surveillance and free speech”, he stated: “Disinformation has the power of impairing democratic discourse forever, pushing a marketplace of free ideas to the point of collapse under the immense weight of fake stories. As the world moves online, our battles to uphold civil liberty must also follow suit.”

The Digital Personal Data Protection Act, 2023, outlines a structure for handling digital personal data that preserves citizens’ right to privacy while recognising that the processing of such data is crucial for legitimate objectives, which can be variously interpreted by governments of the day. The Act entrusts data principals with control over their personal data, forbidding its storage and use without express authorisation, except in some admissible situations where an innovative concept known as “deemed consent” applies. It also gives individuals the right to seek redress for grievances and the authority to decide who will obtain their data. In addition, the Act incorporates the “right to erasure”, enabling users to ask for deletion of their personal information and giving them greater control over their online identity, while outlining the obligations of entities identified as “data fiduciaries”, which are in charge of gathering, archiving and utilising digital personal data. However, the Act does not explicitly incorporate the “right to be forgotten” as a separate provision, despite its acknowledgment as a crucial element within Article 21 under the ambit of the “right to privacy”.

A recent UN report has warned that people’s right to privacy is coming under ever greater pressure from the use of modern networked digital technologies, whose features make them formidable tools for surveillance, control and oppression. This makes it all the more essential that these technologies are reined in by effective regulation based on international human rights law and standards. The report, the latest on privacy in the digital age by the UN Human Rights Office, looks at three key areas: the abuse of intrusive hacking tools (“spyware”) by State authorities; the key role of robust encryption methods in protecting human rights online; and the impacts of widespread digital monitoring of public spaces, both offline and online. It details how surveillance tools such as the “Pegasus” software can turn most smartphones into “24-hour surveillance devices”, allowing the “intruder” access not only to everything on our mobiles, but also weaponising them to spy on our lives.

Established notions of information privacy are based on the idea that humans are the primary handlers of information; they were not designed to contend with the computational, “God-like” ability of AI, which does not conform to traditional ideas of data collection and handling. The way we currently think about concepts such as informed consent, notice, and what it means to access or control personal information has never before been so fundamentally challenged as it is by AI. The binary notion of personal information was already being challenged by mainstream technologies, but AI blurs the distinction to the point where what is and is not “personal information” is becoming considerably more difficult to define. The increased emergence of AI is likely to lead to an environment in which all information generated by or related to an individual is identifiable.

In 2006, British mathematician Clive Humby coined the famous phrase: “data is the new oil”. AI has proved that beyond any conceivable measure. The legal dilemma that overrides everything else is: who does that data belong to? Is it the users, or the companies mining it, and to what purpose? It is the existential question of our times. As Prime Minister Narendra Modi said at the AI Action Summit in Paris: “AI is writing the code for humanity in this century.”

—The writer is former Senior Managing Editor, India Legal, and the author of Artificial Intelligence: The Coming Revolution
