By Sujit Bhar
In late July, Denmark announced a radical proposal: a new law that recognises each individual’s body, facial features, and voice as their own intellectual property in the age of deepfakes. If passed, this legislation would make Denmark the first country in Europe to give everyone—not just celebrities—the right to their digital identity.
On the surface, this may sound like a symbolic gesture against a distant technological problem. But the rapid proliferation of AI-generated deepfakes—synthetic media that realistically mimic people’s likeness—has already begun to blur the line between reality and fabrication. From political misinformation to sexual exploitation to fraudulent scams, deepfakes represent one of the most serious threats to personal identity, reputation, and even democratic institutions.
Denmark’s initiative is a recognition that existing laws are inadequate to deal with the new wave of technological misuse. By giving individuals rights such as the immediate removal of deepfaked content and compensation for damages without needing to prove malice, and by holding platforms strictly liable under the EU’s Digital Services Act framework, it aims to reset the legal playing field.
Meanwhile, in India, the courts are only beginning to grapple with this reality—most notably in high-profile cases involving celebrities such as Aishwarya Rai Bachchan, Abhishek Bachchan and filmmaker Karan Johar, all of whom have recently sought protection of their personality rights before the Delhi High Court. While their fight may seem limited to the sphere of ambush marketing and unauthorised commercial exploitation, the underlying issue is the same: technology now enables anyone to steal your face, voice, or identity with the click of a button.
The juxtaposition of Denmark’s forward-looking approach and India’s reactive, case-by-case handling raises urgent questions. Are our privacy laws already out of date? Can legislation ever be written in a way that survives the relentless speed of technological change? And in countries like India, can a notoriously slow judicial system ever hope to keep up with a technology that evolves by the month?
ARE EXISTING LAWS ALREADY PASSÉ?
The short answer is yes. Most privacy and intellectual property frameworks worldwide were drafted in the late 20th or early 21st century, at a time when the internet was a novelty, not an ecosystem that mediated every aspect of human life.
Take India as an example. Until 2023, India did not even have a comprehensive data protection law. The Digital Personal Data Protection Act (DPDP), passed in August 2023, provides a basic framework for how personal data can be collected, stored, and processed. But it is almost entirely silent on emergent threats like deepfakes or non-consensual AI-generated content.
Similarly, older concepts such as “publicity rights” or “right to privacy” were originally developed to protect celebrities from unauthorised commercial endorsements, paparazzi intrusion, or defamation. They were never designed to address situations where your face can be inserted into a fake video within minutes, distributed across the world, and monetised on dozens of platforms.
Even in the European Union, whose General Data Protection Regulation (GDPR) is considered the gold standard of privacy law, deepfakes present a fresh challenge. While GDPR gives individuals the “right to be forgotten” and control over personal data, it does not explicitly cover synthetic identity theft at the level of voice or facial replication.
Technology has outpaced law, and it continues to do so at breakneck speed. What was once science fiction—creating a video of someone saying or doing things they never did—is now an everyday reality, accessible to anyone with an internet connection and easily available software or apps. In this sense, privacy laws are not just out of date; they are functionally obsolete against the onslaught of generative AI.
CAN LEGISLATION BE “AGE-PROOF”?
The Danish proposal is an attempt to anticipate future threats by recognising identity itself as intellectual property. But even this forward-thinking move faces a fundamental dilemma: how do you write laws that can remain relevant in the face of unpredictable technological innovation?
Laws are inherently reactive. They are written to regulate known problems, based on existing technologies. Legislators can try to anticipate future trends, but the pace of change in AI is so rapid that any law risks becoming outdated within a few years.
For instance, today’s deepfakes rely on video and audio synthesis. Tomorrow, we may see AI tools that replicate not just visual likeness, but entire behavioural patterns—digital “clones” of individuals that can interact autonomously in virtual environments. How would existing laws apply then? Would the right to remove or claim damages still be enforceable if thousands of AI-generated “you” are spread across decentralised platforms?
Another problem is enforcement. Technology is global, but laws are national. Even if Denmark passes its pioneering legislation, what happens when a deepfake generated in another jurisdiction is circulated on platforms hosted in yet another country? Unless there is broad international cooperation, enforcement may remain patchy at best.
This points to the core difficulty: laws that aim to be “age-proof” or “technology-proof” must be principle-based rather than technology-specific. Instead of regulating specific tools (deepfake videos, AI-generated voices), they must enshrine broader rights, such as the universal right to one’s digital identity, consent, and dignity. Denmark seems to be moving in this direction, but whether others will follow remains to be seen.
THE GREAT INDIAN SLOTH
In India, the contrast is stark. The Delhi High Court’s intervention in Aishwarya Rai Bachchan’s case is significant—it shows that judges recognise the dangers of identity theft in the age of AI. Justice Tejas Karia’s observation that unauthorised use of an actor’s name, image, or signature can dilute their goodwill echoes the concerns raised in Denmark.
However, the Indian judicial system suffers from a chronic problem: sloth. Cases drag on for years, sometimes decades. By the time a judgment is delivered, the technological context may have completely changed.
For instance, the hearings in the Bachchan cases are scheduled months apart, with the next one set for January 2026. But AI technology does not wait. In the intervening months, deepfake tools will become more sophisticated, more widespread, and harder to regulate. A legal remedy that comes in 2026 may feel irrelevant to harms already suffered in 2025.
Moreover, Indian courts tend to focus on high-profile cases involving celebrities, leaving ordinary citizens with little recourse. While Denmark’s proposed law extends protection to every individual, Indian jurisprudence around “personality rights” has so far been limited to the famous. For the average person whose likeness is misused in a scam, meme, or pornographic deepfake, the path to justice remains unclear and prohibitively slow.
The result is a widening gap: technology races ahead, courts inch forward, and citizens are left vulnerable. Unless India invests in fast-track mechanisms, specialised tribunals, or AI-aware regulatory bodies, it risks being perpetually behind the curve.
THE USAIN BOLT OF TECH-PACE
Perhaps the most unsettling possibility is that technology might evolve to a point where legal protections are meaningless. Imagine a world where AI can instantly replicate anyone’s likeness, generate convincing fake content, and distribute it through decentralised, censorship-resistant platforms. In such a world, even the strongest laws may be unenforceable.
The implications for business and commerce are profound. Today, brand endorsements, advertising, and influencer economies are built on the authenticity of identity. If anyone’s face or voice can be faked, how do you know if an endorsement is real? If fraudulent digital avatars can sign contracts, appear in meetings, or make financial transactions, what happens to the very idea of trust in commerce?
Some analysts warn that this could lead to an “authenticity crisis,” where nothing can be taken at face value. The collapse of trust could destabilise markets, politics, and even interpersonal relationships. In such a scenario, the law would no longer be a shield; it would be a relic of a bygone era of slower technological change.
At the same time, it is possible that technology itself may offer solutions. Just as blockchain promises secure verification of identity and ownership, AI-detection tools may help distinguish real from fake content. But the arms race between fakers and detectors is ongoing, and the outcome is uncertain.
A POSSIBLE GLOBAL FRAMEWORK
What Denmark is attempting, and what India is only beginning to address, ultimately points to the need for a global framework. Deepfakes are not bound by borders, and neither should the protections against them be. International treaties, much like those governing cybercrime or intellectual property, may be necessary to establish baseline rights around digital identity.
At the very least, countries need to update their laws to reflect the reality of generative AI. India’s DPDP Act could be expanded to explicitly cover likeness misuse, while courts could establish precedents recognising every individual’s identity as a protected right, not just celebrities.
Public awareness will also be critical. As long as people remain unaware of the dangers of deepfakes, demand for legal reform will remain weak. Celebrities may lead the charge, but ordinary citizens must see themselves as stakeholders too.
The cases of Aishwarya, Abhishek and Johar in India and Denmark’s proposed legislation together illuminate the crossroads at which we stand. On one side, the rapid advance of AI threatens to outstrip the law entirely, rendering existing privacy protections obsolete. On the other side, forward-looking reforms offer the possibility of reclaiming control over our identities in the digital age.
The challenges are formidable: writing laws that can withstand technological change, accelerating judicial responses, and building global frameworks. Yet the alternative—a world where business, politics, and personal relationships collapse under the weight of synthetic deception—is too dire to ignore.
Privacy laws are already outdated. The pace of technological development makes “age-proof” legislation nearly impossible. India’s judicial system, unless radically reformed, risks being perpetually behind. And yes, there is a real possibility that unchecked technological growth could hollow out the very foundations of law, commerce, and trust.
In this context, Denmark’s move is more than symbolic. It is a recognition that identity is the new frontier of intellectual property—and that in the age of deepfakes, protecting it is not a luxury, but a necessity for democracy, commerce, and human dignity.