Deepfakes & Consent: Who Owns Your Face Online?

We’re living in a time when AI can copy your face, mimic your voice, and even instantly replicate your expressions. And that’s brought up a pretty unsettling question:

Who owns your face in the digital world?

Not long ago, identity theft meant someone stole your credit card or forged your signature. Now? It’s become way more personal—and way more invasive.


What Exactly Are Deepfakes?

Deepfakes are videos, images, or audio made using AI, where someone’s face or voice is swapped in so realistically that it’s hard to tell what’s real. What started as harmless filters or jokes online has become more serious. 

These tools are powerful—and they’re out there for anyone to use.

And the risks? They’re not just theoretical.

Celebrities have been inserted into explicit videos without their consent.

Politicians have appeared in fake clips, supposedly saying things they never said.

Regular people are being targeted, too, with phoney revenge content and online harassment.


Stuck in a Legal Grey Zone

India’s laws are scrambling to keep up with this new tech. Right now, we’re mostly relying on:

Section 66E of the IT Act (for privacy violations)

Section 67 of the IT Act (for obscene digital content)

Defamation provisions under the IPC

Data rules under the DPDP Act, 2023

But there’s a catch: these laws weren’t built with deepfakes or digital identities in mind.

So, key questions remain unanswered: What constitutes consent in a digital context?

Who’s responsible—the person using the AI, the tool itself, or the platform hosting the content? And how do you even prove harm when the content is synthetic?


This Isn’t Just a Celebrity Problem

You don’t need to be famous to get caught up in this.

A young woman from Delhi recently appeared in a fake video shared on WhatsApp. Even after the clip was proven false, the damage to her reputation had already been done. 

So, Where Do We Draw the Line?

Should people need permission to use someone’s face, even in memes or jokes?

Can we treat digital impersonation the same way we treat impersonation in real life?

Do we need new rules, such as watermarking or labelling AI-generated content?
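To make “labelling” a little more concrete: one low-tech approach is to embed a machine-readable provenance note inside the image file itself, which platforms and viewers could then check. The sketch below is a minimal illustration in Python using the Pillow library; the “AI-Generated” key, the file names, and the tool name are assumptions made up for this example, not part of any official standard (real-world efforts such as C2PA content credentials use signed metadata and are far more robust).

```python
# A minimal sketch: stamping an image with a plain-text "this is AI-generated"
# note, and reading that note back. The "AI-Generated" key is a made-up
# convention for illustration only, not an established labelling standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    """Save a copy of the image with a text chunk noting it was AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AI-Generated", f"true; generator={tool_name}")
    img.save(dst_path, pnginfo=meta)  # pnginfo is applied when saving as PNG


def read_ai_label(path: str):
    """Return the provenance note if the image carries one, else None."""
    return Image.open(path).info.get("AI-Generated")


# Hypothetical usage: "face.png" and the tool name are placeholders.
label_as_ai_generated("face.png", "face_labelled.png", "example-deepfake-tool")
print(read_ai_label("face_labelled.png"))  # -> "true; generator=example-deepfake-tool"
```

Of course, a label like this can be stripped as easily as it is added, which is exactly why the question above is about rules rather than just tools: provenance only helps if generators are required to attach it and platforms are required to surface it.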


Where Do We Go From Here?

Stopping deepfakes isn’t just about better tech. It’s about laws, ethics, and human rights.


What we need:

✅ Stronger laws that clearly say who’s accountable

✅ A proper consent system that sees your face, voice, and expressions as part of your data

✅ Faster ways to get justice when you’re targeted

✅ More public awareness to help spot and report deepfakes


One Last Thing


In real life, no one can wear your face without your say-so.

Why should it be any different online?

As AI continues to become more sophisticated, we must also become more informed about privacy, identity, and consent. In the age of deepfakes, protecting your face might be the most personal battle yet.


By Dr. Payal Arya Sah

Published: 19 April 2025