Because she is a public figure, a deepfake video of Telugu actress Rashmika Mandanna has sparked widespread conversation about the use of AI to create fake videos of real people. A “damaging form of misinformation”, said information technology minister Rajeev Chandrasekhar, reminding social media platforms of their legal obligations under the IT rules.

“Strong case for legal [sic],” tweeted Amitabh Bachchan. And on social media platforms, there was a largely welcome outpouring of support for Mandanna over the violation of her bodily integrity.

An article by Boom Live points to a rash of sexualised videos on X (formerly Twitter) that steal the faces and identities of scores of actresses.

Morphing faces onto naked bodies is not new. The difference between then and now is the technology—faster, easier, cheaper and nearly impossible to tell fake from real. “It’s insane,” says journalist Adrija Bose of Boom Live. “All it takes is one photograph.”

Anyone can be a target—the girlfriend who snubbed you or the school maths teacher who failed you. The BBC recently reported on loan sharks sending out “nude” photographs of a woman to her contact list.

Women and girls are already grappling with a gender digital divide in which men are twice as likely as women to have used the internet. Now they must also contend with the fact that simply uploading a selfie makes them more vulnerable to sexual abuse. In patriarchal societies, this can lead to families further controlling the movements of girls and women in the name of safety.

More than 90% of malicious deepfake videos are pornographic, but that is not their only misuse. The potential to disseminate fake news in a world already buffeted by misinformation is frightening. That October 31 video of model Bella Hadid, whose father is Palestinian, expressing support for Israel? Fake. It went viral on X with 20 million views.

Or consider that 2024 is a high-stakes election year in two major democracies. We don’t know yet what role AI could play in spreading disinformation about political opponents and parties, compromising election integrity and further driving divisions. Seeing is no longer believing. But does every voter know that?

“The challenge posed by deepfake is broader than the face of a celebrity being morphed on another body,” says advocate Apar Gupta. “It has the potential to gaslight society, making us believe things that are simply not true.”

To ignore AI’s potential for good is to ignore half the story: ongoing cutting-edge research in medicine using AI for inexpensive, fast and accurate diagnosis of disease; communication apps that can level out socio-economic disparities by enabling, for example, non-English speakers to communicate in English; virtually limitless applications in learning, from translating inspiring speeches into local languages to virtual teaching in remote areas.

The law cannot keep up with the pace of technology. The first AI safety summit, which concluded on November 2, was held rather aptly at Bletchley Park, home to the legendary codebreakers led by Alan Turing, and hosted representatives from 28 governments, including India. The concluding statement speaks of the protection of human rights, transparency, accountability and regulation.

But how to regulate? No one, not even Joe Biden’s 200-page document, has the answer. You can put the onus on social media platforms; India gives them 36 hours to take down content. With a viral video, that’s a lifetime. How do you undo what you’ve already seen? And how do you draw the line between censorship and government orders to take down content?

Perhaps we need what Gupta calls a “whole of society approach”: one that enhances women’s access to the internet, educates men about gender issues, funds fact checkers, trains police and judges to understand the harms and respond intelligently, and sets up a multi-stakeholder expert body that apprises the government and makes recommendations.

Above all, it’s one where all of us, individuals adrift on the world wide web, look out for each other, simply refusing to share and forward what we know to be fake.

Namita Bhandare writes on gender. The views expressed are personal

Namita Bhandare writes on gender and other social issues and has 25 years of experience in journalism. She has edited books and features in a documentary on sexual violence. She tweets as @namitabhandare

Dealing with deepfakes: Regulation & education

10.11.2023


© hindustantimes
