
Did they smile? Experts dissect the growing concern of AI-powered deepfakes

Updated on: 04 June, 2023 08:57 AM IST | Mumbai
Gautam S Mengle | gautam.mengle@mid-day.com

The manipulation of a picture of the protesting women wrestlers in Delhi has demonstrated just how powerful the monster of AI-powered editing has become. Will we figure out how to tame it?


Wrestlers Vinesh and Sangeeta Phogat in the actual picture taken inside the police vehicle after they were detained on May 28 for marching in protest towards Naya Sansad, the new Parliament building. The picture on the right has been morphed using an Artificial Intelligence-powered app called FaceApp

Last week, a picture of the wrestlers protesting against BJP Member of Parliament and Wrestling Federation of India chief Brijbhushan Sharan Singh went viral. Vinesh and Sangeeta Phogat were seated in the front row inside a police vehicle, smiling. Vinesh is a two-time Olympian who has won two Commonwealth Games golds and an Asian Games gold, was crowned Asian champion in 2021, and won a World Championships bronze in 2019. Singh stands accused by 10 women complainants, including a minor, of groping and harassment. Details of the two FIRs emerged last Friday, including the shocking account of one complainant: “While I was lying down on the mat, the accused [Singh] came near me and, to my shock and surprise, leaned in and, in the absence of my coach and without seeking my permission, pulled up my T-shirt, placed his hand on my breast and slid it down my stomach on the pretext of examining/checking my breathing.”


The wrestlers, after over a month of sitting on dharna at Delhi’s Jantar Mantar, had decided to hold a Mahila Samman Mahapanchayat, or women’s assembly, on the same day as the inauguration of the new Parliament building, when they were “detained for violating law and order”. As pictures of the Delhi Police physically stopping the women wrestlers and forcing them into buses began to circulate on social media, the selfie of them smiling emerged. Within minutes, the narrative shifted to how their suffering was hogwash.


Soon enough, multiple users began to demonstrate in detail how the original picture had been tampered with using an Artificial Intelligence-powered app called FaceApp.

“What’s important to note is that the morphing was done in a matter of seconds using an app that is widely available on platforms like Google Play. While the free version lets you edit one face in any given picture, the paid version opens up a host of possibilities. Even more significant is the fact that the smiles are entirely convincing. The app doesn’t just turn a grimace into a smile; it also alters the facial features to make it look as though the person is, in fact, happy to be in the picture,” cyber expert Ritesh Bhatia tells mid-day.

Ritesh Bhatia, Mudit Bansal and Vikas Kundu

Bhatia, like other experts, has over the years seen increasingly effective versions of AI-based apps, with their misuse ranging from bad to dangerous. Last month, a short video circulated on WhatsApp in which two well-known news anchors appeared to be singing a popular Bollywood number. It had been created using an app that ‘dubs’ a song over a picture, turning a static still into a video. Not only were their lips moving in perfect sync with the lyrics, their expressions, too, matched the mood. The male anchor nodded as he sang, “…lekin chup chupke milne mein jeene ka mazaa toh aayega” (roughly, “…but meeting secretly will bring the joy of living”).

While the video was received in the light-hearted vein in which it was created, it displayed how advanced AI manipulation has become. Cyber experts and researchers have been chronicling this change with increasing concern ever since the phenomenon first appeared in 2017 in the form of ‘deepfakes’, a portmanteau of ‘deep learning’ and ‘fake’ that describes convincingly fabricated pictures and videos.

Last week, this picture of an explosion near the Pentagon went viral. It was so convincing that some Indian news channels even ran a news story on it before it was found to be a fake

“The inception of deepfakes can be linked to a Reddit account that emerged in 2017. Under the pseudonym “deepfakes,” a user began circulating AI-generated explicit videos. The visages of celebrities were digitally substituted with those of adult film performers. This Reddit account swiftly garnered notice and ignited curiosity about the underlying technology powering deepfakes. Though the motivations behind the account may have been questionable, its presence proved instrumental in raising public awareness about deepfakes,” says Mudit Bansal, security researcher with CloudSEK.

As was expected, the technology was quickly replicated and used for generating extremely realistic photos. The trend began with celebrities but quickly moved to target common people. Jilted lovers and stalkers used it to create nudes of their love interests to defame them. Before police forces around the world, including in India, could wrap their heads around the concept, the technology moved from pictures to videos. By 2020, seeing your favourite teacher or your toxic boss engaged in a pornographic act was not just a revenge fantasy; it was possible.

Sharing the findings of routine sweeps of porn sites and porn-related internet user behaviour over the years, a senior Cyber Police officer says, “Back in the early 2000s, when the first porn sites surfaced on the internet, celebrities were the favourite category. When porn videos became the norm, users began to prefer watching porn stars dressed or made up as celebrities. Today, the preference has shifted to deepfake videos of celebrities.”

Now, there are dedicated websites that offer convincing deepfake pictures and videos to premium users. On January 30 this year, a US-based live streamer, Atrioc, was exposed buying explicit deepfakes of his female competitors. The discovery was accidental: he shared his screen while live streaming, and viewers saw the deepfake website open in another tab of his browser. One of the targeted women later took to social media and, in tears, spoke about how shattered it leaves the ‘target’.

In the last few years, AI deepfake technology has emerged as the backbone of two of the biggest cybercrimes that India has seen. The first was predatory money-lending apps using deepfakes to blackmail victims into paying more than they borrowed. The second is sextortion, which continues to target scores of Indians: it ranges from using deepnude videos to convince victims to undress on camera, which is then recorded to blackmail them, to using victims’ pictures to develop deepnude videos of them engaged in sexual acts.

Commercial enterprises were quick to wake up to the profits to be made from AI-based photo and video editing, leading to the release of apps that could alter anything from the subject’s hairdo, skin colour and facial features to even their gender. The same technology is now used for posting manipulated content; the wrestlers’ picture is just one example. A week before this, a US-based AI researcher had released a video of actor Tom Cruise that was entirely AI-generated. The Hollywood star was seen talking, smiling and laughing exactly like the real Cruise.

The internet underground, too, has realised the benefits. CloudSEK, for example, came across dark web discussion forums where one user asked how to bypass security features requiring facial recognition, and others replied explaining in detail how to use the deepfake technology for this purpose.

As of today, there are no dedicated laws to govern the use of AI for picture or video manipulation. The existing laws, meanwhile, are not easy to enforce. “Morphing someone’s photo without their consent is punishable under the Indian Penal Code as well as the Information Technology Act. The perpetrators can be prosecuted for defamation, harassment and impersonation. It is also not mandatory that the affected person come forward and give a complaint. Anyone can register an FIR. However, and herein lies the rub, the victim’s testimony will be the strongest at the trial stage. Hence, even if we act suo motu or on someone else’s complaint, without the survivor’s cooperation, we have no case in court,” says a senior officer with the Mumbai Cyber police.

What, then, is the way out? Bhatia says, “Earlier, we used to say, trust but verify. Then the axiom changed to distrust and verify. But with deepfakes, how do you verify, even if you want to? The person you’re seeing in the video looks and acts exactly like the actual person. Thanks to the advent of audio deepfakes, even their voice is similar, provided the perpetrators have enough voice samples to work with. As the usage of deepfakes becomes more common, cybercrimes will rise and detecting them will become harder.”

CloudSEK’s Vikas Kundu adds, “Although tech giants like Microsoft, Facebook and Google are creating tools to detect them, it would be a game of cat and mouse. The deepfake technology itself can be delightful when used fairly. For instance, at the Dali Museum in St Petersburg, Florida, a deepfake of Salvador Dali greets visitors and enhances the museum experience. Sooner or later, legislation will need to catch up.”

A timeline of the development of deepfakes

2017
The term “deepfake” is coined by a Redditor named “deepfakes”, who creates and shares explicit videos featuring celebrity faces swapped onto adult film actors’ bodies

2018
Deepfake technology gains widespread attention after the release of non-consensual explicit deepfake videos featuring celebrity faces, raising concerns about privacy and ethics. 

Reddit and other platforms ban the sharing of deepfake content due to concerns over non-consensual use and harassment 

Researchers introduce Face2Face, a real-time facial reenactment system, showcasing the potential of deep learning for facial manipulation

2019
Deepfake detection challenges are launched to encourage the development of deepfake detection methods

Google releases a large dataset of deepfake videos to support the development of detection methods and spur research in combating deepfakes

The AI Foundation launches Reality Defender, a tool that aims to identify and flag deepfakes in real time, offering a potential solution for detecting manipulated media

2020
California introduces two bills aimed at addressing deepfake technology, focusing on criminalising the creation and distribution of malicious deepfakes without the consent of the individuals depicted

Deepfakes gain significant media coverage, leading to increased public awareness about the technology’s capabilities and potential risks. News outlets feature stories discussing deepfake-related concerns such as privacy, security and trust in media

Several incidents involving deepfake videos depicting celebrities in explicit or compromising situations garner attention. These instances highlight the potential for deepfakes to be used for malicious purposes, such as revenge porn or defamation

2021
Facebook releases the Deepfake Detection Challenge (DFDC) dataset, consisting of deepfake videos with accompanying metadata, to further advance research in deepfake detection

DeepFaceLab, an open-source deepfake creation and manipulation framework, gains wide adoption, expanding accessibility to deepfake technology

2022
The EU announces plans to introduce new regulations to combat deepfakes and safeguard elections and public discourse

Deepfake detection technologies continue to advance, with the introduction of more sophisticated methods leveraging both visual and audio cues for identifying manipulated media

2023
Deepfake technology continues to evolve rapidly, with ongoing research focused on improving detection methods, refining synthesis techniques, and addressing the ethical implications associated with the technology

"Exciting news! Mid-day is now on WhatsApp Channels Subscribe today by clicking the link and stay updated with the latest news!" Click here!

Register for FREE
to continue reading !

This is not a paywall.
However, your registration helps us understand your preferences better and enables us to provide insightful and credible journalism for all our readers.

Mid-Day Web Stories

Mid-Day Web Stories

This website uses cookie or similar technologies, to enhance your browsing experience and provide personalised recommendations. By continuing to use our website, you agree to our Privacy Policy and Cookie Policy. OK