Decoding the Threat of Deepfakes
by Gurjot Singh Kaler
Navigating the Era of Synthetic Realities
Chandigarh: Can you really believe what you see or hear in today's digital age? The answer is not easy, given that the term 'deepfakes' has become quite common these days. Stories abound across the globe of people's identities being generated or replicated through the use of Artificial Intelligence (AI).
Although technology can have both positive and negative consequences, the concept of deepfakes has acquired a negative connotation in recent times. Movie actresses such as Rashmika Mandanna, Kajol, Katrina Kaif, Priyanka Chopra and Alia Bhatt have expressed serious concern about deepfakes, having been at the receiving end of the misuse of deepfake technology, through which their fake videos have been generated, circulated and made viral by miscreants for nefarious ends. Recently, deepfake videos of prominent politicians like Barack Obama and Donald Trump have also gained millions of views, blurring the line between fiction and reality.
India's Prime Minister, Shri Narendra Modi, has also recently cautioned against the ill effects of Artificial Intelligence (AI), pointing to deepfakes and the urgent necessity of tackling this pressing issue.
Lately, a deepfake video of businessman Ratan Tata was misused by criminals to deceive and defraud the public through fake investment plans and shady betting apps. These are highly popular individuals, and when they flag an issue, it automatically captures the attention of the world at large. But what about the tens of thousands of victims of deepfakes whose voices generally remain unheard and unattended to by law enforcement agencies? The reality is that the privacy, security and online safety of millions of internet users stand at grave risk due to the emergence of deepfakes, which warrants our immediate attention to stem the rot before it is too late.
It is imperative to understand what exactly is meant by the concept of deepfakes and how to spot them. A deepfake is a product of artificial intelligence (AI) capable of presenting persuasive but deceptive images, sounds and videos of events that never actually happened. Deepfake refers to multimedia content (image, video or audio) in which a person's face, voice or body is modified to appear as a different person.
Deepfake content is also known as synthetic media, created via artificial intelligence using deep learning algorithms. Initially it was used for comedic content, but it later began to be misused for maligning the social image and reputation of popular politicians, movie stars and others. The concept of the "deepfake" gained public attention in 2017, when a Reddit user known as "deepfakes" shared edited explicit videos on the platform. This individual employed Google's open-source deep learning technology to swap the faces of celebrities with those of performers in adult content.
Deepfakes are synthetic media made through the artificial production, modification or manipulation of images, videos or audio with the help of AI. Earlier, deepfakes were used for entertainment purposes, but lately they have gained notoriety for being misused for phishing attacks, financial frauds, scams, automated disinformation attacks, election manipulation, social engineering, identity theft, revenge pornography, sexual exploitation, reputational damage, extortion, harassment, intimidation, and the stoking of mass hatred and religious tensions.
Deepfakes can be used to spread misinformation or propaganda by distorting reality and misrepresenting facts, and thus they pose a grave threat to the individual and to society in the 21st century.
How a deepfake video is created -
The creation of a deepfake video entails a series of steps. Firstly, an AI algorithm, referred to as an encoder, processes thousands of facial shots of the two subjects.
The encoder identifies common features between the faces, reducing and compressing the images in the process. Following this, a second AI algorithm, known as a decoder, is trained to reconstruct the faces from the compressed images.
Given the distinctiveness of the faces, separate decoders are trained for each person. To execute the face swap, encoded images are fed into the "incorrect" decoder. For example, a compressed image of person A's face is input into the decoder trained for person B.
The decoder then recreates person B's face, incorporating the expressions and orientation of person A's face. This meticulous process must be applied to every frame to achieve a convincingly altered video. Autoencoders, a variant of neural network, are the most prevalent deep learning architecture employed in the creation of deepfakes.
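The shared-encoder, two-decoder pipeline described above can be sketched in a few lines of Python. This is only a conceptual illustration of the data flow, not a working face swap: the "images" are flat numpy vectors, the encoder and decoders are plain random matrices standing in for trained networks, and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

IMG_DIM = 64 * 64      # a flattened 64x64 grayscale "face" frame
LATENT_DIM = 128       # size of the compressed representation

# One shared encoder learns features common to both faces; each
# person gets a separate decoder. The random weights below are
# stand-ins for what training would normally produce.
encoder   = 0.01 * rng.normal(size=(LATENT_DIM, IMG_DIM))
decoder_a = 0.01 * rng.normal(size=(IMG_DIM, LATENT_DIM))
decoder_b = 0.01 * rng.normal(size=(IMG_DIM, LATENT_DIM))

def encode(face):
    return encoder @ face              # compress to latent features

def decode(latent, decoder):
    return decoder @ latent            # reconstruct a full image

face_a = rng.normal(size=IMG_DIM)      # one frame of person A

# The swap itself: compress person A's frame, then reconstruct it
# with person B's decoder, so B's identity carries A's expression.
latent  = encode(face_a)
swapped = decode(latent, decoder_b)
```

In a real system this swap is repeated for every frame of the video, which is why the article notes that the process is so computationally demanding.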
An alternative technique for developing deepfakes involves a Generative Adversarial Network (GAN). This methodology sets two artificial intelligence algorithms in opposition. The first algorithm, designated the generator, takes random noise as input and transforms it into an image. This generated image is then mixed into a series of authentic images, such as those featuring celebrities, which are presented to the second algorithm, known as the discriminator.
Initially, the synthetic images may diverge significantly from actual faces. However, through numerous iterations of this process, with performance feedback shaping their progress, both the discriminator and generator refine their capabilities.
To summarise, deepfake content emerges through a dynamic interplay between two algorithms: the generator and the discriminator. The generator crafts fabricated digital content and challenges the discriminator to distinguish real elements from artificial ones. Each time the discriminator correctly identifies content as real or fake, that feedback is looped back to the generator, driving improvements in subsequent deepfakes.
Together, these algorithms form a generative adversarial network (GAN), utilizing a set of algorithms for self-training. This enables the GAN to adeptly recognize patterns, ultimately honing its ability to capture the authentic characteristics crucial for producing convincing fake images.
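The adversarial loop described above can be demonstrated with a toy numpy GAN. Instead of images, the "real data" here is just numbers clustered around 4.0, the generator is a two-parameter linear function, and the discriminator is a logistic score; all names, hyperparameters and distributions are illustrative assumptions, not part of any production deepfake system. What the toy shows is the feedback loop itself: the discriminator learns to separate real from fake, and its feedback pulls the generator's output toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples clustered around 4.0 (a stand-in for real images).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

w = np.array([1.0, 0.0])   # generator: g(z) = w[0]*z + w[1]
v = np.array([0.0, 0.0])   # discriminator: D(x) = sigmoid(v[0]*x + v[1])

lr, batch = 0.02, 64
for step in range(3000):
    z = rng.normal(size=batch)
    fake = w[0] * z + w[1]
    real = real_batch(batch)

    # Discriminator update: push D(real) up and D(fake) down.
    d_real = sigmoid(v[0] * real + v[1])
    d_fake = sigmoid(v[0] * fake + v[1])
    grad_v0 = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_v1 = np.mean(-(1 - d_real) + d_fake)
    v -= lr * np.array([grad_v0, grad_v1])

    # Generator update: push D(fake) up (non-saturating GAN loss),
    # i.e. learn to fool the freshly updated discriminator.
    d_fake = sigmoid(v[0] * fake + v[1])
    dloss_dfake = -(1 - d_fake) * v[0]
    w -= lr * np.array([np.mean(dloss_dfake * z), np.mean(dloss_dfake)])

# After training, generated samples should centre near the real mean (4.0).
fake_mean = float(np.mean(w[0] * rng.normal(size=1000) + w[1]))
```

The generator starts out producing values around 0, far from the real data; after a few thousand rounds of adversarial feedback its output mean drifts toward 4.0. Real deepfake GANs apply exactly this loop, but with deep convolutional networks over images instead of two scalars.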
Creating a top-tier deepfake presents difficulties on a typical computer. Most are generated using advanced desktop
systems equipped with powerful graphics cards or, even more efficiently, by harnessing the computational capabilities of cloud-based resources.
This shift significantly reduces processing time from days and weeks to mere hours. However, expertise is crucial in this endeavor, especially in fine-tuning completed videos to address concerns like flickering and other visual discrepancies.
In the realm of audio deepfakes, a Generative Adversarial Network (GAN) is harnessed to replicate the distinct nuances of a person's voice. By constructing a model based on vocal patterns, creators can then manipulate the voice to articulate any desired content.
There are several apps that can be used to create deepfakes in seconds, such as Deep Art Effects, Deepswap, Deep Video Portraits, FaceApp, FaceMagic, MyHeritage, Wav2Lip, Wombo and Zao.
How to spot a deepfake -
Although internet users can recognize deepfake videos and distinguish them from real ones through a number of telltale signs, there are millions of unsuspecting individuals who might not easily notice the difference between reality and fiction, and this can create serious problems if deepfakes remain unchecked.
The truth is that deepfake videos have the capacity to ruin people's reputations, destroy the goodwill of businesses, create enormous confusion, damage society and imperil the careers of political leaders and popular celebrities.
With the great advancement of technology and generative artificial intelligence, it has become quite difficult to differentiate a real video from a fake one. Still, if careful attention is paid, there are many telltale signs by which a deepfake video can be spotted. Some tips are:
- Watch the video closely from start to finish. Look for jerky movements, oddities in sound and lighting, strange blinking of eyes or other digital artifacts. Deepfake videos also tend to look odd when magnified or zoomed in.
Deepfakes struggle to accurately represent intricate elements like hair, especially when individual strands are visible at the edges. Detectable flaws may also arise in the rendering of jewelry and teeth, while anomalies like inconsistent lighting and peculiar reflections on the iris can serve as indicators. Careful observation is the key to detect deepfakes.
- Focus closely on the facial expressions and body language of the persons in the video from start to end. If you notice differences in skin tone, or abrupt or irregular changes in a person's facial expressions during a conversation or an act, it may be a sign of a deepfake video. Likewise, minor variations in a person's body posture that are not consistent with their real behaviour can indicate a deepfake.
- Notice lip-sync issues. In most deepfake videos, there will be some mismatch between the audio and the movement of the characters' lips. Try watching the video a few times to determine whether it is an original or a deepfake.
- Always try to trace the original source of an image or video to establish its authenticity. It also helps to verify the source of particular content on various search engines and compare the findings. One should also make a habit of reading news from reliable publishers.
- There are also many online tools that can be used to identify deepfake images and videos, such as Sentinel, WeVerify, Reality Defender, and NewsGuard Misinformation Fingerprints. However, these are not free services and require a subscription fee.
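One intuition behind several of the telltale signs above (lost hair strands, smoothed skin, odd detail when zoomed in) is that resynthesised regions often lack the fine high-frequency detail of a real camera image. The toy numpy sketch below illustrates that idea only: it compares the high-frequency spectral energy of a detail-rich patch with a blurred copy. It is emphatically not how commercial detectors like Sentinel or Reality Defender work, which rely on trained classifiers; the function name, threshold radius and box blur are all illustrative assumptions.

```python
import numpy as np

def high_freq_ratio(img):
    """Share of spectral energy outside the lowest frequencies.

    Smoothed or resynthesised regions lose fine detail, which
    shows up as a lower high-frequency share.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                  # "low frequency" block radius
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float((spectrum.sum() - low) / spectrum.sum())

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))            # detail-rich patch

# A crude 3x3 box blur, standing in for the smoothing that
# generative models often leave around swapped facial regions.
blurred = np.zeros_like(sharp)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += np.roll(np.roll(sharp, dy, axis=0), dx, axis=1) / 9.0
```

Running `high_freq_ratio` on both patches shows a markedly lower score for the blurred one; real detection systems combine many such learned cues rather than a single statistic.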
Some tips for social media users to prevent their photos and videos from being misused by criminals for making deepfakes are as follows –
- Educate yourself and others around you – The first step towards tackling the threat of deepfakes is to spread awareness of the concept: how deepfakes are created, the dangers of misinformation, and the challenges they can pose for individuals and society. Be sure to rely on quality news sources and authentic information while researching deepfakes.
- Stay vigilant about what you share online – It is important to exercise due diligence and caution before sharing any of your photos or videos publicly. Deepfake technology mines raw public data that is freely available. Therefore, it is time to be sensible: keep your social media settings private and share publicly only those photos and videos that are absolutely necessary. Always remember that our privacy, security and safety lie foremost in our own hands.
- Use strong passwords – It is very important to use strong passwords for all your online accounts, and to make a habit of changing your passwords from time to time. Strong passwords can prevent unauthorized access to your images and videos, which could otherwise be misused for creating deepfakes. Also, make it a habit to use two-factor authentication for all your social media and online accounts so that they remain safe from any breach of security.
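For readers comfortable with a little scripting, a strong random password is easy to generate with Python's standard library. This is a minimal sketch: the symbol set and 16-character length are illustrative choices, and `secrets` (rather than `random`) is used because it draws from a cryptographically secure source.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    # Redraw until at least one lowercase, uppercase and digit appear,
    # so the password satisfies common site requirements.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

password = generate_password()
```

A password manager achieves the same result without scripting; the point is simply that passwords should be long, random and unique per account.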
- Use digital fingerprints or watermarks – Applying digital fingerprints or watermarks to your images and videos not only deters their theft but also acts as a good deterrent against deepfakes, as it is very difficult to make convincing synthetic media from watermarked images or videos.
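As a simple illustration of the watermarking idea, the numpy sketch below hides a short bit pattern (say, an owner ID) in the least-significant bit of an image's pixels, changing each pixel by at most one intensity level. This "LSB" scheme is a classroom toy, not a production watermark: it does not survive recompression or resizing, and real systems (including the provenance labels discussed later in this article) use far more robust techniques.

```python
import numpy as np

def embed_watermark(img, bits):
    """Hide watermark bits in the least-significant bit of each pixel."""
    out = img.copy()
    flat = out.ravel()                     # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b     # overwrite the lowest bit
    return out

def extract_watermark(img, n):
    """Read back the first n hidden bits."""
    return [int(p) & 1 for p in img.ravel()[:n]]

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]            # e.g. a short owner ID
stamped = embed_watermark(image, mark)
```

Because only the lowest bit of a handful of pixels changes, the stamped image is visually indistinguishable from the original, yet the mark can be read back to support a claim of ownership.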
- Manage your metadata – Always ensure that the metadata in your images and videos, containing essential information such as location and date of creation, is accurate and up to date. These details may prove quite useful in laying claim to ownership of any content.
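Editing embedded EXIF metadata usually requires third-party tools, but a simple ownership record can be kept with Python's standard library alone. The sketch below is a hypothetical "sidecar" approach: it stores a SHA-256 fingerprint of the file's bytes along with authorship details in a JSON record kept next to the file. The function name, field names and the sample author are all illustrative, but the hash genuinely identifies the exact file and can later support a claim that a circulating copy was altered.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(content: bytes, author: str, location: str) -> dict:
    """Build a provenance record for a photo or video file's raw bytes."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # unique to these bytes
        "author": author,
        "location": location,
        "created": datetime.now(timezone.utc).isoformat(),
    }

photo = b"...raw image bytes..."          # placeholder file content
record = fingerprint(photo, "Jane Doe", "Chandigarh, IN")
sidecar = json.dumps(record, indent=2)    # store alongside the original file
```

Any modification to the file, however small, produces a completely different SHA-256 digest, so the stored record and the original bytes together establish what the authentic version looked like and when it was registered.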
Indian Law to tackle deepfakes -
Although Indian law does not explicitly acknowledge deepfakes, the matter is indirectly dealt with through Section 66E of the IT Act. This section makes it unlawful to capture, publish or transmit an individual's image in the media without their consent.
Violations can lead to imprisonment of up to three years, a fine of up to two lakh rupees, or both. Additionally, individuals can be prosecuted under Sections 67, 67A and 67B of the IT Act for publishing or transmitting obscene deepfakes or those containing sexually explicit acts.
Both the Information Technology Act and Information
Technology Rules provide explicit guidelines that hold social media intermediaries responsible for promptly removing deepfake videos or photos. Failure to comply may result in penalties, including imprisonment for up to three years or a fine of Rs 1 lakh.
Under the Information Technology Rules, 2021, Rule 3(1)(b)(vii) obliges social media intermediaries to prevent users on their platform from hosting content that spreads disinformation or impersonates another individual. Rule 3(2)(b) requires the removal of artificially manipulated impersonation content within 24 hours of receiving a complaint from the victim. For misinformation and other unlawful content flagged by a government or court order, platforms must act within 36 hours. Failure to adhere to these timelines can result in legal action by the aggrieved party.
Failure to comply with this requirement would invoke Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC), and could cost the organisation the safe harbor protection available under Section 79(1) of the Information Technology Act, 2000. Safe harbor immunity protects platforms from being held liable for third-party content they host, provided they comply with Indian law.
Also, under Section 66D of the IT Act, those responsible for creating such fake videos with impersonation technology (like deepfakes) using computers and related devices are punishable with imprisonment of up to three years and a fine of up to Rs 1 lakh.
Future goals -
It is imperative to tackle the threat of deepfake/generative artificial intelligence/synthetic media in a coordinated and multi-faceted manner. The strategy to neutralize the threat of deepfakes should revolve around four key pillars: detection of deepfakes, prevention of publishing and viral sharing of deepfake and deep misinformation content, strengthening reporting mechanism for such content, and spreading of awareness through joint efforts by the government, experts, researchers, media and industry entities.
India’s Ministry of Electronics and Information Technology (MeitY) has rightly identified ‘detection, prevention, reporting and awareness’ as the four-pronged approach to curbing deepfakes. Any regulation for deepfakes will have to necessarily ensure that it (i) discourages dissemination, (ii) incentivises early reporting, (iii) penalises delay in addressing complaints and taking down deepfakes by online platforms and (iv) restricts avenues for creation of deepfakes.
The government regulations are not enough unless the social media platforms and internet companies develop robust frameworks and cross-platform deepfake detection tools to verify the misinformation content and curb/remove the deepfake videos. Most of the deepfake detection services are quite expensive and cannot be afforded by the general public.
Therefore, it would be very helpful if social media platforms displayed a warning on synthetic media content, such as: 'This image/video is a deepfake; viewers are advised to exercise due caution in viewing or forwarding it.'
Recently, Google introduced a new watermarking technology and metadata labelling solution to help viewers identify and distinguish original images from those created by generative artificial intelligence. Google is actively employing machine learning alongside human reviewers to swiftly detect and remove content that violates its guidelines, enhancing the accuracy of its content moderation systems. The platform is also developing a 'privacy request process' enabling users to take down content that uses AI to imitate an individual's face or voice.
It would be highly beneficial if the government or social media platforms made free online tools available to internet users to detect deepfakes and curb the spread of fake news and misinformation campaigns. It should also be mandatory for deepfake video creators and synthetic media apps to include watermarks for quick and easy identification.
It is very important for law enforcement agencies to train their police personnel and investigation teams in the field of deepfakes and equip them with the technology necessary to distinguish original videos from fake ones. There should be regular mass learning sessions and media literacy campaigns on this emerging threat, and the police should proactively organize seminars, educational workshops and conferences to educate the general public about it.
There should also be counselling centers in police stations to help and counsel the victims of deepfake videos, especially women and children, who are often the targets of artificially generated sexual content intended to defame and harass them.
At the same time, victims should be encouraged to come out of their inhibitions or fears and lodge police complaints if they find themselves the target of deepfake images or videos. The rising threat of deepfake content can be tackled only when all stakeholders, including the government, community members, social media and online companies, law enforcement agencies, technological experts and researchers, join hands and develop a holistic framework to deal with it.
As deepfakes blur the lines between fact and fiction, our commitment to truth and authenticity becomes the beacon light guiding us through the dark and dangerous shadows of digital deception. Let due vigilance and caution be our protective shield against the silent seduction of synthetic realities.
December 26, 2023
Gurjot Singh Kaler, Serving Punjab Police Officer
kalerforall@yahoo.com
Disclaimer : The opinions expressed within this article are the personal opinions of the writer/author. The facts and opinions appearing in the article do not reflect the views of Babushahi.com or Tirchhi Nazar Media. Babushahi.com or Tirchhi Nazar Media does not assume any responsibility or liability for the same.