Experts shed light on how cops and experts are fighting fire with fire to stop those targeting the vulnerable in their tracks
Lawyer Aishwarya Kantawala at her office in Fort. Pic/Anurag Ahire
The cyber police and experts are using artificial intelligence (AI) to counter cybercriminals who employ the technology to create deepfake videos, clone voices and tailor phishing emails. According to sources in the Cyber Cells of the Mumbai and Maharashtra Police, officials are employing AI algorithms to stay one step ahead of cybercriminals by analysing large datasets, spotting patterns in fraudulent activity and verifying the authenticity of videos by correctly identifying AI-generated content.
Cyber expert Yasir Shaikh at the Macksofy Technologies office in BKC. Pic/Ashish Raje
Ritesh Bhatia, a leading cyber expert and the founder and director of V4WEB Cybersecurity, said, “AI has now become a tool for cybercriminals in sophisticated cyber frauds, especially those involving voice cloning, phishing emails and deepfakes. Cybercriminals are using AI to create realistic voice clones and videos of well-known personalities, which they then use to deceive people.” He added, “For example, a deepfake video of a business tycoon was recently circulated, falsely claiming that he had invested in a particular trading firm. This led many people to trust the firm and subsequently fall victim to fraud.”
Tool for enforcing law
“AI systems help in analysing large datasets to detect patterns indicative of fraudulent activities, aiding in the identification and prevention of cybercrimes in real time. We are using counter apps to determine if videos are AI-generated, particularly in cases related to deepfakes, identity theft, phishing emails and voice cloning, where crooks use individuals’ voices to extort large amounts from their parents,” said a Cyber Cell officer.
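The pattern detection the officer describes can be illustrated with a minimal sketch. The data, threshold and z-score rule below are illustrative assumptions, not the police systems’ actual method; real fraud-detection pipelines use far richer features and trained models.

```python
# Minimal sketch of statistical anomaly detection on transaction amounts:
# flag values whose z-score (distance from the mean, in standard deviations)
# exceeds a threshold. Real systems learn patterns over many features.
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=2.5):
    """Return indices of amounts whose z-score exceeds the threshold."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > z_threshold]

# Eight ordinary transactions and one suspicious spike (index 6).
history = [120, 95, 110, 105, 98, 102, 9000, 115, 101]
print(flag_anomalies(history))  # -> [6]
```

The threshold of 2.5 standard deviations is an arbitrary choice for the sketch; production systems tune such cut-offs against labelled fraud data.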
“Using these tools enables the identification and analysis of patterns, detection of anomalies, and prevention of cyberattacks before they inflict substantial harm. By harnessing AI, law enforcement agencies can bolster their capacity to trace and apprehend cybercriminals, thereby improving digital safety for all. It is crucial for law enforcement agencies to integrate AI into their investigative protocols, particularly in combating cyber fraud. Cybercriminals utilise AI extensively, necessitating a proactive response from agencies to effectively combat these threats,” Bhatia added.
Ritesh Bhatia, founder and director, V4WEB Cybersecurity
Explaining further about the use of AI in cyber frauds, Bhatia stated, “AI algorithms can automate the creation of highly personalised phishing emails, making them more convincing. AI tools can also rapidly analyse vast amounts of data to identify vulnerabilities in security systems, enabling cybercriminals to exploit these weaknesses before they are patched.”
According to Bhatia, AI-driven bots can execute large-scale fraud activities, such as credential stuffing attacks, where stolen usernames and passwords are used to gain unauthorised access to multiple accounts. “The ability of AI to learn and adapt makes it a powerful tool in the hands of cybercriminals, who can use it to continuously refine their tactics and evade detection,” he said.
Forensic applications
According to the experts, AI-powered forensic tools help investigators reconstruct digital evidence and identify perpetrators. Additionally, natural language processing (NLP) can be used to monitor and analyse communications for suspicious activities. “By automating routine tasks, AI allows law enforcement agencies to allocate resources more efficiently, focusing on complex cases that require human intervention,” said Yasir Shaikh, founder of Macksofy Technologies.
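The NLP-based monitoring Shaikh mentions can be sketched in miniature. The keyword list and scoring rule below are invented for illustration; actual monitoring tools use trained language models rather than fixed term lists.

```python
# Minimal sketch of NLP-style triage: score messages by hits on suspicious
# terms so analysts can review the riskiest communications first.
SUSPICIOUS = {"password", "urgent", "wire transfer", "gift card", "verify account"}

def risk_score(message):
    """Count how many suspicious terms appear in a message (case-insensitive)."""
    text = message.lower()
    return sum(term in text for term in SUSPICIOUS)

inbox = [
    "Lunch at 1pm?",
    "URGENT: verify account and share your password now",
    "Please complete the wire transfer today",
]
# Sort the inbox so the highest-risk message comes first.
print(sorted(inbox, key=risk_score, reverse=True)[0])
```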
Shaikh said, “In recent years, image fraud has become common in social media applications and the online community. In particular, applying deep learning (DL) techniques to identify particular visual characteristics and overlay them on a different image or video has gained popularity on various platforms, including Facebook, Instagram, Snapchat and Reddit. This idea is often referred to as ‘deepfake’, where ‘deep’ denotes the use of deep learning neural networks (DLNN) and ‘fake’ denotes that the output is deceptive with respect to the original input.”
According to Shaikh, deepfake detection solutions ordinarily employ deep learning networks to recognise deepfake content. “But they can also be image and video analysis tools that can identify inconsistencies between facial expressions, lighting and lip movement; audio analysis tools that can identify speech patterns, voice characteristics that show differences or audio track parts that do not make any sense; and, finally, behavioural analysis tools that focus on the evaluation of aspects such as gestures and facial expressions,” he added.
According to Shaikh, even when deepfake generation models create near-identical replicas, these fakes can still be identified using deep learning techniques or specialised forensic procedures. The convolution procedures of nearly all deepfake generators leave visible traces in the image. While these traces can be found through meticulous forensic investigation, the volume of data being shared on social media these days demands quick, automated solutions.
“If a system does not implement automatic triage, forensic analysis becomes difficult due to the sheer volume of data involved in assessing deepfakes. Supervised deep learning techniques can readily identify the convolutional traces remaining in deepfake images if the user supplies sufficient training data. The investigating officer can then determine whether a video is fake using a deep learning technique based on MesoNet, a network designed to detect deepfake videos,” Shaikh said.
Administrative tasks
According to a Mumbai police official, apart from its use in investigations, AI tools are utilised in administrative tasks such as drafting content, preparing letters to send notices to accused individuals and witnesses, as well as generating letters seeking assistance from counterpart agencies. “It makes work easier and less time-consuming. Sometimes, we use apps like ChatGPT to generate questions to be asked to the accused in certain cases, making the work more efficient,” said a Crime Branch officer.
AI and the legal profession
According to advocate Aishwarya Kantawala, the legal profession in India is evolving with the advent of new technologies, including AI-powered tools like ChatGPT, and reliance on such tools is increasing. Among the advantages of using AI are prompt responses, efficiency and accuracy in information, accessibility and the fact that a lot of software is free. “The disadvantages are that there is a risk of inaccuracy or error; not all data is reliable, and some of it may be outdated. There is also a risk of job displacement and severe dependency,” she said.
“The limitations are language barriers, the non-digitisation of several documents and limited technological advancement among the majority of the populace. Although the Indian legal system is grasping the idea of AI, with software such as the Supreme Court Vidhik Anuvaad Software (SUVAS), an AI-powered translation tool, already in use, no guidelines exist on the use of AI in legal practice. Ultimately, AI should be used mindfully and responsibly, as one cannot thereafter shift the burden of responsibility for errors, issues or inaccuracies onto a software,” she added.
What is it?
It is a novel deepfake detector used for zeroing in on impostors during video conferencing and manipulated faces on social media. FakeBuster is a standalone deep learning-based solution that enables a user to detect if another person’s video is manipulated or spoofed during a video conference-based meeting.
How does it work?
It employs a 3D convolutional neural network for predicting video fakeness. The network is trained on a combination of datasets such as Deeperforensics, DFDC, VoxCeleb, and deepfake videos created using locally captured images (specific to video-conferencing scenarios).
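A toy example can show why a network spanning the time axis, like FakeBuster’s 3D convolutional network, helps with video: a temporal kernel responds to abrupt frame-to-frame changes that per-frame (2D) filters cannot see. The one-tap difference “kernel” and synthetic frames below are illustrative assumptions, not FakeBuster’s actual architecture.

```python
# Toy illustration of temporal analysis for video fakeness: real footage
# drifts smoothly between frames, while a spliced/spoofed frame causes an
# abrupt jump that a kernel spanning the time axis can pick up.
import numpy as np

def temporal_inconsistency(frames):
    """Mean absolute frame-to-frame difference -- a one-tap temporal kernel."""
    diffs = np.abs(np.diff(frames, axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(1)
base = rng.random((8, 8))
real = np.stack([base + 0.01 * k for k in range(6)])  # smooth drift over 6 frames
fake = real.copy()
fake[3] = rng.random((8, 8))                          # one spoofed frame spliced in
print(temporal_inconsistency(real) < temporal_inconsistency(fake))  # -> True
```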
Who does it affect/benefit?
Cyber experts and law enforcement agencies, who often use specialised apps while investigating deepfake cases