
What are digital arrests, the newest deepfake tool used by cybercriminals?

11 October 2024
EXPLAINER

Scammers are increasingly using deepfake technology to defraud people. Experts say awareness is key to preventing online scams.

A logo is seen during the AI for Good Global summit on artificial intelligence, organised by the International Telecommunication Union (ITU), in Geneva, Switzerland, May 30, 2024 [FILE: Denis Balibouse/Reuters]
By Dwayne Oxford | Published on 11 Oct 2024

An Indian textile baron has revealed that he was duped out of 70 million rupees ($833,000) by online scammers impersonating federal investigators and even the Supreme Court chief justice.

The fraudsters posing as officers from India’s Central Bureau of Investigation (CBI) called SP Oswal, chairman and managing director of the textile manufacturer Vardhman, on August 28 and accused him of money laundering.

For the next two days, Oswal was kept under digital surveillance: he was ordered to keep Skype open on his phone around the clock while he was interrogated and threatened with arrest. The fraudsters also staged a fake virtual court hearing, with a digital impersonation of Chief Justice of India DY Chandrachud presiding as the judge.

After the fake court delivered its “verdict” over Skype, Oswal paid the amount, not realising that he was the latest victim of an online scam with a new modus operandi known as a “digital arrest”.

So what is a digital arrest and what measures are required to stop it?

What exactly is a digital arrest?

A digital arrest is a new form of online fraud in which scammers convince victims that they are under a “digital” or “virtual” arrest and coerce them into staying connected with the scammer through video-conferencing software. The fraudsters then manipulate their targets into maintaining continuous video contact, effectively holding them hostage to their demands.

Like phishing, a digital arrest is a type of cyberattack that involves tricking individuals into revealing sensitive information, which can lead to identity theft, financial loss or data being stolen for malicious purposes. The techniques have become more sophisticated with the advent of AI-generated audio and video.

Phishing is a cyberattack in which an attacker impersonates a legitimate organisation or person to deceive the individual or organisation into divulging sensitive information.

The scammer will dangle the threat of an extreme loss, whether financial or some other legal consequence, while convincing the victim that they are “here to help”. Many victims are lulled or coerced into lowering their guard and following the scammer’s instructions.

What makes many of these scams seem legitimate is the use of video-conferencing software. Most scams are faceless, with the interactions happening through a simple phone call. With video-conferencing software, an individual using sophisticated deepfake video technology can appear as a completely different – and often real – person participating in the video call.

Moreover, with a snippet of audio, perhaps from a judge or high-level police officer, an audio AI engine can replicate a person’s voice, which can then be used by the scammer.

“This is just a newfangled spear-phishing, is the way I would put it, because it’s highly targeted and it shows far greater awareness of the victim’s circumstances than the old phishing, where some prince from somewhere says he needed to send money to the US and somehow, you’re the only way he can do it,” VS Subrahmanian, professor of computer science at Northwestern University, told Al Jazeera.

“So the phishing scams have gotten much more sophisticated and, in fact, there are words for these. Vishing is voice phishing, and smishing is phishing through SMS.”

What do we know about the SP Oswal story? Have other digital arrests happened?

In an interview with the news channel NDTV, Oswal said he received a call from an anonymous individual claiming there were financial irregularities in one of his bank accounts and that the account was linked to a case against Naresh Goyal, the former chairman of Jet Airways, who was arrested in September 2023 for laundering 5.3 billion rupees ($64m).

The fraudsters were able to convince Oswal to pay $833,000 to a specific bank account after issuing fake arrest warrants and fake Supreme Court documents stipulating the alleged amount owed.

Oswal submitted a complaint to local police after the incident. With help from cybercrime officials, Oswal was able to recover $630,000 of the $833,000. According to local police, this is the largest recovery in India for a case of this nature.

Although Oswal is the latest victim of such a scam, digital arrests have been on the rise in India in recent years. They gained traction around 2020, after many services moved online because of lockdowns during the COVID-19 pandemic.

Last month, an employee of the Raja Ramanna Centre for Advanced Technology (RRCAT), which comes under the Department of Atomic Energy, was defrauded of 7.1 million rupees (approximately $86,000) in a digital arrest.

In another incident last month, a senior official of the National Buildings Construction Corporation was duped out of 5.5 million rupees (approximately $66,000) over a WhatsApp video call after being accused of trafficking fake passports, illegal ATM cards and drugs.

Why are sophisticated deepfake AI video scams rising?

Although deepfake technology has been around since 2015, the use of deepfakes for fraudulent schemes has become more frequent and more sophisticated due to the acceleration of machine learning and various AI tools.

These new deepfake technologies allow a fraudster to embed anyone in the world into a video or photo, even adding matching audio through an AI-generated multimedia stream, and then pose as that individual on a video-conference call on platforms such as Zoom, Skype or Teams. Unless the host of the call has anti-deepfake software, the deepfake can be hard to spot.

According to a Wall Street Journal (WSJ) article published in March 2019, fraudsters used deepfake voice AI to defraud the CEO of a UK-based energy firm of 220,000 euros ($243,000).

Some deepfake software needs only 10 seconds to a minute of audio of a person talking to replicate the subject’s speech patterns, emotions and accent. AI voice software will even account for natural pauses, the inflexion of certain letters and voice pitch, making the replica virtually indistinguishable from the real person’s voice.
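
To illustrate the underlying idea rather than any particular product, the short Python sketch below (using the numpy and librosa libraries) reduces an audio clip to a compact “voice fingerprint” and compares two clips, the kind of representation that both cloning and detection pipelines build on. The mean-MFCC features, the cosine-similarity comparison and the synthetic sine-wave inputs are illustrative assumptions, not a description of any real scammer’s toolkit.

import numpy as np
import librosa

def voice_fingerprint(samples, sample_rate):
    # Average MFCC vector over the clip: a crude summary of vocal characteristics.
    mfcc = librosa.feature.mfcc(y=samples, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)

def similarity(a, b):
    # Cosine similarity between two fingerprints (closer to 1.0 means more alike).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two ten-second synthetic tones stand in for real recordings of the same speaker.
sr = 16000
t = np.linspace(0, 10, 10 * sr, endpoint=False)
clip_a = np.sin(2 * np.pi * 220 * t).astype(np.float32)
clip_b = np.sin(2 * np.pi * 220 * t + 0.1).astype(np.float32)
print(similarity(voice_fingerprint(clip_a, sr), voice_fingerprint(clip_b, sr)))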

According to a New York Times article, a caller last month posed as former Ukrainian Foreign Minister Dmytro Kuleba in a video-conference call with Senator Benjamin L Cardin, the chairman of the US Senate Foreign Relations Committee.

Although there was no monetary fraud, the incident highlights the danger that fraudulent actors could manipulate key political leaders to influence the outcomes of elections or high-stakes foreign policy initiatives.

Although incidents of digital arrest have happened in different countries around the world, these scams tend to be particularly pervasive in India because of a lack of awareness about deepfakes, according to Subrahmanian, the Northwestern University professor.

In addition, Subrahmanian said a significant part of India’s population operates exclusively on their mobile phones. “They think of the phone as something that they should trust, which provides good information. So when they get a call like this, they don’t necessarily distrust it right off the bat.”

He added that India’s telecommunication sector has failed to take cybersecurity seriously.

How can this be stopped?

Most deepfake software is created using a type of artificial intelligence (AI) model called a generative adversarial network (GAN). These GANs often leave unique “artefacts” behind in the deepfake.

Deepfake detection systems can pick up these artefacts, whether they are embedded in the video or in the audio.
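
As a rough illustration of what picking up an artefact can mean in practice, the Python sketch below flags an audio clip whose high-frequency energy profile looks unusual, one simplistic signal sometimes associated with synthetically generated speech. Real detectors are trained neural networks; the 8,000Hz cutoff and the 0.02 threshold used here are hypothetical placeholder values, not figures from any deployed system.

import numpy as np

def high_frequency_energy_ratio(samples, sample_rate, cutoff_hz=8000.0):
    # Fraction of total spectral energy that sits above cutoff_hz.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff_hz].sum() / total) if total > 0 else 0.0

def looks_suspicious(samples, sample_rate, threshold=0.02):
    # Flag clips whose high-frequency energy ratio exceeds the placeholder threshold.
    return high_frequency_energy_ratio(samples, sample_rate) > threshold

# One second of random noise stands in for audio captured from a video call.
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000).astype(np.float32)
print(looks_suspicious(clip, sample_rate=16000))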

As deepfake technology becomes more sophisticated, the detection systems will have to move in step with these innovations.

However, Subrahmanian suggested that relying only on deepfake-detection software is not enough. There will need to be awareness-building about these technologies, and possibly a global initiative similar to the General Data Protection Regulation (GDPR), the privacy law enacted by the European Union.

“One is to use existing agreements that already exist. So to give you an example, Interpol can put out warrants for people who are committing transnational scams, regardless of whether these scams are based on financial fraud through generative AI or something else.”

Organisations responsible for enforcing international laws and cooperation agreements need improved training and more effective tools, he said.

Source: Al Jazeera
