The increasing use of artificial intelligence (AI) by criminal organizations for nefarious ends, such as deepfakes, voice spoofing, and financial market manipulation, has alarmed the police.
Federal Commercial Crime Investigation Department (CCID) director Comm Datuk Seri Ramli Mohamed Yoosuf, in a report published in The Star today, cited a recent deepfake video purporting to show a Malaysian leader endorsing a get-rich-quick scheme as a clear example of this kind of AI manipulation.
A political figure would never endorse such a scheme, he said; the promises made in the video are simply too good to be true, marking it as an investment scam.
With AI already a reality, he said, it is critical that the public and law enforcement alike understand the risks it may pose in the near future.
“Today’s world has demonstrated how AI is progressively replacing roles and tasks that were previously performed by humans.
“Although certain individuals may be aware of the progress made in this area, others may remain ignorant of the possible hazards that accompany the rapid advancements in artificial intelligence,” he was quoted as saying.
Ramli added that as early as mid-2024, the police expect syndicates to use AI in their illegal operations against Malaysian citizens. “Once this happens, every online service in the financial industry could potentially face the risk of being infiltrated.
“AI has the potential to be used to develop algorithms that can breach computer systems, and other algorithms could be used to analyze data and manipulate outcomes, which could be used to impact or destroy financial markets,” he stated to The Star.
He went on to say that AI might be used to manipulate audio and video in sophisticated ways, leading to dangers such as identity theft and deepfake videos. “In this scenario, there are countless ways that criminal syndicates could trick individuals and organizations by using deepfake photos, videos, or voices.
According to the news outlet, he stated, “They could use such deepfakes in bogus kidnap-for-ransom cases, where they trick families into believing they have kidnapped a loved one. Some could even use it to create lewd or pornographic images of victims that could in turn be used to blackmail them.”
He added that criminal syndicates could use images or videos to create convincing false identities, impersonating people to obtain money or tricking victims into believing a family member was in danger.
According to Ramli, this could also be used to spread misinformation and propaganda, which could increase public unease. He was quoted as saying, “If the public is aware of the potential applications of AI, they will be extra cautious and not be easily duped by syndicates employing such tactics.”
According to a report by the Hong Kong Free Press, police in the Special Administrative Region uncovered a syndicate on August 25 that had used eight stolen identity cards to apply for 90 loans and open 54 bank accounts.
In a first-of-its-kind case for the territory, deepfake techniques were used at least 20 times to imitate the individuals pictured on the identity cards and fool facial recognition software. Six people were reportedly arrested in connection with the fraud.