AI in Southeast Asia: As new frontier opens in scams and cyberbullying, authorities wage high-tech battle
Artificial intelligence is giving a worrying new face to cyberbullying and scams. In the latest of a regular series examining AI's development in the region, CNA finds out the state of play - and how defences are evolving.

The rapid rise of AI means deepfake videos could soon become easier to create, and harder to detect - with worrying global implications. (Illustration: iStock/wildpixel)
SINGAPORE: Investment opportunities being promoted by Singapore Prime Minister Lee Hsien Loong. Indonesia’s late president Suharto endorsing candidates from a political party in the recent elections.
As these videos circulated online, they were promptly called out as deepfakes - clips manipulated through artificial intelligence (AI).
But the rapid rise of AI means such doctored clips could soon become easier to create, and harder to detect.
This could spell big trouble for Southeast Asia as scammers exploit the technology, experts warn - and signs of such exploitation are already emerging.
Deepfakes and AI in general are also giving a worrying new face to cyberbullying, potentially worsening an already persistent issue in the region.
That being said, counter-efforts are advancing as well - and with the help of AI to boot. But that’s only a piece of the overall puzzle, analysts say.
AI-POWERED CRIME
Cybersecurity firms and analysts told CNA they have noticed the increased use of AI in crime in Southeast Asia.
Increased access to the technology has led to more “sophisticated fraudulent practices”, said Assistant Professor Nydia Remolina Leon at Singapore Management University’s (SMU) Yong Pung How School of Law.
She pointed out a notable trend of scammers using AI for celebrity impersonations targeting more vulnerable groups, such as older and less tech-savvy people.
This has been observed in the US. Hollywood star Tom Hanks was deepfaked to promote a dental plan, while an AI-generated version of singer-songwriter Taylor Swift peddled cookware sets.
Here in the region, cybersecurity experts referenced the recent videos involving deepfakes of Singapore’s leaders to push investment scams.
According to a report by UK verification platform Sumsub, the number of deepfakes increased by a whopping 1,530 per cent in the Asia-Pacific region between 2022 and 2023. The Philippines and Vietnam made up the top two spots on the list. The report did not state the absolute number of deepfakes.
Sumsub cited Vietnam’s growing digital economy and large online population as factors that make the country an attractive target for fraudsters.
Slightly farther afield in Hong Kong, a multinational company lost HK$200 million (US$25.5 million) after scammers posed as the company’s chief financial officer and other colleagues on a Zoom call. Everyone on the call, except the victim, was a deepfake recreation.
Videos could be the “next frontier” for generative AI, as chatbots and image generators have made their way to consumers and businesses, said Mr Jonas Walker, director of threat intelligence at FortiGuard Labs, the in-house security research and response team at cybersecurity firm Fortinet.
OpenAI’s new AI model Sora made global waves when it was unveiled in mid-February, demonstrating photorealistic videos generated solely from text prompts.
While noting that the creative opportunities will “excite AI enthusiasts”, Mr Walker also cautioned that the new technologies present “serious misinformation concerns”.
WORRYING NEW FACE TO CYBERBULLYING
AI is also giving a worrying new face to cyberbullying, experts warn, even if few cases have been reported in the region so far.
The worry is that the technology could worsen the cyberbullying situation in Southeast Asia, where it is already a real and present problem.
In Singapore, a recent government survey found that about one in five youths aged 13 to 18 who play online video games has experienced in-game bullying from other players.
A 2022 survey in Malaysia polling more than 33,000 secondary school students showed that one in five teenagers aged 13 to 17 has bullied and harassed someone through the internet and mobile devices.
And in Vietnam, Microsoft’s research in 2020 showed that more than five in every 10 internet users in the country were involved in cyberbullying, local media reported.
“Like its physical counterpart, cyberbullying can take a profound toll on the physical and mental health of its victims,” Mr Walker told CNA.
“The increasing ease of use of and access to AI through large language models has made this problem worse.”
In the span of just minutes, generative AI can analyse a target’s social media posts, online activities or personal information to generate highly specific and threatening messages or content, he added.
“Before AI, individuals bullying others online would have to take the time to craft posts and messages, and risk being identified and held to account for their actions.
“But AI and its inherent automation have significantly widened the scope, severity and speed at which cyberbullying can now take place.”
What is even scarier is that generative AI is “continually learning”, and this can intensify online abuse, Mr Walker added.
FIGHTING CYBERCRIME WITH AI
As scams and cyberbullying powered by AI loom large, cybersecurity firms and governments are working to stay ahead - with the help of the very same technology they are defending against.
Singapore’s Home Team Science and Technology Agency (HTX) is currently developing a suite of AI-assisted assessment tools.
These include tools for detecting malicious websites and content, “website defacement”, as well as deepfake audio and video.
The agency is also looking into large language models, which have led to an increase in the “potency and proliferation of phishing scams”, said Ms Cheryl Tan, deputy director of sense-making and surveillance centre of expertise at HTX.
Cybersecurity firms CNA spoke to have also adopted AI to defend against increasingly sophisticated cyber attacks.
An example is the analysis of behaviour patterns to detect anomalies that can indicate potential attacks, said Mr Johan Fantenberg, principal solutions architect for Asia Pacific and Japan at Ping Identity.
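To picture the idea in broad strokes, below is a minimal sketch of behaviour-based anomaly detection. It is a generic illustration built on scikit-learn's IsolationForest, not Ping Identity's actual system, and the behavioural features - login hour, session length and failed login attempts - are assumptions chosen purely for illustration.

```python
# A minimal, generic sketch of behaviour-based anomaly detection using
# scikit-learn's IsolationForest. This is NOT any vendor's real system;
# the features (login hour, session minutes, failed logins) are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" behaviour for one user: [login hour, session minutes, failed logins]
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.normal(30, 10, 500),  # sessions last roughly half an hour
    rng.poisson(0.2, 500),    # failed attempts are rare
])

# Fit the detector on historical behaviour, expecting about 1% outliers
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3am login with a long session and many failed attempts stands out
suspicious = np.array([[3, 180, 8]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

In production, such models would draw on far richer signals - device fingerprints, geolocation, network metadata - and feed risk engines rather than flagging users directly.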
Another cybersecurity firm, Infoblox, also makes use of AI to help security teams detect threats.
“On an average day, security teams could look at anywhere from 500,000 to a million security reports, varying from false positives to serious threats,” said Mr Paul Wilcox, vice president of Infoblox Asia Pacific and Japan.
Infoblox uses AI-driven analytics to help distil that number to a much more manageable figure, allowing security teams to concentrate their attention on genuine threats.
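One simple way to picture this kind of triage is to score each incoming report with a model trained on past analyst decisions and surface only the highest-risk slice. The sketch below is a hypothetical illustration with made-up features and a toy labelling rule - it is not Infoblox's actual analytics pipeline.

```python
# A hypothetical sketch of AI-assisted alert triage: rank a flood of security
# reports by predicted risk so analysts review the likeliest true positives
# first. Features and labels are invented; this is not Infoblox's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Simulated historical alerts: [domain age in days, requests/min, threat-feed hits],
# labelled by past analyst review (1 = genuine threat, 0 = false positive)
X_hist = rng.random((2000, 3)) * [3650, 100, 5]
y_hist = ((X_hist[:, 0] < 300) & (X_hist[:, 2] > 2)).astype(int)  # toy labelling rule

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Score today's flood of reports and keep only the riskiest slice for humans
X_today = rng.random((500_000, 3)) * [3650, 100, 5]
scores = model.predict_proba(X_today)[:, 1]
top = np.argsort(scores)[::-1][:1000]  # distil ~500,000 reports to ~1,000
print(f"Escalating {len(top)} of {len(X_today):,} reports for review")
```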
AI in cybersecurity is “increasingly critical” to protecting online systems, said cybersecurity firm Fortinet. If used correctly, AI systems can be trained to detect threats automatically, identify new strands of malware and protect sensitive data, it added.
“However, organisations also need to be aware that cyber criminals adjust their methods to resist new AI cybersecurity tools,” said Fortinet in an article on its website.
While some governments in the region are establishing rules to deal with the potential misuse of AI, these may not be enough to deter criminals, Mr Wilcox from Infoblox noted.
“Proactive early detection for crime prevention is far more effective than responding to cyber threats only when they happen,” he added.
Still, having AI rules is better than none, analysts pointed out, highlighting a regional guide on AI governance and ethics that was launched this month.
Despite being voluntary, the guidelines by the Association of Southeast Asian Nations (ASEAN) are likely to influence organisations as well as policymakers, said Mr Benjamin Wong, a lecturer at the National University of Singapore’s Faculty of Law.
Mr Wong added that the endorsement of the AI guidelines by ASEAN member states shows that governments are aware of the risks, and are aligned on principles including transparency, security, privacy and data governance.
THE ROAD AHEAD
Analysts point out that cooperation is key as bad actors up their game through AI.
Asst Prof Remolina from SMU called for a more “complex, multi-faceted strategy” to combat increasingly sophisticated fraud.
This should integrate financial and technology literacy on the consumer’s end, as well as advanced tools to detect AI-manipulated content, she told CNA.
On a global scale, financial and law enforcement agencies must work together to address such threats, much like they do with other risks that affect the financial sector, Asst Prof Remolina added.
At the same time, individual efforts also form a piece of the puzzle.
It is vital for people to “keep abreast of digital trends” and have the skills to navigate the digital age, said Mr Shem Yao, a manager of digital wellness at Singapore non-profit agency TOUCH Wellness Group.
For parents and students, beyond an understanding of AI, Mr Yao also emphasised the importance of discussing its ethical use.
“Guidance from parents is pivotal in shaping children's digital experiences,” he said.
“While AI deepfakes are a relatively new phenomenon, the concept of cyberbullying and cyber scams is not.”
He pointed out TOUCH’s belief that tech tools and platforms on their own are “neither good nor bad”, citing instances where deepfakes were used to help grieving individuals connect with deceased loved ones and have a sense of closure.
“What matters is the intention behind the creation and the application of it.”