The Dark Shadow Shrine

If u need coaching in GP or 'O' level English, u can reach me at 91384570. In Singapore only hor....Scan QR code in profile pic for testimonials by ex-students; or click: https://tinyurl.com/4r3rf2wf

Wednesday, September 03, 2025

A new generation of ‘AI-native’ extremists is rising. Can they be stopped?


Consider the 2008 Mumbai terror attack.

The attackers didn’t just wield the machine guns and grenades that killed more than 160 people. They used then-nascent handheld GPS devices to navigate the Arabian Sea from Karachi, studied their targets on Google Earth (made public just three years earlier), and spoke to handlers in Pakistan via voice-over-internet protocol – the forerunner of today’s WhatsApp calls – bamboozling investigators unfamiliar with the technology.


What worries practitioners now is AI’s immediate utility in three areas:

1. producing propaganda at scale, fast and with wider reach;
2. supercharging mis- and disinformation by lowering costs and eroding trust; and
3. of greatest concern to security agencies, opening more channels for radicalisation and recruitment via chatbots and translation tools.


The latest report from the ISD noted two recent cases here. A 17-year-old ISIS supporter detained in September 2024 used an AI chatbot to generate a bai’ah (a pledge of allegiance, in the Islamic context) to ISIS, as well as a declaration of armed jihad against non-Muslims intended to incite others in Singapore to violence.


A 17-year-old far-right supporter detained in March 2025, meanwhile, queried an AI chatbot for instructions on producing ammunition and considered 3D-printing firearms for a local attack.


Recent events prove that reactive scrambling, rather than proactive safety measures, is the industry playbook. The modus operandi seems to be: wait for scandal, then patch.


After a recent Reuters investigation exposed Meta chatbots engaging with minors in “romantic or sensual” conversations, the company swiftly added safeguards.


OpenAI, meanwhile, is rolling out gentle reminders during long chats to reduce the risk of vulnerable users being steered towards self-harm. This follows a highly publicised wrongful-death complaint in San Francisco involving a 16-year-old boy who, after initially using ChatGPT for homework help, came to depend on the app for hours daily as a confidant and asked it for advice on suicide methods.


Even with laws like Singapore’s Online Criminal Harms Act – which can act swiftly against platforms that aid terror …


Qns:
1. To what extent should governments regulate the development and use of artificial intelligence? (NYJC Prelim 2025)
2. 'Technology has given us a false sense of hope in solving problems.' To what extent is this true? (RI Prelim 2024)