Singapore’s IMDA Places X and TikTok Under Enhanced Supervision Over Online Safety Failures
Singapore’s Infocomm Media Development Authority (IMDA) has issued Letters of Caution to social media platforms X and TikTok after identifying significant weaknesses in how the services detect and remove harmful content. The regulator has placed both platforms under enhanced supervision following findings in its 2025 Online Safety Assessment Report. The review highlighted gaps in the detection and removal of child sexual exploitation material (CSEM) and terrorism-related content affecting Singapore users.
According to IMDA’s announcement on 31 March 2026, the regulator found a sharp rise in CSEM cases associated with X, alongside the emergence of terrorism-related material shared by Singapore-based TikTok accounts. Both types of content are considered severe harms under Singapore’s Code of Practice for Online Safety – Social Media Services (SMS Code), which requires designated platforms to proactively detect and remove such material before users encounter it.
Findings from Singapore’s 2025 Online Safety Assessment
The findings form part of IMDA’s second Online Safety Assessment Report covering designated social media services operating in Singapore. The report evaluates whether platforms have implemented sufficient safeguards to mitigate the risks posed by harmful online content.
In its review, IMDA identified a 120% increase in CSEM cases on X with a Singapore nexus, rising from 33 cases in 2024 to 73 in 2025. The cases involved the sharing of, or linking to, exploitative material, including self-generated content. IMDA noted that the material in all 73 cases breached X’s own policies but was removed only after the regulator alerted the platform.
The regulator also detected 17 instances of terrorism-related content shared by Singapore-based TikTok accounts in 2025. These posts mainly consisted of edited videos or audio referencing known transnational terrorist organisations. In some instances, TikTok initially determined that the reported material did not violate its community guidelines, removing it only after IMDA intervened.
Enhanced Supervision and Required Remedial Measures
Both platforms have accepted IMDA’s findings and committed to strengthening their detection and moderation systems. Planned improvements include using artificial intelligence to enhance automated detection tools and incorporating additional signals to identify harmful content proactively.
To ensure accountability, IMDA has placed X and TikTok under enhanced supervision. The platforms must provide regular updates on their progress in implementing the corrective measures and submit supporting data demonstrating effectiveness in their next annual online safety reports, due by 30 June 2026.
If the platforms fail to demonstrate sufficient improvements in addressing CSEM or terrorism-related content, IMDA may consider further regulatory action under the Broadcasting Act.
Broader Concerns About Child Safety on Social Media
Beyond the specific cases involving X and TikTok, the 2025 report highlights broader weaknesses in child safety protections across several designated social media services. IMDA found that Facebook, YouTube and HardwareZone had shortcomings in the effectiveness of their child safety measures, potentially allowing minors to access age-inappropriate content.
The comprehensiveness of safeguards also varied widely across platforms. Instagram and TikTok reported the most extensive child protection measures, while HardwareZone and X were assessed as having only baseline safeguards in place.
At the same time, most platforms improved the speed and effectiveness of their responses to user reports. Among these services, action rates for legitimate reports ranged from 54% to 93% in 2025, up from around 50% or less the previous year. TikTok was the only platform to record a decline, with its action rate falling from 39% in 2024 to 25% in 2025.
Evolving Regulatory Approach to Online Safety
IMDA stated that its primary objective is to maintain a safe online environment for users in Singapore, particularly children. The authority continues to engage social media platforms to address weaknesses, flag harmful content and respond to emerging risks.
The regulator has also been expanding its policy tools as digital risks evolve. Singapore has, for example, been strengthening governance frameworks around emerging technologies, from AI risk management frameworks in financial services to broader discussions on the responsible deployment of advanced systems such as agentic AI.
In 2025, IMDA introduced requirements for designated app distribution services to implement age assurance measures to prevent children from downloading applications that are unsuitable for their age. The authority is now studying how similar age assurance requirements could be extended to social media services.
IMDA said it will continue working with social media companies throughout the year while monitoring the effectiveness of safety measures and reviewing whether existing regulations remain fit for purpose as online risks evolve.