AI & Emerging Tech
Singapore warns against unrestricted use of OpenClaw AI agents on sensitive systems

Singapore’s Infocomm Media Development Authority (IMDA) has issued a strong warning to organisations and consumers against granting the rapidly growing AI platform OpenClaw unrestricted access to sensitive systems, files and applications, citing mounting cybersecurity, operational and data governance risks.
The advisory marks the first formal warning by IMDA regarding OpenClaw deployments in Singapore and reflects growing international concern over “agentic AI” systems capable of autonomously performing complex multi-step digital tasks.
According to the authority, poorly configured OpenClaw implementations could cause systems to “run amok”, potentially disrupting business operations, halting transactions and exposing confidential corporate and personal data.
Developed by Austrian software engineer Peter Steinberger and launched in November 2025, OpenClaw has rapidly gained global attention as an AI-powered personal assistant platform. The tool allows users to connect large language models such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude to workplace tools, messaging services and email systems to automate workflows.
IMDA said the platform is increasingly being used across enterprise environments for tasks such as customer support responses, business reporting, software debugging and workflow coordination. While acknowledging productivity benefits, the authority stressed that OpenClaw currently lacks sufficient built-in safeguards and requires careful deployment planning.
“Deploying OpenClaw safely requires careful set-up, particularly given the limited built-in security controls,” IMDA said. “Users should understand the risks involved and be prepared to implement appropriate guard rails themselves.”
The advisory highlighted several risks associated with OpenClaw, including weak authentication measures, insufficient access controls, inadequate testing and potential exposure of sensitive information to external systems.
Citing intelligence platform OpenCVE, IMDA said roughly a quarter of the more than 400 reported OpenClaw vulnerabilities and exposures as of April were classified as high severity, potentially enabling data theft and operational disruption.
The authority warned that OpenClaw inherits the privileges of the user account that installs it, meaning the AI agent may gain unrestricted access to files, applications and internal systems available to that user.
IMDA also raised concerns over integrations with workplace collaboration platforms such as Slack. According to the advisory, OpenClaw connected to Slack channels could execute instructions posted by any participant without additional authentication safeguards, creating opportunities for accidental or malicious actions.
To reduce risks, IMDA recommended restricting posting permissions within connected channels and introducing approval workflows requiring explicit human authorisation before sensitive actions are executed.
IMDA warned that many publicly available OpenClaw skills had not undergone proper testing and may contain malicious code, hidden instructions or malware. The advisory referenced reports involving the malware Atomic macOS Stealer, which had reportedly been disguised as OpenClaw tools including cryptocurrency wallet trackers, YouTube downloaders and workplace utilities.
“Many skills on public marketplaces like ClawHub are currently flagged as malicious,” IMDA stated.
The authority urged users to install only trusted skills from verified publishers whose source code is publicly inspectable and actively maintained.
“Skills that lack transparent source code, verifiable provenance, recent maintenance activity, or that request permissions beyond their stated purpose should be treated as higher risk and avoided,” IMDA said.
IMDA advised organisations against creating a single “all-powerful” OpenClaw agent with unrestricted access across systems and applications. Instead, it recommended deploying multiple narrowly scoped agents dedicated to specific functions such as scheduling, coding or administrative tasks.
The authority also urged users to avoid installing OpenClaw on primary workstations or personal devices containing highly sensitive information.
Additional recommendations included implementing human approval mechanisms for high-risk activities such as financial transactions, data deletion and infrastructure changes, as well as creating separate digital identities for AI agents rather than reusing employee credentials.
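The approval mechanism IMDA describes can be sketched as a simple gate that blocks high-risk actions unless a human explicitly authorises them. This is an illustrative sketch only: the names (`ApprovalGate`, `HIGH_RISK_ACTIONS`, the example actions) are hypothetical and not part of any real OpenClaw interface.

```python
# Hypothetical sketch of a human-approval gate for agent actions,
# in the spirit of IMDA's recommendation. None of these names come
# from OpenClaw itself.

# Action categories IMDA flags as high risk.
HIGH_RISK_ACTIONS = {"financial_transaction", "data_deletion", "infra_change"}

class ApprovalGate:
    def __init__(self, approve_fn):
        # approve_fn asks a human and returns True/False; it is injected
        # so it could be a CLI prompt, a chat message or a ticket workflow.
        self.approve_fn = approve_fn
        self.audit_log = []

    def execute(self, action, handler, **kwargs):
        # High-risk actions run only with explicit human authorisation.
        if action in HIGH_RISK_ACTIONS and not self.approve_fn(action, kwargs):
            self.audit_log.append((action, "denied"))
            return None
        self.audit_log.append((action, "executed"))
        return handler(**kwargs)

# Deny-by-default: the agent cannot delete data without a human saying yes.
gate = ApprovalGate(approve_fn=lambda action, args: False)
result = gate.execute("data_deletion", lambda path: f"deleted {path}", path="/tmp/x")
assert result is None  # blocked: no approval was granted
low = gate.execute("send_summary", lambda: "ok")  # low-risk: runs directly
assert low == "ok"
```

Pairing a gate like this with separate agent identities, rather than reused employee credentials, also keeps the audit trail attributable to the agent rather than a person.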
“Managed identity for agents should be recognised as a foundational control layer, particularly as agents increasingly act as proxies for human users across systems,” IMDA said.
The advisory is based on Singapore’s Model AI Governance Framework for Agentic AI released earlier this year and incorporates input from the Government Technology Agency of Singapore, Cyber Security Agency of Singapore, Grab, Microsoft and Tencent.
Singapore’s warning comes amid rising global scrutiny surrounding OpenClaw and other autonomous AI systems over concerns involving cybersecurity, unauthorised communications and data governance.
In March 2026, Chinese authorities reportedly instructed government agencies and state-owned enterprises to avoid installing OpenClaw on office devices due to concerns about cyberattack risks and external data exposure.
Despite the warnings, interest in OpenClaw remains strong in Singapore, where more than 20 community-led events focused on the platform have reportedly been held, attracting developers, entrepreneurs and technology professionals exploring practical AI applications.