Why Enterprises Are Rushing Into Agentic AI
Across sectors, autonomous and semi-autonomous AI agents are moving from experiments to embedded infrastructure. Instead of static chatbots, enterprises are piloting systems that browse the web, query internal data, trigger workflows and even chain actions across HR, finance and operations. Regulators in the UK note that agentic AI is already being explored for “life admin” tasks, automated reporting, customer engagement and monitoring business-critical metrics such as spoilage, while accountancy firms see potential gains in efficiency and quality. At the same time, research from Kroll shows the innovation–security gap is widening: 76% of organisations have experienced a security incident involving AI applications or models in the past two years, and over a quarter report costs exceeding USD 1 million (approx. RM4.6 million). As AI agents become integral to coding, research, customer service and internal automation, they are also quietly reshaping the organisation’s risk profile.

Indirect Prompt Injection: When the Web Rewrites Your AI’s Orders
Traditional AI security has focused on blocking users from typing “ignore previous instructions” into chatbots. But Google researchers warn that indirect prompt injection is now a major security threat to AI agents. Malicious actors hide invisible instructions in public web pages (in white text, HTML comments or metadata) that only an AI agent will see when it scrapes the page. Once ingested, the agent treats those strings as authoritative commands. A hiring agent asked to summarise a candidate’s portfolio could instead be coerced into exfiltrating an internal employee directory while still outputting a polished candidate review. Because the traffic looks like normal browsing and email activity triggered by a trusted system, standard URL filters, firewalls and endpoint tools often miss it. As agentic AI gains deeper hooks into enterprise systems, indirect prompt injection becomes not just a data quality issue but a live data theft and manipulation vector.
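To make the hiding techniques concrete, the sketch below scans HTML before an agent ingests it, flagging elements styled to be invisible and instruction-like phrases in comments or text. The class name, function name and pattern list are illustrative assumptions, not a production defence; real attackers will vary their wording, so pattern matching is only one layer.

```python
import re
from html.parser import HTMLParser

# Hypothetical examples of instruction-like phrases; attackers will vary these.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
]

class HiddenTextScanner(HTMLParser):
    """Collects page text and comments, and counts elements styled to be invisible."""

    def __init__(self):
        super().__init__()
        self.comments = []
        self.text_parts = []
        self.hidden_elements = 0

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        # Crude check for common hiding tricks: display:none, zero-size or white text.
        if re.search(r"display:\s*none|font-size:\s*0|color:\s*#?fff", style, re.I):
            self.hidden_elements += 1

    def handle_comment(self, data):
        self.comments.append(data)

    def handle_data(self, data):
        self.text_parts.append(data)

def flag_injection_risk(html: str) -> list:
    """Return human-readable findings for content an agent is about to ingest."""
    scanner = HiddenTextScanner()
    scanner.feed(html)
    findings = []
    if scanner.hidden_elements:
        findings.append(f"{scanner.hidden_elements} hidden element(s) with text")
    haystack = " ".join(scanner.comments + scanner.text_parts)
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, haystack, re.I):
            findings.append(f"instruction-like pattern: {pattern!r}")
    return findings
```

A scanner like this is best treated as a tripwire that routes suspicious pages to human review, not as a filter the agent silently trusts.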

Regulators: AI Agents Are Inside Existing Rules, Not Outside Them
The UK’s Digital Regulation Cooperation Forum has made one point clear: AI agents do not sit outside current regulatory regimes. Obligations around transparency, fairness, safety, consumer protection and competition still apply as agentic AI develops, including in data-rich areas like financial services and professional services such as accountancy. The DRCF highlights multiple compliance risks, starting with fragmented accountability across the AI value chain—model providers, system integrators and downstream deployers may each own part of a failure, complicating responses when things go wrong. For organisations using AI agents to chain steps across enterprise systems or pull data from multiple sources, that fragmentation can mask who is responsible for privacy breaches, biased decisions or misleading communications to consumers. For boards and regulators, AI regulatory compliance is no longer hypothetical: once an AI agent touches customer data, makes recommendations or automates communications, existing laws and expectations are already engaged.

Attorney–Client Privilege and Public AI Tools: A Legal Reality Check
A recent US federal court decision in United States v. Heppner underscores how using public generative AI tools in legal contexts can erode protections. The defendant used Anthropic’s Claude to analyse information and prepare for a potential indictment, then shared that AI-generated material with his lawyer. When prosecutors sought those documents, he argued they were covered by attorney–client privilege and the work product doctrine. The court disagreed, finding no privileged attorney–client communication and no protected work product, and ordered disclosure. For businesses, this is a warning sign: copying draft contracts, litigation strategies or regulatory correspondence into public AI platforms may create discoverable material without any privilege shield. In-house counsel and compliance teams must now treat questions about attorney–client privilege and AI as core governance issues, setting explicit rules on which tools can be used, for what purposes, and how sensitive information is handled or anonymised.

Building Enterprise AI Governance Before the Agents Run Ahead
Taken together, these trends map a stark risk landscape: indirect prompt injection can drive data exfiltration and manipulated outputs; weak AI regulatory compliance can trigger enforcement in data protection and financial services; and careless AI usage can waive legal privilege. Kroll’s findings that 76% of organisations suffered AI incidents, with many incurring costs above USD 1 million (approx. RM4.6 million), show the stakes are already material. Enterprises—especially Malaysian and regional firms now piloting AI agents—should move early on governance. Priorities include strict data classification and role-based access, limiting which systems agents can reach; preferring internally hosted or tightly controlled models for sensitive workflows; rigorous logging and auditing of agent actions; and structured staff training on safe AI usage. Adopting these controls before large-scale rollouts will put organisations ahead of emerging regulations, and reassure global clients that innovation is matched by robust risk management.
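Two of the controls listed above, limiting which systems an agent can reach and logging every agent action, can be sketched together as a gatekeeper the agent must pass through. The role names, system names and log format below are illustrative assumptions under a simple role-based model, not a prescribed design:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

# Hypothetical per-role allowlist: each agent role may only reach named systems.
ROLE_ALLOWLIST = {
    "hr_agent": {"hris", "recruiting_portal"},
    "finance_agent": {"erp", "invoicing"},
}

def authorise_and_log(role: str, system: str, action: str) -> bool:
    """Return True only if the role may touch the system; audit the decision either way."""
    allowed = system in ROLE_ALLOWLIST.get(role, set())
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "system": system,
        "action": action,
        "allowed": allowed,
    }
    # Denied attempts are logged too: they are often the most useful audit signal.
    audit_log.info(json.dumps(record))
    return allowed
```

Routing every agent action through a choke point like this gives auditors a complete trail and gives security teams a single place to tighten access as regulations firm up.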
