From Perimeter Defense to Inside the Agent Loop
AI agents are no longer just conversational front-ends; they now read email, manage financial workflows, execute code, and roam across sensitive accounts. This shift fundamentally changes the attack surface. Traditional defenses such as firewalls, WAFs, and HTTP proxies assume there is a clear request boundary to inspect before traffic reaches application logic. Agentic systems break that assumption. Tool handlers receive untrusted input as function arguments, queue consumers pull messages that never traverse a router, and multi-agent pipelines pass state through shared memory or workflow engines instead of network calls. As a result, malicious prompts, hidden instructions in fetched content, or context passed between agents can completely bypass classic perimeter tools. Security teams must accept that the critical control point has moved inside the agent execution environment itself, where unreviewed decisions, tool calls, and budget use actually occur.
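To make the shift concrete, here is a minimal sketch of validation living inside a tool handler itself, since no firewall or proxy ever sees these function arguments. All names (`fetch_url`, `ALLOWED_SCHEMES`, `BLOCKED_HOSTS`) are illustrative, not taken from any particular framework:

```python
# Sketch: the security check runs at the call site, inside the agent
# loop, because the argument never crosses a network perimeter.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
BLOCKED_HOSTS = {"169.254.169.254", "localhost"}  # e.g. cloud metadata, loopback

def fetch_url(url: str) -> str:
    """Hypothetical tool handler an agent might invoke with untrusted input."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"blocked scheme: {parsed.scheme!r}")
    if parsed.hostname in BLOCKED_HOSTS:
        raise ValueError(f"blocked host: {parsed.hostname!r}")
    return f"fetched {url}"  # a real handler would perform the request here
```

The point is placement, not sophistication: the same checks a WAF would apply to an HTTP request must now run wherever the agent actually consumes the value.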

Norton’s VPN for Agents and AI Agent Protection
Gen, Norton’s parent company, is responding to these new risks with its Agent Trust Hub, introducing VPN for Agents and expanding Norton AI Agent Protection within Norton 360. Unlike traditional VPNs built for people and devices, VPN for Agents is designed specifically for autonomous agents, whose traffic must be separated from human activity and strictly controlled. Its multi-tunnel technology lets agents operate across multiple locations simultaneously while shielding identity and location to reduce tracking and profiling, without requiring client software. Norton AI Agent Protection embeds directly into Norton 360 to monitor what supported agents do and where they connect, placing enforcement between an agent’s decision and execution. It adds checks before plugins, skills, and tools are invoked, blocks prompt-injection attempts, and scans code and files agents access or generate to detect malware and unsafe scripts. Together, these capabilities shift consumer protection from devices and networks to the autonomous software operating on users’ behalf.
Arcjet Guards: Security Where Agentic Code Actually Runs
Arcjet’s Guards tackle the blind spots that arise when AI agents execute business logic beyond any HTTP boundary. Guards enforce security policies directly inside agent tool handlers, queue consumers and workflow steps—precisely where untrusted inputs arrive but traditional WAFs and proxies have no visibility. In one cited incident, an agent fetched a maliciously crafted website that instructed it to send data to an external attacker; the upstream WAF protecting the chat interface never saw the attack. Guards aim to prevent this by inspecting tool results for prompt injection, blocking sensitive data in tool inputs and queue messages before it reaches third-party models, and enforcing per-user token budgets within agent loops. Because rules live in the same codebase as the features they protect, security reviews ship alongside application changes. This “security where the code lives” approach aligns the protection boundary with the real threat model of agentic systems.
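The three guard behaviors described above can be sketched in a few lines. This is hypothetical guard logic illustrating the pattern, not Arcjet’s actual API; the marker list, regex, and class names are all assumptions:

```python
# Hypothetical in-loop guards: scan tool results for injection markers,
# redact sensitive patterns from tool inputs, and enforce a per-user
# token budget — all inside the same codebase as the agent's features.
import re

INJECTION_MARKERS = ("ignore previous instructions", "send data to")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy sensitive-data pattern

class TokenBudget:
    """Per-user spend tracker checked on every loop iteration."""
    def __init__(self, limit: int):
        self.limit, self.used = limit, 0
    def charge(self, tokens: int) -> None:
        self.used += tokens
        if self.used > self.limit:
            raise RuntimeError("per-user token budget exceeded")

def guard_tool_result(text: str) -> str:
    """Reject fetched content that tries to issue instructions to the agent."""
    if any(m in text.lower() for m in INJECTION_MARKERS):
        raise ValueError("possible prompt injection in tool result")
    return text

def redact_input(text: str) -> str:
    """Strip sensitive data before it reaches a third-party model."""
    return SSN_RE.sub("[REDACTED]", text)
```

In the cited incident, a `guard_tool_result`-style check on the fetched page is exactly the inspection point the upstream WAF never had.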
AWS Kiro: Proving Requirements Before Agents Write Code
AI agent security isn’t just about runtime controls; it also depends on the correctness of what agents are asked to build. AWS’s Kiro AI coding tool introduces a Requirements Analysis feature that uses mathematical proofs to validate software specifications before any code is generated. Large language models translate natural-language requirements into formal logic, which is then checked by an SMT solver for contradictions and gaps. This prevents situations where vague prompts yield ambiguous specs, forcing AI agents to make hidden design decisions that developers never approved. By catching inconsistencies at the requirements level, Kiro reduces one of the hardest and most expensive classes of bugs—those embedded in the original spec. It also directly addresses concerns about giving coding agents too much autonomy, making sure the blueprint they follow is logically sound before they start producing code at machine speed.
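The core idea, encoding requirements as logical constraints and checking whether any assignment satisfies them all, can be shown with a toy propositional version. This is an illustration of the principle, not AWS’s implementation: a real SMT solver handles far richer theories, but the contradiction-detection goal is the same. The example requirements and names are invented:

```python
# Toy consistency check: brute-force a satisfying truth assignment
# over boolean requirement variables, the way an SMT solver does at
# scale. If none exists, the spec contradicts itself.
from itertools import product

def consistent(constraints, variables):
    """Return True if some truth assignment satisfies every constraint."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return True
    return False

# Invented spec: "deletes require an audit entry" vs. "bulk deletes
# skip auditing" — each reasonable alone, contradictory together.
reqs = [
    lambda e: (not e["delete"]) or e["audit"],           # delete -> audit
    lambda e: (not e["bulk_delete"]) or not e["audit"],  # bulk_delete -> no audit
    lambda e: (not e["bulk_delete"]) or e["delete"],     # bulk_delete is a delete
]
# Forcing the bulk-delete path exposes the contradiction:
forced = reqs + [lambda e: e["bulk_delete"]]
```

Catching the `forced` case before any code exists is precisely the class of spec-level bug the article says is hardest to fix after the fact.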

Redefining AI Agent Security for a Post-Perimeter World
Taken together, these efforts signal a broader redefinition of AI agent security. The critical risks now lie in autonomous agent workflows rather than in traditional network perimeters. Agent-based attacks can originate from malicious prompts, compromised tools, poisoned content, or flawed requirements—all operating inside systems that legacy defenses cannot see. New AI security tools are emerging to meet this reality: VPNs tailored for agents, runtime guards embedded in tool handlers, and formal analysis that hardens specs before agents act. For enterprises and developers, the implication is clear. Protecting AI agents means instrumenting their loops—monitoring decisions, validating inputs and outputs, enforcing budgets, and proving the logic they execute. Organizations that treat agents like independent, high-privilege software components rather than harmless assistants will be better positioned to manage autonomous agent vulnerabilities and safely unlock their productivity gains.
