
The New AI Security Arms Race: How Tech Giants and Startups Are Battling AI-Powered Cyber Threats

From Human-Centric Defenses to AI Security Agents

AI-powered cyberattacks are reshaping how defenders think about scale, speed and autonomy. At Google Cloud Next, the company positioned AI agents—not human analysts—as the only realistic way to keep pace with models like Anthropic’s Mythos, which promise to uncover software flaws at unprecedented rates. Google introduced new AI security agents inside its Security Operations platform to automate detection and incident response, alongside Wiz-powered multicloud protection and controls for the fast-expanding AI attack surface. The Gemini Enterprise Agent Platform aims to provide a defensive layer against shadow AI, where unsanctioned models and tools proliferate inside organisations. Together, these AI cybersecurity tools signal a pivot from dashboards and playbooks to continuously operating, semi-autonomous systems. The emerging assumption is that security operations centres will need fleets of cooperating agents to triage alerts, hunt threats and enforce policy in environments where both attackers and defenders are increasingly machine-augmented.

Securing the AI Software Supply Chain Becomes Board-Level Priority

As coding assistants and agents flood development pipelines with auto-generated code, the software supply chain has become one of the most critical front lines in the AI security arms race. Belfast-based Cloudsmith has captured that urgency, landing a USD 72 million (approx. RM331 million) Series C round to expand its role as a control layer for modern artifact management. The Cloudsmith AI software supply chain story reflects a broader concern: every application now depends on thousands of components, from open-source libraries to internal packages and third-party services. AI tools accelerate this dependency sprawl, raising the risk of insecure or malicious artifacts slipping into production. Regulators are increasingly demanding proof that AI-touched code is secure by design and traceable end to end. Cloudsmith’s bet is that visibility, provenance and policy enforcement across all software artifacts will become as fundamental to AI cybersecurity tools as endpoint protection or identity management.
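To make the idea of artifact-level policy enforcement concrete, here is a minimal sketch of a release gate that rejects components lacking signed provenance or coming from unapproved registries. All names, rules and the approved-source list are hypothetical illustrations, not Cloudsmith's actual API or policy model.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    version: str
    source: str            # registry or repository the artifact came from
    has_attestation: bool  # is a signed provenance record present?

# Hypothetical policy: every artifact must carry a provenance
# attestation and come from an approved source before release.
APPROVED_SOURCES = {"internal-registry", "pypi.org"}

def violates_policy(artifact: Artifact) -> list[str]:
    """Return the list of policy violations for one artifact."""
    violations = []
    if not artifact.has_attestation:
        violations.append("missing provenance attestation")
    if artifact.source not in APPROVED_SOURCES:
        violations.append(f"unapproved source: {artifact.source}")
    return violations

def gate_release(artifacts: list[Artifact]) -> dict[str, list[str]]:
    """Map each non-compliant artifact name to its violations.

    An empty result means the release may proceed.
    """
    return {a.name: v for a in artifacts if (v := violates_policy(a))}
```

The point of the sketch is the shape of the control layer: every dependency, whether human-written or AI-generated, passes through one auditable checkpoint before production.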

CrowdStrike’s Project QuiltWorks and the Vulnerability Deluge

Offensive AI is not only writing exploits; it is also poised to discover vast numbers of new vulnerabilities. CrowdStrike’s Project QuiltWorks is an explicit response to this looming wave. Triggered by Anthropic’s disclosure that tools like Claude Mythos could exponentially increase vulnerability discovery, QuiltWorks combines frontier AI models with CrowdStrike’s Falcon Spotlight to find and prioritise software flaws at massive scale. Early participants have reportedly identified tens of millions of vulnerabilities, prompting a shift from “Patch Tuesday” to near-continuous patching cycles. The initiative also integrates remediation guidance from major system integrators, acknowledging that discovery without rapid fix is no longer acceptable. CrowdStrike’s framing is telling: the specific AI model matters less than the “harness” around it—the workflows, context and guardrails that turn raw model power into an operational capability. In this view, AI becomes both the accelerant for new bugs and the only feasible tool to remediate them fast enough.
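When discovery runs into the tens of millions of findings, triage becomes a ranking problem: severity alone is not enough, so real-world risk signals must weight the queue. The sketch below illustrates that idea with a hypothetical composite score; the fields, weights and function names are illustrative assumptions, not how Falcon Spotlight or QuiltWorks actually scores vulnerabilities.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float         # base severity score, 0-10 (CVSS-like)
    exploit_available: bool  # is a working exploit known in the wild?
    internet_facing: bool    # is the affected asset reachable externally?

def priority(f: Finding) -> float:
    """Hypothetical composite score: amplify base severity by risk signals."""
    score = f.severity
    if f.exploit_available:
        score *= 1.5  # illustrative weight, not a published formula
    if f.internet_facing:
        score *= 1.3
    return score

def triage(findings: list[Finding], budget: int) -> list[Finding]:
    """Return the top-`budget` findings a continuous patching cycle fixes first."""
    return sorted(findings, key=priority, reverse=True)[:budget]
```

Under such a scheme a medium-severity bug with a public exploit on an internet-facing host can outrank a critical bug buried behind the perimeter, which is exactly the kind of context a "harness" around a raw model is meant to supply.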

When Everything Trains AI: Data, Privacy and Agent Governance

Defensive strategies are increasingly built on a stark premise: assume everything users do online may be used to train AI. Reports of companies planning to log keystrokes, emails and mouse movements to feed internal models illustrate how deeply AI training is encroaching on everyday digital activity. Past incidents, such as dating profiles being repurposed to train facial recognition systems, show how quietly this data repurposing can occur. At the same time, a new class of AI agents with delegated authority is emerging inside enterprises. These agents act on behalf of human and machine identities, inheriting fragmented permissions and access paths. Security researchers describe this as an “AI Agent Authority Gap”, where ungoverned delegation chains turn agents into amplifiers of hidden access and risk. The response is a push for continuous observability: real-time monitoring of which agent invoked what authority, under which conditions, and with what downstream impact.
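The "continuous observability" the researchers call for amounts to an append-only ledger of authority use: which agent acted, on whose behalf, exercising which permission. A minimal sketch of that record, with all identifiers and method names hypothetical, might look like this:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuthorityEvent:
    agent_id: str    # the AI agent that acted
    acted_for: str   # the human or machine identity it represents
    permission: str  # the authority invoked, e.g. "read:payroll"
    timestamp: float = field(default_factory=time.time)

class AuthorityLedger:
    """Append-only record of which agent invoked what authority, for whom."""

    def __init__(self) -> None:
        self.events: list[AuthorityEvent] = []

    def record(self, event: AuthorityEvent) -> None:
        self.events.append(event)

    def delegation_chain(self, permission: str) -> list[tuple[str, str]]:
        """Trace every (agent, principal) pair that exercised a permission."""
        return [(e.agent_id, e.acted_for)
                for e in self.events
                if e.permission == permission]
```

Querying the ledger per permission is what lets a security team surface the hidden delegation chains behind the "AI Agent Authority Gap" instead of discovering them after an incident.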

From Corporate Networks to Geopolitics: What CISOs Must Do Now

The AI security contest no longer stops at the enterprise firewall. A White House memo has accused foreign entities, principally based in China, of “industrial-scale” theft of advanced AI technology via model distillation campaigns, allegedly using tens of thousands of proxy accounts and jailbreaking techniques. This underscores that AI security spans corporate IP protection, cloud infrastructure and national strategic advantage. For CISOs, the implication is clear: traditional security playbooks are insufficient against AI-powered cyberattacks. In 2026, preparation means adopting AI-native tools such as Google AI security agents, supply chain control platforms like Cloudsmith, and vulnerability orchestration initiatives including CrowdStrike Project QuiltWorks. It also means tightening identity and access governance before deploying powerful agents, treating training data pipelines as high-value assets, and scenario-planning for adversaries with automated reconnaissance and exploit capabilities. In this new arms race, standing still is equivalent to falling behind.
