
IBM Unveils an Operating Layer to Turn Enterprise AI Pilots into Scaled Systems

AI Spend Surges While Enterprise ROI Stalls

Enterprises are pouring resources into artificial intelligence, but results are lagging behind ambition. IBM’s CEO study finds that only around a quarter of AI initiatives deliver the expected return on investment, and just 16% have scaled across the enterprise. Other market research points to the same imbalance: while more companies talk about AI, only a minority can point to measurable business benefits. At the same time, large organizations are ramping up their spending on large language models, with average budgets rising significantly over the past two years and expected to jump again. This disconnect between escalating investment and limited proof of value has created an urgent need for a more disciplined operating model. IBM is positioning its new platform as a way to close that gap by making it simpler to move from isolated pilots to governed, repeatable AI-driven processes.

An Integrated Operating Layer for Enterprise AI Orchestration

At its Think conference, IBM introduced an operating layer that combines enterprise AI orchestration with data, automation and hybrid cloud capabilities. The core idea is to provide a unified control plane that coordinates AI agents from multiple sources while enforcing consistent policies, security and accountability. IBM’s next-generation watsonx Orchestrate, now in private preview, is central to this approach. It is designed to let enterprises deploy and manage a growing number of AI agents as a coherent system rather than as isolated tools. IBM Bob, now generally available, plays the role of an agentic development partner, helping teams build agents with built-in cost and security controls. Together, these tools aim to address the AI agent scaling problem by giving enterprises a way to standardize how agents are created, governed and operated across complex, distributed environments.
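The unified control plane concept can be illustrated with a minimal, library-free sketch: a registry that accepts agents from different sources and runs every task through the same policy checks before dispatch, logging the outcome for accountability. All class, field, and policy names below are invented for illustration and do not reflect any actual IBM or watsonx API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a control plane: agents from multiple sources
# register once, and every task passes the same policy checks before
# dispatch. Names here are illustrative, not IBM product APIs.

@dataclass
class Agent:
    name: str
    source: str                      # e.g. "in-house", "third-party"
    handler: Callable[[str], str]    # the agent's task function

@dataclass
class ControlPlane:
    agents: dict = field(default_factory=dict)
    policies: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def add_policy(self, check: Callable[[str], bool]) -> None:
        self.policies.append(check)

    def dispatch(self, agent_name: str, task: str) -> str:
        # Every agent, regardless of source, is held to the same rules.
        for check in self.policies:
            if not check(task):
                self.audit_log.append((agent_name, task, "blocked"))
                raise PermissionError(f"policy blocked task for {agent_name}")
        self.audit_log.append((agent_name, task, "allowed"))
        return self.agents[agent_name].handler(task)

cp = ControlPlane()
cp.register(Agent("summarizer", "in-house", lambda t: f"summary of {t}"))
cp.add_policy(lambda task: "ssn" not in task.lower())  # toy data-leak rule

print(cp.dispatch("summarizer", "Q3 report"))
```

The point of the sketch is structural: policy enforcement and audit logging live in one place, so adding a hundredth agent does not mean re-implementing governance a hundredth time.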

Real-Time Data and AI Operations Platform as the Second Pillar

Beyond orchestration, IBM is targeting the data and operations bottlenecks that often block AI from reaching production. Following its acquisition of Confluent, IBM is integrating real-time event streaming, built on Kafka- and Flink-based pipelines, with watsonx.data, along with a new context layer that applies semantic meaning and enforces governance at runtime. This is intended to support explainable AI decisions and make enterprise AI systems more transparent and auditable. IBM highlights a proof of concept with Nestlé, where a global data mart reportedly achieved 83% cost savings and a 30x price-performance improvement. On the operations side, IBM Concert, now in public preview, extends the same operating model into infrastructure and security operations, correlating signals across applications, infrastructure and networks without forcing enterprises to rip and replace existing tools. Concert Secure Coder aims to further embed security management and automated remediation into developer workflows.
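The idea of a context layer that tags streaming data with semantic meaning and enforces governance at runtime can be sketched without any streaming infrastructure at all. The field names, tags, and masking policy below are invented for illustration; a production system would apply the same pattern inside Kafka- or Flink-based pipelines rather than a plain Python loop.

```python
# Illustrative sketch only: a "context layer" that attaches semantic
# meaning to event fields and enforces governance rules at runtime,
# before events reach downstream AI consumers. All names are invented.

SEMANTIC_TAGS = {            # maps raw field names to business meaning
    "cust_email": "pii",
    "order_total": "financial",
}

POLICY = {"pii": "mask", "financial": "allow"}   # governance rule per tag

def apply_context(event: dict) -> dict:
    """Tag each field, then enforce the governance policy. Masked
    fields are redacted in place, so the event stays structurally
    intact and auditable for downstream systems."""
    out = {}
    for key, value in event.items():
        tag = SEMANTIC_TAGS.get(key, "untagged")
        if POLICY.get(tag) == "mask":
            out[key] = "***"
        else:
            out[key] = value
    return out

events = [{"cust_email": "a@example.com", "order_total": 42.5}]
governed = [apply_context(e) for e in events]
print(governed)  # PII masked, financial value passed through
```

Because the tagging and policy lookup happen per event at processing time, the same governance decisions apply uniformly however the data flows, which is the property that makes downstream AI decisions easier to explain and audit.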

Embedding Sovereignty and Compliance into Enterprise AI

Regulated sectors face an additional barrier to AI scaling: strict governance, sovereignty and compliance requirements. IBM’s Sovereign Core is designed to address this by embedding policy controls at the infrastructure runtime level. The platform promises workload portability across hybrid and partner environments while maintaining customer-operated control, in-boundary identity, encryption and data services. It also supports continuous compliance monitoring, audit evidence generation and governed AI execution, helping organizations demonstrate that AI workloads adhere to internal policies and external regulations. This focus on AI sovereignty compliance is timely, as organizations grapple with cross-border data flows and complex regulatory environments. By making sovereignty an integral part of the AI operating layer rather than an afterthought, IBM aims to give enterprises the confidence to move sensitive workloads from pilot stages into scalable, production-grade systems without undermining risk controls.

Managing Agent Sprawl Before It Derails AI Initiatives

As AI agents proliferate across business functions, governance challenges are multiplying. Analyst forecasts suggest that large enterprises could be running well over a hundred thousand agents within the next few years, yet only a small fraction believe they have appropriate governance frameworks in place. At the same time, a significant share of agentic AI projects is expected to be canceled due to rising costs, unclear business value or weak risk management. IBM’s operating-layer strategy is explicitly aimed at preventing this kind of agent sprawl. By tying together agent orchestration, real-time data, operations management and sovereignty controls, IBM wants to give enterprises a single AI operations platform rather than a patchwork of tools. This integrated approach is meant to help organizations move beyond experimentation, align AI initiatives with clear business outcomes and maintain control as agent deployments scale.
