IBM Unveils New Operating Layer to Unlock Enterprise AI at Scale

AI Spend Surges While Enterprise ROI Stalls

IBM is positioning its latest AI portfolio as an answer to a widening gap: enterprises are pouring money into AI, but many are not seeing returns. IBM’s own CEO study reports that only around a quarter of AI initiatives deliver the expected ROI, and just 16% have scaled across the enterprise. Other industry research paints a similar picture, with relatively few large companies publicly citing tangible AI benefits even as capital expenditure forecasts climb sharply. At the same time, surveys of senior leaders show average spending on large language models rising quickly and projected to grow further in the near term. This tension between escalating investment and modest payoff is driving demand for an approach that can move AI from experiments to robust, governed production systems—precisely the space IBM aims to occupy with its new operating layer for enterprise AI orchestration.

An AI Operating Layer Built Around Agents, Data and Automation

At its Think conference, IBM introduced an AI operating layer designed to help enterprises standardise how they run AI at scale. The company frames the platform around four pillars: agents, data, automation and hybrid cloud. Central to this is the next-generation watsonx Orchestrate, described as an “agentic control plane” that lets organisations deploy AI agents from multiple sources while enforcing consistent policies and accountability. This makes the platform function as an agent orchestration tool rather than just another collection of models. IBM Bob, now generally available, extends the concept by acting as an agentic development partner that helps teams build AI agents with built‑in security and cost controls. Together, these offerings aim to give enterprises a common control surface to manage agent sprawl, apply governance uniformly, and turn isolated pilots into reusable, scalable AI services embedded into core workflows.
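IBM has not published the internals of watsonx Orchestrate, but the "agentic control plane" idea it describes — one registry that admits agents from multiple sources and applies the same policies and audit trail to every invocation — can be illustrated with a minimal sketch. All names here (ControlPlane, Agent, no_pii_policy) are hypothetical, not IBM APIs:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """An AI agent from any source (in-house, vendor, open source)."""
    name: str
    source: str
    run: Callable[[str], str]

@dataclass
class ControlPlane:
    """Registers agents and enforces one policy set on every call."""
    policies: list = field(default_factory=list)
    _agents: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self._agents[agent.name] = agent

    def invoke(self, name: str, task: str) -> str:
        agent = self._agents[name]
        for policy in self.policies:   # uniform governance, regardless of agent source
            policy(agent, task)        # a policy raises on violation
        result = agent.run(task)
        self.audit_log.append((agent.name, agent.source, task))  # accountability trail
        return result

def no_pii_policy(agent: Agent, task: str) -> None:
    """Example policy: block tasks that reference restricted data."""
    if "ssn" in task.lower():
        raise PermissionError(f"{agent.name}: task contains restricted data")

cp = ControlPlane(policies=[no_pii_policy])
cp.register(Agent("summariser", "vendor-x", lambda t: t.upper()))
print(cp.invoke("summariser", "quarterly report"))  # QUARTERLY REPORT
```

The point of the pattern is that governance lives in the plane, not in each agent: a vendor agent and an in-house agent pass through the same policy checks and leave the same audit evidence.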

Real-Time Data and Intelligent Operations as the Scaling Backbone

IBM’s operating layer leans heavily on a reworked data and operations stack to address one of the biggest blockers to AI pilot scaling: timely, governed access to data. Following its acquisition of Confluent, IBM is tying real-time event streaming directly into watsonx.data and Kafka- and Flink-based pipelines. A new context layer adds semantic meaning and enforces governance at runtime, aiming to support more explainable AI decisions without sacrificing speed. IBM cites a proof of concept with Nestlé that reportedly achieved substantial cost savings and a dramatic price‑performance improvement on a global data mart spanning 186 countries, underscoring the performance upside of this architecture. On the operations side, IBM Concert extends the same operating model into infrastructure and security operations, correlating signals across applications, infrastructure and networks while working alongside existing tools rather than replacing them.
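The "context layer" concept — attaching semantic meaning to raw event fields and enforcing governance as data is read, rather than in a batch job afterwards — can be sketched without the Kafka/Flink machinery. This is a pure-Python illustration of the pattern, not the watsonx.data API; the field names and tags are invented for the example:

```python
from dataclasses import dataclass
from typing import Iterator

# Semantic context: maps raw field names to business meaning and policy tags.
CONTEXT = {
    "cust_em": {"meaning": "customer_email", "tags": {"pii"}},
    "ord_amt": {"meaning": "order_amount_eur", "tags": set()},
}

@dataclass
class Event:
    fields: dict

def context_layer(stream: Iterator[Event], allowed_tags: set) -> Iterator[dict]:
    """Annotate each event with semantic names and drop fields whose
    governance tags the consumer is not cleared for, at read time."""
    for event in stream:
        enriched = {}
        for key, value in event.fields.items():
            meta = CONTEXT.get(key)
            if meta is None:
                continue  # unknown fields are never passed through
            if meta["tags"] - allowed_tags:
                continue  # consumer lacks clearance for this tag
            enriched[meta["meaning"]] = value
        yield enriched

events = [Event({"cust_em": "a@b.com", "ord_amt": 42.0})]
print(list(context_layer(iter(events), allowed_tags=set())))
# [{'order_amount_eur': 42.0}]
```

Because the filtering and renaming happen per event in the stream, a downstream AI decision can be explained in business terms ("order_amount_eur") while governed fields never leave the layer for unauthorised consumers.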

Embedding Sovereignty and Governance into the AI Stack

To tackle rising concerns over governance, compliance and operational control, IBM is adding a sovereignty layer to its AI operating stack. IBM Sovereign Core is pitched as a way to embed policy at the infrastructure runtime level and enable workload portability across hybrid and partner environments. The platform is designed to give enterprises customer‑operated control, in‑boundary identity, encryption and data services, alongside continuous compliance monitoring, audit evidence generation and governed AI execution. This approach is meant to address a looming risk: analyst forecasts suggest that a significant share of agentic AI projects could be cancelled due to cost overruns, unclear business value or inadequate risk controls. With some projections indicating that large enterprises could be running tens of thousands of agents within a few years, IBM’s sovereignty tools aim to ensure that scale does not come at the expense of control.
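Sovereign Core's internals are not public, but the pattern IBM describes — policy evaluated at the runtime boundary before a workload executes, with audit evidence recorded either way — is straightforward to sketch. The policy fields and function names below are assumptions made for illustration:

```python
import json
import time

# Declarative sovereignty policy, expressed as plain data for illustration:
# which regions are inside the sovereign boundary, and whether encryption
# is mandatory for in-boundary data.
POLICY = {"allowed_regions": {"eu-de", "eu-fr"}, "require_encryption": True}

def run_governed(workload, *, region: str, encrypted: bool, evidence: list):
    """Evaluate policy before execution and record audit evidence either way."""
    violations = []
    if region not in POLICY["allowed_regions"]:
        violations.append(f"region {region} outside sovereign boundary")
    if POLICY["require_encryption"] and not encrypted:
        violations.append("data not encrypted in boundary")
    record = {"ts": time.time(), "region": region,
              "allowed": not violations, "violations": violations}
    evidence.append(json.dumps(record))  # continuous compliance evidence
    if violations:
        raise PermissionError("; ".join(violations))
    return workload()

evidence: list = []
result = run_governed(lambda: "model output", region="eu-de",
                      encrypted=True, evidence=evidence)
```

The key property is that a blocked workload still leaves an evidence record, so compliance monitoring sees denials as well as successful, governed executions.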

From AI Pilots to Production: Will an Operating Layer Close the Gap?

IBM’s operating layer represents a bet that enterprises need more than powerful models—they need a coherent way to orchestrate them within existing business and technology structures. The focus on enterprise AI orchestration, from agent control planes and real-time data to intelligent operations and embedded sovereignty, is explicitly targeted at the problem of taking AI pilots into production. Surveys show that only a minority of organisations have successfully moved a significant share of AI experiments into production, though many expect to achieve that within a few years. At the same time, most executives report some benefits from AI, yet relatively few see strong ROI from generative AI or AI agents specifically. IBM’s thesis is that by providing an integrated agent orchestration platform and AI operating layer, enterprises can redesign how their operations run, closing the gap between experimentation and consistent, measurable outcomes.
