Modern infrastructure is being reshaped as artificial intelligence drives new levels of complexity across the stack.
Kubernetes has moved into the center of AI-driven operations, but the shift is exposing a stubborn reality. Teams are still dealing with skill gaps, fragmented tooling and rising operational pressure. Adding AI is accelerating those challenges instead of resolving them. What looked like a maturing ecosystem is now being stress-tested in real time as AI pushes systems beyond their original design assumptions, according to Rob Strechay, principal analyst at theCUBE Research, during an AnalystANGLE segment recapping theCUBE’s KubeCon + CloudNativeCon EU coverage.
“I think when you start to look at where this is going … I would say the number of open-source activities going on this year is insane,” Strechay said. “Standardization absolutely helps. It helps from a security perspective. It helps from an abilities perspective. It is leveling the playing field, which I think has to happen for AI to really be what it needs to be.”
Strechay spoke with fellow hosts Paul Nashawaty and Rebecca Knight at KubeCon + CloudNativeCon EU, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. During the event, they talked with various industry experts about how Kubernetes and open source — driven in large part by the Cloud Native Computing Foundation — are evolving to support AI while teams grapple with growing complexity, governance demands and skills gaps, among other topics. (* Disclosure below.)
Here’s theCUBE’s complete video analysis of the event:
Here are three key insights you may have missed from KubeCon + CloudNativeCon EU:
Insight #1: AI complexity is reshaping how inference and platforms converge.
AI is moving out of isolated experimentation and into core IT operations, where it has to run reliably at scale. That transition is shifting ownership from data science teams to infrastructure teams, forcing organizations to rethink how AI systems are built, managed and supported over time. The moment AI begins delivering business value, it becomes part of the operational baseline — not something that can live on the edge, explained Brian Stevens (pictured), senior vice president and AI chief technology officer of Red Hat Inc., in an interview with theCUBE.
“What we realized is that AI is being developed by data scientists, and as part of that, they’re building their own infrastructure to run it on as well,” Stevens said. “The way we thought about it was, eventually it’s going to be a CIO’s problem if AI gets successful, they’re going to be the ones managing it and scaling it and operating it. What language do CIOs speak these days? They speak KubeCon and Kubernetes and Kubernetes-based platforms.”
Cloud-native adoption is expanding alongside AI complexity, often without being a deliberate choice. Developers building AI systems are naturally pulled toward platforms that handle distributed data, networking and compute, which is making cloud-native architecture feel less optional and more foundational. In many cases, organizations don’t “choose” cloud-native — they arrive there because AI workloads demand it, noted Liam Bollman-Dodd, primary market research consultant at SlashData Ltd.
“The cloud-native technologies are … the tools that you have to use to do AI inference, do ML pipelines,” he said in a discussion with theCUBE. “You just end up incidentally having to use a lot of cloud-native stuff, not only because you’re dumping it all to the cloud most of the time — because you need to compute and the power — but because they’re just designed to allow this to happen. It wasn’t like they were built for this solely. They were built to handle all of the networking and all the data and all the modeling, and the communications were all designed around normal problems.”
Vendors are now trying to reduce the operational burden this creates by shifting intelligence into the platform itself. Instead of expecting teams to manage every layer manually, the focus is turning toward infrastructure that can adapt to AI-driven workloads and respond dynamically to change, according to Peter Smails, general manager of cloud native at SUSE S.A. That shift hints at a broader evolution, where infrastructure becomes less static and more participatory in how systems operate.
“I think we have a very unique approach … we see using AI for intelligent infrastructure,” Smails told theCUBE. “The other piece is it’s being the right infrastructure for running AI workloads. That’s the domain of SUSE AI.”
Here’s theCUBE’s complete video interview with Red Hat’s Brian Stevens and Robert Shaw, director of engineering at Red Hat:
Insight #2: Sovereign AI and platform engineering reshape control.
Sovereign AI is expanding beyond compliance into everyday operations, as organizations look to control how data and models behave across environments. This shift is making visibility, portability and governance core infrastructure concerns rather than secondary requirements, explained Vincent Caldeira, CTO for Asia-Pacific at Red Hat, in an interview with theCUBE. Control is no longer just about where data resides — it’s about how systems behave under different regulatory, operational and economic constraints.
“I think the way we actually define sovereignty is the ability to exert control over your digital destiny,” Caldeira said. “The control over the data has always been a very key topic. I think everyone agrees that any organization needs to have established security and controls around it.”
Platform engineering is emerging as the layer that makes this complexity usable for both developers and AI systems. Internal developer platforms are becoming essential because they provide the structure needed to manage fragmented systems, standardize workflows and reduce cognitive load across teams. Without that structure, the growing number of tools, services and dependencies quickly becomes unmanageable, according to Chris Aniszczyk, CTO of the CNCF.
“I think [Internal Developer Platforms] and Backstage and the rise of agentic systems is only going to increase in importance, because agents need to feed off generally structured information, things that they understand will help them to be more effective,” Aniszczyk told theCUBE. “Every organization’s going to have to have this, in my opinion, to be effective in the new world.”
There is also a growing recognition that technology alone won’t solve the problem. Expanding participation through mentorship and open-source contribution is becoming part of how the industry addresses persistent skill gaps, which continue to slow adoption even as demand accelerates. The health of the ecosystem increasingly depends on how accessible it is to new contributors, pointed out Anastasiia Gubska, lead software engineer at JPMorgan Chase & Co.
“For me, mentorship has played a huge role in the development of my career,” she said. “The first time I met up with the Argo Project Maintainer and a Helm Project Maintainer, I didn’t really have confidence on stage. Potentially, I have an aim now to become a principal engineer in the future. My mentor has been fantastic at pushing me to kind of push my goals and move forward more.”
Here’s the complete video interview with Chris Aniszczyk and Tyson Singer, SVP and head of technology and platforms at Spotify Technology S.A.:
Insight #3: Security, scale and platform consolidation are converging.
AI adoption is accelerating faster than the safeguards around it, creating a widening gap between innovation and protection. Foundational practices such as identity, access and data governance are struggling to keep pace with the speed of deployment and the increase in AI complexity, leaving organizations exposed in ways that are both familiar and newly complex. The difference now is the velocity — issues that once unfolded over years are emerging in months, explained Christopher “CRob” Robinson, CTO of the Open Source Security Foundation.
“There’s been an accelerated use over the last year, especially AI, which has been around for decades,” he told theCUBE. “It used to be called machine intelligence and machine learning. It’s kind of evolved into more buzzwordy terms. It’s been around forever, but in the last year, in particular, the growth has accelerated and the different variables and techniques and tools have exploded.”
At the infrastructure layer, Kubernetes is becoming the operational backbone for AI because it can handle unpredictable workloads at scale. AI systems introduce variability in compute demand, data movement and performance requirements, and cloud-native platforms are one of the few environments capable of absorbing that volatility. What began as a container orchestration tool is now acting as a control plane for modern AI operations, according to Jonathan Bryce, executive director of cloud and infrastructure at The Linux Foundation.
“I think it’s something where CNCF projects are really meeting the moment for AI,” he said about increasing AI complexity. “AI is going to drive the next 10, 20 years of technology the way that cloud did the last 10 or 20 years.”
At the data layer, enterprises are moving away from fragmented tools toward more unified platforms that can support AI complexity at scale. This consolidation is being driven by the need to manage cost, performance and operational overhead as systems grow more interconnected and agent-driven. Rather than adding more point solutions, organizations are starting to prioritize cohesion across the stack.
“We’re not looking at a solution for observability and a solution for search and a solution to build their AI apps and a solution to monitor,” Bianca Lewis, executive director of the OpenSearch Software Foundation, said in an interview with theCUBE. “I think now what we really need is we’re building an AI data infrastructure that you can build those use cases on. Super exciting things with agentic AI that we’re doing, platform-wide that we’re getting into and has been adopted by the hyperscaling companies.”
Here’s the complete video interview with CRob Robinson and Greg Kroah-Hartman, Linux kernel maintainer at The Linux Foundation:
For more of theCUBE’s coverage of KubeCon + CloudNativeCon EU, check out these segments:
Bob Killen, senior technical program manager at the CNCF
Roberto Carratala, principal AI platform architect within the AI Business Unit at Red Hat
Donia Chaiehloudj, software engineer at Isovalent, a Cisco Systems Inc. company
Bill Mulligan, Cilium and eBPF maintainer at Isovalent
Kevin Cochrane, chief marketing officer of Vultr
Francesco Giannoccaro, head of high-performance computing at the UK Health Security Agency
Johan van Amersfoort, chief evangelist and AI lead at ITQ Consultancy
Jeffrey Kusters, CTO of ITQ Consultancy
Andrew Burden, community facilitator at Red Hat
Ľuboslav Pivarč, principal software engineer, KubeVirt maintainer at Red Hat
Jeroen van Gemert, DevOps engineer at Koninklijke KPN
Joe Gardiner, assistant VP of cloud and data sales, EMEA and LATAM, at Portworx by Pure Storage
Daniel Messer, senior manager for product management at Red Hat
Siamak Sadeghianfar, senior manager for product management at Red Hat
Phil Trickovic, SVP of Tintri by DataDirect Networks
Michael Beemer, principal product manager at Dynatrace
Jonathan Norris, director of software engineering at Dynatrace
To watch more of theCUBE’s coverage of KubeCon + CloudNativeCon EU, here’s our complete event video playlist:
https://www.youtube.com/watch?v=videoseries
(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon EU event. Neither Red Hat Inc., the headline sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE