Reshaping the stack: Inside the open-source forces driving AI’s next chapter

AI and Kubernetes Are Coming Together to Reshape the Entire Stack

What will this mean for the future roles of cloud-native and open-source? This is the central question on the minds of key leaders from the open-source community as they gather in Atlanta for the annual KubeCon + CloudNativeCon NA event, kicking off on November 10.

The past year has seen continued growth across areas such as platform engineering techniques and new tools for cloud-native observability. Yet, AI has emerged as the top topic and is poised to continue dominating the conversation.

Many enterprises resemble ducks on a pond—calm on the surface but paddling furiously beneath. Even the most well-funded and experienced IT organizations find themselves barely keeping up with the AI tsunami.

Jonathan Bryce, executive director of the Cloud Native Computing Foundation (CNCF) and Linux Foundation, recently highlighted this challenge in an interview with SiliconANGLE. He recalled meeting with leadership from one of the world’s most advanced IT organizations. Despite their sophistication, they expressed concern about keeping pace with evolving AI solutions.

> “The whole time we were talking, they felt like they were behind,” Bryce said. “The field is moving so quickly that every week there is a new thing. We’re at the point where people are trying a lot of different approaches simultaneously. We’re all actually learning very quickly.”

This feature is part of SiliconANGLE Media’s ongoing exploration into the evolution of cloud-native computing, open-source innovation, and the future of Kubernetes. Be sure to watch theCUBE’s analyst-led coverage of KubeCon + CloudNativeCon NA from November 11-13. (* Disclosure below.)

### Cloud-Native Protocols for Agents

One of the fastest-evolving areas of AI today involves agents—autonomous software designed to perform key enterprise tasks. The cloud-native world has played an active role in shaping this autonomous technology’s future, as interoperable, secure, and specialized multi-agent systems take shape.

Two recent developments highlight the growing role of the open-source community in agentic AI:

– **Google Cloud contributes the Agent2Agent Protocol (A2A) to the Linux Foundation.** This protocol enables agents to exchange messages and data, sparing developers from writing bespoke integration code for cross-agent communication.

– **The Linux Foundation’s adoption of Solo.io’s Agentgateway project.** Announced in August, Agentgateway is an AI-native proxy that optimizes connectivity and observability for agentic environments. Additionally, the foundation accepted a proposed standard for agentic workflows, Agntcy, created by a consortium including Google and Red Hat Inc.
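The value of a shared protocol like A2A is a common message envelope, so agents from different vendors can interoperate without point-to-point integration code. The pattern can be sketched as follows — a simplified illustration built on JSON-RPC 2.0 (which A2A uses), not the actual A2A wire format; the method name and field layout here are assumptions for the sketch:

```python
import json

def make_task_message(sender: str, recipient: str, text: str, task_id: str) -> str:
    """Wrap a request to another agent in a shared JSON-RPC-style envelope."""
    return json.dumps({
        "jsonrpc": "2.0",           # A2A builds on JSON-RPC 2.0
        "method": "message/send",   # hypothetical method name for this sketch
        "params": {
            "from": sender,
            "to": recipient,
            "taskId": task_id,
            "parts": [{"type": "text", "text": text}],
        },
        "id": task_id,
    })

def handle_message(raw: str) -> str:
    """A receiving agent parses the envelope and replies in the same format."""
    msg = json.loads(raw)
    text = msg["params"]["parts"][0]["text"]
    return json.dumps({
        "jsonrpc": "2.0",
        "result": {
            "taskId": msg["params"]["taskId"],
            "parts": [{"type": "text", "text": f"ack: {text}"}],
        },
        "id": msg["id"],
    })

request = make_task_message("planner-agent", "billing-agent",
                            "fetch last month's invoice total", "task-42")
response = json.loads(handle_message(request))
print(response["result"]["parts"][0]["text"])  # ack: fetch last month's invoice total
```

Because both sides agree on the envelope, neither agent needs to know anything about the other's internals — which is the interoperability problem these protocols aim to solve.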

“Agents are capturing people’s attention,” Bryce said. “Where we are right now is the very early stage of coming up with the right frameworks and protocols. When I look at the agent space, I see that these frameworks and projects are open-source.”

### Building Permissions through OpenFGA

Alongside protocols for agentic AI, the cloud-native community is advancing new tools around security and model management. One promising tool is Open Fine-Grained Authorization (OpenFGA).

OpenFGA is an open-source authorization engine inspired by Google’s Zanzibar system for relationship-based access control. Last month, it reached the CNCF’s incubating maturity level.

Designed to define who can perform what actions within a system, OpenFGA answers authorization checks in milliseconds. That speed and flexibility make it appealing to developers as organizations build increasingly complex AI-based workflows.

“We kind of put these permissions into boxes,” Bryce explained. “OpenFGA can layer different permissions together. It’s a more complex model, but it is the right kind of model when you think about autonomous AI systems.”

### Reasoning Models on the Rise

Language models themselves have garnered growing focus within the cloud-native community, particularly as enterprises increasingly adopt open-source AI solutions.

A recent McKinsey & Company study revealed that more than half of over 700 technology leaders globally use open-source AI models, with that figure rising to over 70% among those in the technology sector.

A pivotal moment came nearly a year ago when Chinese startup DeepSeek Ltd. released its first low-cost large language models (LLMs), serving as the catalyst for the open-source wave in AI over the past year.

> “DeepSeek really kickstarted that,” Bryce noted.

Since then, not only have many new open-source models emerged, but reasoning capabilities have been increasingly incorporated into them. Unlike standard LLMs, which generate answers directly, reasoning models work through intermediate steps, allowing them to check their own output before responding.

For example, DeepSeek’s R1 model, released in January, compared favorably against similar offerings from OpenAI. In April, Chinese tech giant Alibaba Group Holding Ltd. launched its open-source Qwen3 reasoning models, claiming superior performance over competitors.

Robert Shaw, director of engineering at Red Hat, emphasized this development as a key growth opportunity within open-source AI.

> “Reasoning capabilities that have been added to some of the proprietary models are starting to show up in the open-source models,” Shaw told theCUBE. “We’re seeing really large open-source models being released that have that frontier capability. It’s just the continued improvement in the overall quality of those open-source models, which is what actually unlocks the business and consumer use cases powering all this frenzy. That’s one really, really great trend.”

### Open Source for Inference Platforms

Another notable trend in the open-source community is the shift toward open-weight models. “Weights” are the numerical parameters a model learns during training; open-weight models make those parameters publicly available, enabling users to download and run the models locally.

These models represent a middle ground between fully open-source and closed-source AI offerings.
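The distinction is easy to see in miniature: weights are just the learned numbers, and publishing them lets anyone run the model locally even if the training data and code stay private. A toy sketch, with hypothetical weight values standing in for a real model:

```python
import json

# Pretend these parameters were learned during training (hypothetical values).
trained = {"w": 2.0, "b": 1.0}  # a one-parameter model: y = w*x + b

# "Releasing the weights" means serializing them to a portable format.
released = json.dumps(trained)

# A downstream user loads the weights and runs the model locally,
# with no access to the original training pipeline.
weights = json.loads(released)

def predict(x: float) -> float:
    return weights["w"] * x + weights["b"]

print(predict(3.0))  # 7.0
```

An open-weight release publishes only the equivalent of `released` above; a fully open-source release would also include the training code and, often, details of the data.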

In March, OpenAI announced plans to launch its first open-weight model since 2019. Subsequently, Amazon Web Services (AWS) made OpenAI’s open-weight models available through Amazon Bedrock and Amazon SageMaker JumpStart.

This growing interest in open-source models, alongside early moves to open weights, is driven by the reality that AI platforms won’t rely on a single model or inference server. Instead, emerging platforms will blend proprietary and open architectures.

“Companies are really starting to think through what their inference platform looks like,” Shaw said. “Many customers today run proprietary APIs, but I see many planning for the future. As these open-source models improve and costs rise, they ask: What does my platform look like? What foundation will support the next 20 years of my private LLM cloud?”

### AI Infrastructure Takes Shape

Questions like Shaw’s have spurred growth in companies focused on providing infrastructure tailored to evolving AI demands.

One example is Spacelift Inc., an infrastructure-as-code orchestration platform enabling codeless provisioning for cloud workloads. Last month, Spacelift released enhancements that introduce an agentic infrastructure deployment model, allocating cloud resources through natural language commands.

The latest open-source release, Spacelift Intent, allows more precise control over cloud infrastructure using agentic provisioning, guided by policy safeguards.

> “Spacelift will control the capabilities,” said Marcin Wyszynski, co-founder and Chief R&D Officer at Spacelift, in a recent theCUBE interview. “We’ll have policies to prevent your LLM from doing things you don’t want it to do, but otherwise, we let your coding agent continue with the magic. If you want to deploy your project on AWS for others, Intent enables that seamlessly.”
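The guardrail pattern Wyszynski describes — an agent proposes infrastructure changes and a policy layer approves or blocks each one before it touches the cloud — can be sketched as follows. This is illustrative only, not Spacelift’s actual policy engine; the action names and limits are assumptions:

```python
# Policy safeguards applied to agent-proposed infrastructure actions.
FORBIDDEN_ACTIONS = {"delete_database", "disable_logging"}
MAX_INSTANCES = 10

def policy_check(action: dict) -> bool:
    """Return True if an agent-proposed action is allowed by policy."""
    if action["name"] in FORBIDDEN_ACTIONS:
        return False
    if action["name"] == "create_instances" and action.get("count", 0) > MAX_INSTANCES:
        return False
    return True

# Actions an LLM-driven agent might propose (hypothetical).
proposed = [
    {"name": "create_instances", "count": 3},
    {"name": "delete_database"},
]

applied = [a for a in proposed if policy_check(a)]
print([a["name"] for a in applied])  # ['create_instances']
```

The agent keeps its autonomy for routine provisioning, while destructive or out-of-bounds actions are rejected deterministically rather than left to the model’s judgment.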

Another innovator redefining AI infrastructure orchestration is Vast Data Inc. The company recently introduced serverless compute functions and triggers that operate directly on incoming data, enabling enterprises to run inference and indexing workloads in real time across multi-tenant environments.

> “The combination of those two things—functions and triggers—allows us to invoke compute in real time as data enters the system,” explained Alon Horev, co-founder and CTO of Vast Data Inc., during an interview with theCUBE. “That lets us index information and trigger applications to complete their tasks as quickly as possible.”
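The functions-and-triggers pattern Horev describes — user-defined functions registered against data events and invoked as records arrive — can be sketched in a few lines. This is a toy illustration of the pattern, not Vast Data’s actual API:

```python
# Registered triggers: (predicate, function) pairs fired on each write.
triggers = []

def on_write(predicate, fn):
    """Register a function to run when an incoming record matches."""
    triggers.append((predicate, fn))

indexed = []

def ingest(record: dict):
    """Write a record and fire any matching triggers in real time."""
    for predicate, fn in triggers:
        if predicate(record):
            fn(record)

# e.g. kick off indexing (or inference) whenever a document arrives.
on_write(lambda r: r.get("type") == "document",
         lambda r: indexed.append(r["id"]))

ingest({"id": "a1", "type": "document"})
ingest({"id": "a2", "type": "image"})
print(indexed)  # ['a1']
```

The point is that compute is invoked by the arrival of data itself, rather than by a batch job polling for new records later.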

Cloud-native infrastructure firm Backblaze Inc., known for cloud storage and developer-centric services, recently announced enterprise web console and role-based access controls aimed at simplifying cloud-native security and management.

Backblaze’s extensive involvement in cloud-native development gives it a direct view into how firms are constructing the foundation for AI implementations.

> “We’re seeing the entire workflow of building and using AI in processing pipelines,” said Gleb Budman, CEO of Backblaze, in conversation with theCUBE. “Companies need to collect data, label it, build models, perform inferencing, then log and track information. Backblaze is involved in all these parts of the pipeline.”

### The Open-Source Ecosystem’s Growing Influence

As the open-source community prepares for KubeCon this month, it’s clear that the rapid acceleration of AI is transforming the cloud-native landscape.

Companies are busily building new tools to support an AI infrastructure that is still taking shape around a technology in its infancy.

But there is also a strong tailwind propelling the open-source ecosystem to new heights—one in which it will significantly influence the future direction of enterprise AI.

> “We’re talking in completely new paradigms here, and the new paradigms are best built in the open,” Wyszynski said. “If people don’t understand how it works, can’t see the code, can’t experiment with it, or can’t extend what’s happening, then they won’t trust it, and they won’t adopt it. It’s open standards that do win.”

*Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon NA event. Neither Red Hat nor Google Cloud — the headline and premier sponsors of theCUBE’s event coverage — nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.*

*Image: Microsoft Designer / SiliconANGLE*
https://siliconangle.com/2025/11/07/open-source-cloud-native-kubecon-ai-future-kubeconna/
