Hybrid cloud has become the operating model, not a stepping stone. Instead of choosing between environments, enterprises are building platforms designed to run across both, with the flexibility to move workloads as conditions shift. The advantage now lies with organizations that treat portability, control, and consistency as core requirements. In 2026, that shift is being accelerated by pressures few teams anticipated, from sovereignty demands to the changing economics of AI.
We spoke to Henrik Løvborg, Tech Sales Leader for Denmark at Red Hat. He has worked in software development and architecture since 2003, helping organizations adopt modern application platforms built on containers, Kubernetes, and cloud-native infrastructure. Løvborg's work focuses on translating open source technology into practical enterprise outcomes across hybrid environments. He believes the organizations gaining ground are those building platforms that remove dependency on any single environment.
"Most organizations already have a cloud strategy, but the reality is they're still running huge parts of their infrastructure on-premises. The companies that are furthest along are the ones building environments that allow them to operate efficiently in both," Løvborg says. The primary driver behind this shift, he says, is no longer cost; It is digital sovereignty. Organizations want the ability to operate without dependence on a single hyperscaler, with flexibility to respond to geopolitical uncertainty, vendor changes, and evolving regulatory requirements. The EU is already outspoken about the role open source can play in achieving this, and hyperscalers themselves are launching region-specific offerings in response.
Sovereignty as strategy: "Right now the biggest driver toward hybrid architectures is the digital sovereignty conversation. Organizations want options, flexibility, and the ability to operate without being tied to a single hyperscaler," Løvborg says. That includes uncertainty around stability and cost, particularly as AI workloads introduce new spending patterns that organizations want to control on their own terms.
Complexity demands platforms: The second driver is operational complexity. As enterprises add governance, reporting, golden path templates, and developer portal integrations, they need a uniform layer that delivers those capabilities regardless of where workloads run. "You want a setup that can deliver these benefits and hide a lot of the complexity, whether you're in your own data center or with a hyperscaler," Løvborg says.
Kubernetes sits at the center of this strategy, but Løvborg draws a clear distinction. Kubernetes is not the platform. It is the engine that makes it possible to build one. The self-service and flexibility that originally drew organizations to cloud are what Kubernetes-based application platforms now deliver across any environment, without reliance on hyperscaler-native PaaS services.
Open source is what makes the sovereignty argument concrete rather than theoretical. Løvborg points to transparency, contribution, and licensing as the three pillars. Organizations can inspect the source code, contribute features they need through CNCF projects, and maintain the freedom to run or modify the technology even after a commercial relationship ends.
Freedom to leave: "With open source, the license is the license. It doesn't belong to a single vendor. Organizations can continue running the technology exactly as they have while taking the time to decide what direction they want to go next," Løvborg says. That assurance, he adds, is what separates sovereign infrastructure strategy from marketing language about flexibility.
Two unexpected workloads: The workload conversations transforming platform investment in 2026 are AI and virtualization. AI fits naturally because platforms already handle shared services like GPU access, inferencing, governance, and data controls. Virtualization is the surprise. "A few years ago, I wouldn't have predicted that VMs would be part of the modern Kubernetes platform conversation. Now it's part of nearly every discussion we have," Løvborg says.
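Running VMs on Kubernetes typically relies on the KubeVirt project, which lets a virtual machine be declared as a Kubernetes resource alongside containers. As a rough illustration, a minimal VirtualMachine manifest might look like the sketch below; it assumes KubeVirt is installed in the cluster, and the name and disk image are purely illustrative:

```yaml
# Illustrative only: a minimal KubeVirt VirtualMachine definition.
# Assumes the KubeVirt operator is installed; name and image are examples.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Because the VM is just another declarative resource, it inherits the same scheduling, RBAC, and GitOps workflows as containerized workloads, which is why virtualization now fits naturally into the platform conversation Løvborg describes.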
For organizations that already have a platform in place, many of the foundational capabilities that AI and cloud-native database workloads need in order to scale are already there. Governance, data access controls, resource sharing, and delivery mechanisms are built in. Løvborg points to CloudNativePG and EnterpriseDB as examples of operator-based, cloud-agnostic solutions that fit directly into this model, giving enterprises a way to run production databases across hybrid environments with the same portability and control they expect from the rest of the platform. "When you already have a platform in place, you've solved many of the core challenges around governance, data access, and resource sharing, which is exactly what modern workloads like AI and cloud-native databases need to scale effectively," Løvborg concludes.
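The operator-based model Løvborg describes means a production Postgres cluster is declared, not hand-built. With CloudNativePG, for instance, a replicated cluster can be expressed in a few lines; the sketch below is illustrative, with a hypothetical name and storage size, and assumes the CloudNativePG operator is already running in the cluster:

```yaml
# Illustrative only: a minimal CloudNativePG cluster declaration.
# Assumes the CloudNativePG operator is installed; name and size are examples.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: demo-pg
spec:
  instances: 3        # one primary plus two replicas, managed by the operator
  storage:
    size: 10Gi
```

Because the same manifest applies on-premises or on any hyperscaler's Kubernetes service, the database inherits the portability the rest of the platform already provides.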