As enterprise teams keep adding tools, the layers of automation and orchestration begin to overlap to the point where the stack itself becomes the primary risk. From a DevSecOps perspective, each addition creates a new seam for potential vulnerabilities and misconfigurations. Across cloud, on-prem, and bare metal environments, the pattern keeps repeating: organizations overcomplicate infrastructure that is fundamentally simple. The teams getting real value from GitOps in 2026 are the ones that simplify first.
Paris Kejser is an Advanced Platform Engineer in DevSecOps at Terma Group, the Danish aerospace and defense technology company. A CNCF Kubestronaut and international KubeCon speaker, Kejser designs and operates Kubernetes-based platform ecosystems in air-gapped and high-security environments. He also maintains a private multi-node lab to test distributed systems, networking patterns, and GitOps workflows beyond production constraints. That hands-on depth shapes a perspective grounded in system-level understanding rather than tool-level adoption.
"We tend to overcomplicate infrastructure at enterprise scale. In the end, a server is just a computer with memory, CPUs, and disks. When you simplify the basics and understand the system underneath, that's when automation and GitOps can actually start delivering real value." Kejser draws a clear line between organizations where GitOps succeeds and those where it stalls. The differentiator is not tooling. It is whether leadership trusts new technology enough to give engineers time to understand and experiment with GitOps principles rather than forcing blind adoption.
Trust over mandate: "Where it succeeds is where companies really want to be innovative and are trusting new technology and giving employees the time to understand how GitOps works," Kejser says. "Where it doesn't fit is old classic companies where everything is tightly controlled from the top and they don't allow new ways because it changes how everyone in the company knows things work."
Not a unicorn: The most common pitfall Kejser observes is organizations treating GitOps as a cure-all. "People decide to adopt GitOps and think it's the unicorn product. It can do anything. They just go crazy and put everything into Git and deploy it," he says. "They forget the reconciliation loops. They forget the backup procedures. You still have persistent volumes. People just forget this because now everything is automated."
Those fundamentals are what separate a mature implementation from one that only looks the part. Every GitOps advocate promises easy recovery to a new cluster, but few talk about what happens to the data sitting in persistent storage when things actually break.
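The reconciliation loop Kejser warns teams forget is conceptually simple. The sketch below is a minimal, illustrative Python model of the pattern controllers like Flux and Argo CD implement, not their actual API; every name in it is hypothetical. It also makes his point about persistent data concrete: converging resource definitions says nothing about what lives inside a volume.

```python
# Minimal sketch of the reconciliation pattern behind GitOps controllers:
# repeatedly diff desired state (Git) against actual state (cluster) and
# converge. Names are illustrative, not a real Flux or Argo CD API.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to make `actual` match `desired`."""
    actions = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions[name] = ("apply", spec)
    for name in actual:
        if name not in desired:
            actions[name] = ("delete", None)
    return actions

def converge(desired: dict, actual: dict) -> dict:
    """One reconciliation pass: carry out the computed actions."""
    for name, (verb, spec) in reconcile(desired, actual).items():
        if verb == "apply":
            actual[name] = spec   # in reality: an apply call to the API server
        else:
            del actual[name]      # in reality: pruning the stale resource
    return actual

# The cluster drifts; one pass pulls it back to what Git declares. Note the
# loop restores *definitions* only, never the data inside a persistent volume.
git_state = {"web": {"replicas": 3}, "db": {"replicas": 1}}
cluster   = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}
print(converge(git_state, dict(cluster)))
# → {'web': {'replicas': 3}, 'db': {'replicas': 1}}
```

The loop explains why "easy recovery to a new cluster" is only half the story: replaying Git rebuilds every definition, but restoring volume contents still needs the backup procedures Kejser mentions.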
The direction for 2026 points increasingly toward on-prem and air-gapped environments, where the stakes for reliability are highest. Kejser sees cloud deployments as comparatively straightforward. The harder challenge is building GitOps into restricted environments that demand airtight trust in the automation layer.
Multi-cluster is real: Kejser sees multi-cluster deployment scaling up and delivering on its promise. "If you first set up a development environment and apply something and it's going well, then you just apply it on production in a different environment," he says. "Multi-cluster scaling, and multiple environments on the same cluster, that is going to be a thing."
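The promotion flow Kejser describes rests on one idea: dev and production apply the same artifact, differing only in declared overrides. The Python sketch below models that merge under stated assumptions (it stands in for what Kustomize overlays or per-environment Flux paths do; the field names are invented for illustration).

```python
# Illustrative model of environment promotion in GitOps: one base definition,
# per-environment overlays, and promotion as a pure data merge. The keys
# ("image", "replicas") are hypothetical examples, not a required schema.

def render(base: dict, overlay: dict) -> dict:
    """Environment config = base manifest + environment-specific overrides."""
    merged = dict(base)
    merged.update(overlay)
    return merged

base = {"image": "registry.local/app:1.4.2", "replicas": 1}
dev  = render(base, {"replicas": 1})
prod = render(base, {"replicas": 5})

# Promotion means applying in prod exactly what was validated in dev, except
# for the declared overrides: the image version never drifts between them.
assert dev["image"] == prod["image"]
```

The design point is that promotion becomes a Git operation on data, not a redeployment procedure, which is what makes "apply it on production in a different environment" safe to say.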
AI infrastructure, not AI workflows: On AI workloads, Kejser is specific about where GitOps fits and where it does not. "GitOps will be the baseline for getting the AI environment up and running. When you need to provision your AI clusters, you need something to get MLflow, your workflow, and Apache Airflow deployed," he says. "But I don't think Argo CD and Flux CD will take over what you're seeing from ML ops engineering. Not in 2026."
Day zero to self-management: Kejser describes a practical bootstrap sequence: Ansible provisions VMs and networking on day zero, then applies Flux into the Kubernetes cluster. Once the operator takes hold, GitOps becomes the self-management layer. "I don't see Flux or Argo replacing Ansible. I see them as a second step," he says. "When everything is done on day zero, you apply everything on day one. Then it's all automated and self-managing."
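Kejser's two-phase handoff can be sketched as a toy model: an imperative step that runs exactly once from outside the cluster, then a pull-based loop that owns everything afterward. Nothing below calls real Ansible or Flux; the function and state names are purely illustrative.

```python
# Toy model of the day-zero -> day-one handoff: push once, then pull forever.

def day_zero() -> dict:
    """Imperative bootstrap (Ansible's role): provision VMs, networking, and
    Kubernetes, then install the GitOps operator as the final push."""
    return {"kubernetes": "ready", "operator": "installed", "workloads": {}}

def reconcile_from_git(cluster: dict, git: dict, passes: int = 3) -> dict:
    """Declarative steady state: from day one the in-cluster operator pulls
    from Git and reconciles; nothing is pushed from outside anymore."""
    assert cluster.get("operator") == "installed", "bootstrap must run first"
    for _ in range(passes):               # stands in for an endless loop
        cluster["workloads"] = dict(git)  # in reality: fetch, diff, apply
    return cluster

cluster = day_zero()                                   # runs exactly once
cluster = reconcile_from_git(cluster, {"web": "v1.4.2"})
```

The assertion captures why the tools are complementary rather than competing: the loop cannot start until the one-shot bootstrap has put the operator in place, which is exactly the "second step" relationship Kejser describes.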
The cost conversation ties directly back to talent. Kejser argues that companies with highly skilled engineers who understand Linux, Kubernetes, and networking at the system level can dramatically reduce hardware, licensing, and maintenance costs. The problem is that most organizations invest in volume over depth. "A lot of companies see all developers and engineers as resources and not as skilled people with competencies," Kejser says. "If your team isn't stronger than its weakest link, you need to invest more in the top people and have them build the fundamental backbone for everyone else."
That is the real maturity test for GitOps in 2026. Not how many tools sit in the stack, but whether the people operating those tools understand what is happening underneath. Automation scales when the foundation is simple, the team is skilled, and the organization trusts both enough to get out of the way.