AI Data Press | Powered by EnterpriseDB © 2025
Compliance oversight is perhaps the most thankless job for cloud engineers today. For most, it's a relentless cycle of manual checks, chasing forgotten settings, and cleaning up temporary access accounts before they turn into audit nightmares. Many teams spend so much time on support issues and routine workflows that they barely have time for more innovative engineering work.
Fortunately for them, a new class of agentic AI is emerging to take on the menial compliance tasks holding most engineers back. Like tireless, autonomous copilots, AI agents can patrol cloud environments 24/7, catching errors in real time and correcting them before they cause harm. Some industry experts say the technology will be a force multiplier for engineers, especially in SRE environments.
We spoke with Arun Asok Kumar, Director of Cloud Infrastructure at Deepwatch, a precision Managed Detection and Response provider, and former Senior Manager of Cloud and DevOps at PwC. With years of experience leading cloud transformation and security programs, Kumar has been on the front lines of building secure, compliant, and high-performing cloud infrastructures across enterprise environments for nearly two decades. For Kumar, the promise of agentic AI is less about replacing cloud engineers and more about liberating them.
Agents as freedom, not foe: "My staff do not have time to spend on what I would call 'admin-related activity.' If AI agents can handle it, then they should." However, rather than take jobs away from humans, Kumar said, the goal of agentic AI should be "helping humans do more meaningful work by eliminating the redundant tasks most people don't want to do anyway."
Kumar’s philosophy is grounded in the reality most cloud teams experience today. His work with complex compliance standards like NIST, SOC 2, and ISO 27001 gives him a distinctive perspective on the transformative potential of agentic AI. In the high-stakes world of SOC 2 compliance, for instance, Kumar described a domain where even minor human errors can lead to major audit failures. He also saw firsthand how much time teams spent "babysitting these accidental human errors." To solve this bottleneck, he developed a proof-of-concept AI agent designed to act as an autonomous compliance guardian for Azure environments.
The 24/7 compliance copilot: "One of the most important compliance requirements for SOC 2 is that all data services must have geographical redundancy," Kumar said. As an example, he pointed to a common scenario in which engineers might "create a database but forget to click the checkbox that enables Geo-Redundant Storage." Usually, that type of error is discovered only during quarterly audits. However, the agent Kumar built runs continuously, scanning the environment for such an error. "If it sees a storage account without GRS enabled, it can go ahead and make that change in the production environment where it's enabled."
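The scan-and-remediate loop Kumar describes can be sketched in a few lines. This is an illustrative mock, not his implementation: the `StorageAccount` record, the SKU names, and the `remediate` helper are assumptions standing in for what would, in practice, be calls to the Azure management API (where geo-redundancy is expressed through SKU names such as "Standard_GRS").

```python
from dataclasses import dataclass

# Hypothetical local record of a storage account's redundancy setting.
# A real agent would pull these fields from the cloud provider's API.
@dataclass
class StorageAccount:
    name: str
    sku: str  # redundancy SKU, e.g. "Standard_LRS" or "Standard_GRS"

# SKUs that satisfy the geographical-redundancy requirement (assumed set).
GEO_REDUNDANT_SKUS = {"Standard_GRS", "Standard_RAGRS", "Standard_GZRS"}

def scan_for_grs_violations(accounts):
    """Return every account that lacks geo-redundant storage."""
    return [a for a in accounts if a.sku not in GEO_REDUNDANT_SKUS]

def remediate(account):
    """Flip a non-compliant account to geo-redundant storage.

    In production this would issue an update call against the cloud
    API; here we mutate the local record to illustrate the loop.
    """
    account.sku = "Standard_GRS"
    return account

def compliance_pass(accounts):
    """One cycle of the continuous scan: detect, then fix."""
    fixed = [remediate(a) for a in scan_for_grs_violations(accounts)]
    return [a.name for a in fixed]  # names of accounts that were corrected
```

Run continuously (for example on a timer or an event trigger), each pass leaves the environment compliant instead of waiting for a quarterly audit to surface the missed checkbox.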
From detection to resolution: Beyond infrastructure settings, the agent Kumar built also extends to security and access management. "Let's say someone was granted access for troubleshooting purposes, but those permissions weren't removed after a specific amount of time. If the agent sees a new user in the portal without approval, it can check its back-end algorithm to verify pre-approval. Without it, the agent can remove the unauthenticated user and create an ITSM ticket to explain why that decision was made."
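The access-review behavior follows the same detect-verify-act pattern. The sketch below is a simplified model under stated assumptions: the `User` record, the pre-approval flag, and the ticket strings are all hypothetical stand-ins for a directory lookup, the agent's back-end approval check, and a real ITSM integration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of an access grant; a real agent would read this
# from the identity provider or cloud portal.
@dataclass
class User:
    name: str
    granted_at: datetime
    pre_approved: bool  # result of the back-end pre-approval check

def review_access(users, now, max_age=timedelta(hours=24)):
    """One review cycle: keep valid grants, revoke stale unapproved ones.

    Returns (kept, removed, tickets). Each removal gets an ITSM-style
    ticket string explaining why the decision was made.
    """
    kept, removed, tickets = [], [], []
    for user in users:
        expired = now - user.granted_at > max_age
        if expired and not user.pre_approved:
            removed.append(user)
            tickets.append(
                f"Removed '{user.name}': temporary access exceeded "
                f"{max_age} without pre-approval"
            )
        else:
            kept.append(user)
    return kept, removed, tickets
```

The key design point mirrors Kumar's description: the agent never acts silently. Every autonomous removal produces a ticket, keeping a human-auditable trail of what was done and why.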
Yet, despite the transformative potential of agentic AI, allowing any technology to take independent actions in a production environment raises serious questions about trust and control. To adopt these tools successfully, Kumar stressed, organizations must double down on fundamental enterprise security practices rather than abandon them.
Trust via due diligence: The key is selective governance over which tools are let through the door. "Choosing the right vendors is very important. Conduct a thorough vendor evaluation, perform a penetration test, and request a compliance report. Most importantly, establish restrictions on the AI tools your team can use. Are those approved tools? Are they backed by pre-approved vendors? Am I using an agent backed by Microsoft, or is it an open-source agent that could take data and upload it somewhere else?"
Making this diligence more complex is what Kumar called the "race towards agentic AI." In his view, the chaotic, fast-moving market for AI tools is one of the most underestimated challenges for most leaders today.
The political tool race: Pressure on leaders to select the right tool is compounded by a mix of internal and external interests that often have little to do with solving a real business problem. "There is a plethora of tools out there right now. What's best-in-class today can be easily replaced by something better tomorrow." Unfortunately, this uncertainty creates a less-than-ideal environment for making long-term strategic decisions. "It's not about comfort and convenience right now. It's about proving your company is miles ahead of the competition in this race for power."
Transformative as it may be, Kumar sees agentic AI as just the latest wave in a series of technological booms. In his view, leaders should focus less on mastering AI and more on building organizations resilient enough to withstand whatever comes next. "I'm excited about the next big thing after AI. It's exciting to know that what we're doing right now will be eclipsed by something else in just five or ten years."