AI is not simply accelerating software development. It is fundamentally changing what developers produce, and how much of it. The speed gains that dominate most conversations about AI in the SDLC obscure a more consequential shift: engineering teams are shipping significantly more features, test cases, and documentation in the same number of person-days. That volume explosion is forcing a reckoning with how teams review, govern, and sustain the code AI helps them write.
Gaganjot Singh, Principal Engineer at Nagarro, a global digital engineering company with over 18,000 employees, architects large-scale systems and drives AI adoption across enterprise development teams. A five-time recipient of Nagarro's Brightest Mind award and winner of both the NAGP Gen AI Hacktelligence Hackathon and The Singularity AI Hackathon, Singh brings hands-on experience leading the cultural and technical transformation AI demands of modern engineering organizations.
"Now your job is not just writing code. You are becoming like a team lead managing a technology that can sustain a development life cycle in and of itself," Singh says. He sees the developer's role shifting from writing software to overseeing systems that write software, a transition that requires cultural adaptation far more than technical skill.
Coder versus developer: Singh draws a distinction between engineers who simply push code to a repository and those who understand the full application and its business impact. "If you consider yourself simply a coder, someone who pushes code to your repository, then you have a skill issue," he says. But for developers who already grasp the end-to-end system, the challenge is cultural. They need to unlearn old habits and accept that their role now resembles managing a team of AI agents.
More output, same timeline: The productivity gains are real, but they show up as volume rather than velocity. "Let's say I delivered one feature in one week. I'm delivering two features or maybe three features in that one week," Singh explains. Product managers still see the same timeline. What changes is how much gets shipped within it, whether that means new features, cleared technical debt, or expanded test coverage.
The downstream effects of that output surge are already visible. A CodeRabbit analysis reported by The Register found that AI-authored pull requests carry roughly 1.7 times more issues than human-written code, a pattern Singh sees firsthand in enterprise teams.
Review as the new chokepoint: "Code review is one of the most major bottlenecks that we now face," Singh says. The math is straightforward. More features and more test cases per sprint mean more surface area for human reviewers to cover, and that oversight gap grows faster than most teams can staff for it.
Testing at scale: The volume problem extends into quality assurance. "Earlier, you were spending maybe one day writing 10 test cases. Now you're spending one day on 100 test cases. But that one day still belongs to testing," Singh notes. AI makes authoring tests trivial, but reviewing them is still a human task.
Singh sees agent-to-agent communication as the next frontier for managing this complexity. He points to Meta's internal deployment of personal agents that communicate across teams on behalf of individual developers. "Your agent will do it on your behalf," he says. "If we can automate these things, the context becomes automated. And we all know that agents require context. If we deliver enough context to them, they don't need a human in the loop anymore." Enterprise tooling is already moving in this direction, with platforms like EDB adding Model Context Protocol support for agentic database access.
But Singh also flags a risk that few teams are discussing openly. AI generates documentation, test cases, and code faster than humans can review it. That creates a feedback loop: more AI to review more AI output, each layer adding token costs that are currently masked by venture capital subsidies.
"AI has a cost related to it. The tokens are not cheap," Singh warns. "Right now we have VC funding and the tokens are cheaper. But it will get much more costly. That is where I think in the future we might see more problems coming up." The teams that build sustainable workflows now, with clear accountability structures and real-time oversight, will be the ones positioned to absorb those costs without losing the productivity gains AI delivers today.