For many organizations, the adoption of AI is intensifying a long-standing issue: unclear ownership. Without clear decision owners, AI muddies accountability and expands operational risk, fueling the need for documented human governance. As a result, the human-in-the-loop concept is being reframed from a simple technical safeguard into a core principle of organizational design.
Juanita Álvarez is the Founder and CEO of Ciph Lab, a research lab at the forefront of AI readiness and governance. Her expertise comes from designing and building mission-critical systems at Apple, Amazon Lab126, and Okta. Currently a Principal at Figma, she spearheaded the Intelligence Resources™ (IR) function, a new model for managing human-AI integrity. She says that as global AI regulation grows, the ability to trace decisions back to a human owner is shifting from a best practice to a core business priority.
"Legally, an AI cannot be held responsible when something goes wrong. A human must be," Álvarez explains. "With all the rules and laws that are changing, you need a system that makes it transparent who is actually responsible for decisions and who can explain why things have happened the way they have." For highly regulated industries, frameworks like the EU’s AI Act mandate specific human-oversight measures: to maintain a defensible legal position, a person must be able to meaningfully override an AI’s output.
On the same team: Álvarez argues that the human-AI partnership model can succeed only if it is built on a foundation of human accountability. "Humans are afraid of AI taking over their jobs, but I see it more as a partnership," she says. "For that to succeed, people need to get good at working with AI, and having that partnership as part of the foundation is a good starting point."
Your next promotion: The operational change Álvarez describes is a practical framework built on a "highly visible, continuous flow" in which AI agents handle preliminary work and a human serves as the final gatekeeper. That shift turns a simple process change into an opportunity for career evolution. Managing it requires cultivating employees' curiosity and showing them where the new path leads. "It's about elevating your role to that of a manager. In the future, you're going to be managing multiple agents, not just people. The person who can work with both is the one who will benefit."
As automated systems scale, unclear ownership becomes a legal and operational liability, especially in regulated environments where responsibility cannot be delegated to software. In this light, human-in-the-loop is not about reviewing outputs, but about designing systems that make decision ownership explicit, traceable, and defensible from the start. At most companies, the operational and human sides are still catching up to the technology, creating an adoption climate Álvarez says can feel hurried, with employees fearful of missteps.
The gap between our present reality and the agentic future sets the stage for Álvarez's core takeaway for leaders. "One piece of advice is to know where you currently are and what your goals are," she says. "Don't just implement AI out of fear that you're going to be losing out. You have to have problems that you're trying to solve, and only then see if the AI you're considering will actually help solve that problem."