With AI advancing far faster than government policy, the world is left with a messy and contradictory patchwork of regulations.
Temi Odesanya, the Responsible AI Leader at AIG, argues that the answer isn’t a single set of rules but a multi-layered framework built on shared accountability.
She proposes a universal "do no harm" principle, modeled on the human rights framework, to create a global consensus for responsible AI development and self-regulation.
Odesanya makes the case for continuous, expert-led enforcement, arguing that static, point-in-time audits are not enough to keep pace with emerging risks.
While AI advances by the week, governments take years to respond. Now, that growing disconnect is fueling the central question in global tech policy: Who, exactly, should regulate it? Around the world, a fragmented patchwork of frameworks is emerging, from the EU AI Act and America's AI Action Plan to distinct policies in China, Russia, and India. But the answer may not be a single entity at all. Instead, some experts say the solution lies in a multi-layered approach built on shared accountability.
On the front line of solving the problem is Temi Odesanya, Responsible AI Leader at the global insurance organization AIG. With a track record of building and implementing AI governance strategies at major corporations like Thomson Reuters and Ciena, Odesanya believes steering AI safely into the future demands a universal moral compass as its foundation, one that precedes and informs any technical or regulatory standard.
"The foundational mantra should be to 'do no harm.' What’s harmful to me might not be harmful to you, so the principle must always be understood in context. 'Do no harm' depends on the use case and the people involved," Odesanya says.
In her view, the contextual nature of harm makes a single, rigid rulebook unworkable. Instead, her proposed framework for shared accountability mirrors the layered regulatory structure of the pharmaceutical industry, where federal laws, local regulators, and industry associations all operate in parallel.
The ripple effect: For the system to work, governments must first define what "do no harm" means in law, Odesanya says. That legal baseline can then guide industry self-regulation and, eventually, be internalized by civil society. "If there is a 'do no harm' model at the national level, that will dictate and guide the industry on what self-regulation means. Then, for civil society, it becomes a shared norm. We all know what 'do no harm' means. It means watch out for your neighbors."
But that ideal is a long way from our current reality. Odesanya knows this better than most, describing a chaotic environment of competing "sovereign silos" of AI regulation.
Experts in the room: Principles on paper are one thing, but enforcement is another, she says. True accountability means moving beyond static checklists to continuous, technically informed oversight. Odesanya envisions a global body, like a UN for AI, whose legitimacy depends on deep technical expertise. "The people providing oversight have to be very knowledgeable. You can’t just bring in auditors who know nothing and have them ask questions that don’t make sense, like demanding every model be explainable when, for some systems, the right standard is traceability."
Borders and blind spots: The path forward requires a global consensus on core principles to combat the regulatory mess Odesanya navigates daily. "Some of these regulations already contradict each other, even between states. That’s why we should look to the human rights framework as a guide. Its core principles are universal, even if their application differs. AI needs the same global consensus on what responsible use truly means."
Risk isn’t static; it shifts with the user. That makes one-time audits meaningless, Odesanya says. As an example, she tells the story of a teenager who used Meta’s video-recording glasses to build a facial recognition tool that scraped the web for personal data. Then, the military bought it. Now, she wonders, "Who’s actually investigating if they’re using it responsibly?"
For her, the answer is clear: only continuous checks across every stage of the AI life cycle, from government oversight to end-user behavior, can ensure accountability. While waiting for this global system to materialize, Odesanya offers candid advice for executives tasked with leading the change internally. The key, she says, is for responsible AI leaders to reframe their role.
More partner, less police: The function of a responsible AI leader should be to help the business balance speed with trust, Odesanya continues. Ideally, they should position themselves as an innovation partner rather than a compliance-driven "person with a stick." "Responsible AI shouldn't be a 'no' within an organization. It should be a 'yes, but we need to do X, Y, and Z,'" she explains. "And it needs to come from someone with real influence. That's the true game-changer."
The method she proposes is twofold. First, convene key stakeholders from strategy, legal, risk, and product. Second, build a shared strategy collaboratively, so that accountability is distributed and everyone is aligned. "We need to put everyone at the table," Odesanya says. But even the best strategy has its limits, she concludes. A well-designed framework cannot thrive in a culture that rewards speed over safety. For her, this is the more profound reckoning beneath every debate on responsible AI. "Modern life is built around urgency. We celebrate how fast something ships, not how carefully it’s made. Until that changes, responsibility will always be fighting uphill."