
Enterprise AI

Boards Strengthen AI Oversight with Lifecycle Governance and STOP Controls

AI Data Press - News Team | March 9, 2026

Maria Santacaterina, CEO of SANTACATERINA, says enterprises are strengthening AI oversight to prevent cascading failures by keeping human judgment central throughout the system.


Key Points

  • Enterprises deploy probabilistic AI into legacy systems without board literacy, lifecycle oversight, or clear decision-making structures, leaving them exposed to cascading operational and legal risk.

  • Maria Santacaterina, CEO of SANTACATERINA and Fellow and Fully Certified Auditor at ForHumanity, says that transformer-based systems require structural governance that keeps humans as the ultimate decision makers.

  • She calls for organizational-wide monitoring, early warning systems, traceability, STOP conditions at run time, and distributed accountability from boardroom to front line.

There is a fundamental misunderstanding about autonomy. An automated decision is not autonomous.

Maria Santacaterina

CEO
SANTACATERINA


Enterprises are integrating non-deterministic, tool-using AI into sprawling legacy systems in pursuit of efficiency and competitive advantage. Yet many organizations are introducing these systems before governance structures, technical understanding, or operational safeguards are fully in place. Unlike traditional software, AI systems produce probabilistic outputs and interact with multiple tools and data sources, extending decision influence across teams and processes. Left unchecked, they can trigger cascading failures throughout the organization.

Maria Santacaterina, CEO of the board advisory firm SANTACATERINA and Fellow and Fully Certified Auditor at the AI ethics organization ForHumanity, works with boards navigating these challenges. She advises executives on corporate governance, strategy, and applied AI ethics, and is the author of Adaptive Resilience — How to Thrive in a Digital Era. As the designer of the Responsible Innovation Framework, she helps modernize enterprises to become future-ready.

"There is a fundamental misunderstanding about autonomy. An automated decision is not autonomous," Santacaterina says. Recent developments allow AI systems to use external tools and perform multi-step tasks. This creates the impression of independence, despite their outputs remaining driven by statistical probabilities within the underlying model.

  • Blurred lines: "Legal and risk management frameworks miss the key point about transformer architectures; it is the goalposts that keep moving. You cannot penalize an AI agent, sanction a 'thing' or non-living entity. The real responsibility lies upstream rather than downstream. Many are playing catch-up to remedy AI integrations that haven't worked in practice because there hasn’t been a fundamental rethink at a structural, system, and process level prior to introducing AI systems," Santacaterina explains. Organizations frequently find themselves retrofitting governance controls after integrations have already occurred.

  • The scaling surprise: "Many board members and executives do not have a sufficient and clear understanding of how the machine actually works, what it is and isn't, what it can and can't do. They underestimate risk and undervalue human involvement in running and maintaining these systems. Employees are not empowered to intervene, question or raise their hand when something goes wrong," says Santacaterina. This dynamic can allow problems to move through systems unnoticed until they surface elsewhere in the organization.

Effective AI governance depends on operational visibility and clear safeguards. Without them, errors can move through workflows undetected. Leaders must embed accountability at every level so human oversight remains central to how AI outputs are interpreted and used.

  • Hidden gaps: "The governance gaps are tangibly large: lack of transparency, data provenance, quality assurance, traceability in complex supply chains are problems that haven't been fully addressed in practice. A lack of preparedness to tackle the compounding effects of “cascading risks” across the broader enterprise ecosystem increases exposure to existing and emerging risks and potentially irreversible harms," she says. A flawed output in one workflow can propagate into operational systems, compliance processes, or customer-facing platforms.

  • Draw the line: "The best investment a company can make is to create an 'early warning system,' staffed by qualified professionals. You have to design, develop, and deploy systems that remain controllable by real human beings. Continuous monitoring throughout the model lifecycle from inception to decommissioning is the norm, but there is no consistently independent external oversight to ensure it is actually happening. You can enforce STOP conditions during 'Run-time' supported by prompt human intervention when required," Santacaterina adds. Multidisciplinary teams can help track outputs, detect anomalies, and escalate issues.

  • Shadows at the top: "Fiduciary duties shouldn't begin and end with the board; they need to carry across the organization as a whole. Every person needs to care about what they are doing and be prepared to answer questions. Not in a negative or threatening way, but in a spirit of collaboration. Supervision in the modern digital age takes on a whole new level of meaning," Santacaterina says. Authority and responsibility must be distributed appropriately throughout the organization so oversight does not bottleneck at the top.
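To make the idea of runtime STOP conditions concrete, here is a minimal, hypothetical sketch in Python. It is not Santacaterina's framework or any specific product; the condition names, thresholds, and the `review` gate are illustrative assumptions. The pattern is simply that every AI output passes through registered STOP checks before reaching downstream systems, and anything that trips a check is held and escalated to a human reviewer.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StopCondition:
    """A named check that returns True when an output violates a guardrail."""
    name: str
    check: Callable[[dict], bool]

@dataclass
class RuntimeMonitor:
    """Illustrative 'early warning' gate for AI outputs at run time."""
    conditions: list = field(default_factory=list)
    escalations: list = field(default_factory=list)  # queue for human review

    def review(self, output: dict):
        violated = [c.name for c in self.conditions if c.check(output)]
        if violated:
            # STOP: hold the output and route it to a human, with the
            # violated conditions recorded for traceability.
            self.escalations.append({"output": output, "violated": violated})
            return None
        return output  # safe to pass downstream

# Example thresholds are placeholders, not recommendations.
monitor = RuntimeMonitor(conditions=[
    StopCondition("low_confidence", lambda o: o.get("confidence", 0.0) < 0.8),
    StopCondition("missing_provenance", lambda o: "source" not in o),
])

passed = monitor.review({"text": "approve", "confidence": 0.95, "source": "crm"})
held = monitor.review({"text": "deny", "confidence": 0.4, "source": "crm"})
```

Here `passed` flows through unchanged, while `held` is `None` and the second output sits in `monitor.escalations` awaiting human sign-off, which is the distributed-accountability point: the record of what was stopped, and why, is inspectable by anyone from the front line to the board.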

An AI-literate leadership team is one that remains curious, open-minded, and willing to confront its own blind spots. Responsibilities for maintaining that literacy lie both with the organization and with individuals themselves. "Real machine autonomy in a messy, chaotic world is neither socially desirable nor morally acceptable, let alone economically and politically viable," Santacaterina concludes.