Security & Governance

International Policy Leader Calls for Impact-Based Risk Tiers as AI Authority Expands

AI Data Press - News Team
|
February 18, 2026

Farhan Sahito, Ph.D., Partner at Privanova and advisor to the European Commission, UN, and INTERPOL, argues that AI security success hinges on impact-based autonomy and firm executive control.


Key Points

  • AI now detects and acts at machine speed in security operations, but unclear autonomy and weak oversight increase the risk of silent failures and unchecked authority.

  • Farhan Sahito, Ph.D., Partner at Privanova and advisor to the European Commission, UN, and INTERPOL, outlines how impact-based limits and clear ownership reshape AI security governance.

  • Organizations succeed when they tie AI authority to defined risk tiers, enforce human checkpoints, and embed executive level control into daily operations.

Leaders are starting to ask different questions. Instead of just asking if a system is secure, they now ask who owns the model, what data shapes it, and how we know when its behavior changes.

Farhan Sahito

Partner
Privanova

With AI driving security operations at machine speed, control has become the defining issue. As organizations grant intelligent systems greater authority to detect, decide, and act, the margin for error narrows. The critical question is how far that autonomy should extend and who remains accountable when it does. The future of security will be shaped by leaders who set clear boundaries, embed governance into operations, and ensure human judgment stays firmly in command.

Farhan Sahito, Ph.D., is a Partner at Privanova, a tech policy firm that advises governments and multinational organizations on privacy, digital trust, regulatory compliance, and responsible AI deployment. Also serving as an Expert Evaluator for the European Commission and advising both the United Nations and INTERPOL, Sahito brings perspective on how AI can amplify security capabilities while maintaining clear boundaries and human accountability. He says that while AI can uncover hidden patterns, detect anomalies, and surface early warning signals that might otherwise go unnoticed, automation is not a replacement for human expertise.

"AI helps human teams see things they simply cannot see at scale. It can scan huge volumes of data, spot weak signals, and connect threats much faster than any human being. The organizations that get this right treat AI as a co-pilot in the security team, not something on autopilot. That's where the real benefit is, and where the risk stays manageable," Sahito says. The strongest outcomes, he explains, come from companies that design AI as a partner rather than a replacement.

  • Governing the model: Governance is becoming an embedded operational discipline woven into daily security practice. "Leaders are starting to ask different questions. Instead of just asking if a system is secure, they now ask who owns the model, what data shapes it, and how we know when its behavior changes," Sahito says. Clear ownership, visible data lineage, defined escalation paths, and routine reviews prevent silent drift. Without that structure, AI can introduce new blind spots. With it, automation stays accountable and aligned with risk tolerance.

  • Defining autonomy: "The threshold for AI autonomy should be based on impact, not confidence. It can safely handle low-impact, high-volume tasks, like flagging alerts or closing obvious false positives, delivering value and reducing fatigue. But whenever a decision affects people, critical systems, or business continuity, humans must remain in the loop, even if the AI is highly confident," Sahito says. Clear impact tiers, documented guardrails, and enforced human checkpoints prevent autonomy from quietly expanding beyond its mandate. Without defined boundaries, risk scales as fast as the technology itself.
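To make the tiering concrete, here is a minimal, hypothetical Python sketch of an impact-based autonomy gate. The tier names, the Action structure, and the example actions are illustrative assumptions rather than a description of Sahito's or Privanova's actual tooling; the point is simply that the gate keys off impact, not model confidence.

```python
# Minimal sketch of impact-tiered autonomy with human checkpoints.
# All names (ImpactTier, Action, requires_human_approval) are hypothetical
# and chosen only to illustrate the idea described above.
from dataclasses import dataclass
from enum import Enum


class ImpactTier(Enum):
    LOW = 1     # e.g. closing an obvious false positive
    MEDIUM = 2  # e.g. quarantining a single workstation
    HIGH = 3    # e.g. anything touching people, critical systems, or continuity


@dataclass
class Action:
    description: str
    tier: ImpactTier
    model_confidence: float  # deliberately ignored by the gate below


def requires_human_approval(action: Action) -> bool:
    """Gate autonomy on impact, not on how confident the model is."""
    return action.tier is not ImpactTier.LOW


def dispatch(action: Action) -> str:
    if requires_human_approval(action):
        # Human checkpoint: queue for analyst review, even at 99% confidence.
        return f"ESCALATE to analyst: {action.description}"
    # Low-impact, high-volume work can be closed automatically.
    return f"AUTO-EXECUTE: {action.description}"


if __name__ == "__main__":
    print(dispatch(Action("Close duplicate phishing alert", ImpactTier.LOW, 0.97)))
    print(dispatch(Action("Disable a user account", ImpactTier.HIGH, 0.99)))
```

In this sketch, the low-impact alert clears automatically while the account change is escalated no matter how confident the model is, which is the behavior the impact tiers are meant to enforce.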

Looking ahead, AI is ushering in the Agentic SOC, a model where intelligent systems collaborate with one another at machine speed, sharing signals, triggering responses, and coordinating actions in real time. That acceleration changes oversight. Humans are no longer reviewing every alert but defining objectives, constraints, and stop conditions that shape how agents behave. Engineers move closer to the role of supervisors and interpreters, validating outputs, stress testing safeguards, and ensuring automated decisions remain aligned with policy and risk tolerance. Speed increases, but so does the responsibility to govern it.
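One way to picture those objectives, constraints, and stop conditions is as a small guardrail layer that every automated step must pass before it runs. The sketch below is hypothetical: the rate limit, the protected asset names, and the escalation message are stand-ins for whatever an organization's own policy would specify.

```python
# Hypothetical sketch of human-defined stop conditions for agent workflows.
# The limits and asset names are illustrative assumptions, not a real policy.
from dataclasses import dataclass, field


@dataclass
class StopConditions:
    max_actions_per_hour: int = 50
    blocked_targets: set = field(
        default_factory=lambda: {"domain-controllers", "payment-gateway"}
    )

    def permits(self, target: str, actions_this_hour: int) -> bool:
        if actions_this_hour >= self.max_actions_per_hour:
            return False  # rate limit reached: halt and page a human
        if target in self.blocked_targets:
            return False  # protected asset: never touched autonomously
        return True


def run_agent_step(target: str, actions_this_hour: int, limits: StopConditions) -> str:
    # Humans define the limits up front; the runtime enforces them per step.
    if not limits.permits(target, actions_this_hour):
        return f"STOP: escalate '{target}' to the on-call engineer"
    return f"Agent isolates '{target}' and notifies peer agents"


if __name__ == "__main__":
    limits = StopConditions()
    print(run_agent_step("laptop-4412", actions_this_hour=12, limits=limits))
    print(run_agent_step("payment-gateway", actions_this_hour=12, limits=limits))
```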

  • Human judgement: "The security engineer's role is changing from just blocking threats to explaining AI behavior. The most resilient teams are those where leadership sets clear boundaries and empowers engineers to challenge AI outputs, question assumptions, and escalate when something feels off. In the new world, human judgement becomes even more valuable," Sahito says. This evolution reflects a shift toward leadership-driven AI governance, with technical teams playing a central role in shaping how automation aligns with the organization's risk tolerance.

  • Control defines success: "As we move toward agent-to-agent systems, humans are no longer interacting with every decision. Agents talk to each other, trigger actions, and negotiate priorities far faster than people could. That changes the nature of control. Teams shift from approving individual actions to defining goals, limits, and stop conditions. Agents can act, but humans must still govern," Sahito says. Without the ability to understand why decisions occur, trust and adoption can falter. When boundaries are clear, adoption proceeds more effectively; when they are not, the system breaks down.

  • When AI fails silently: "AI failures don’t always look like failures. There is no outage, no alert, no clear error. Something just feels off. High performing organizations elevate their response, treating an AI incident as an operational crisis that demands leadership level responsibility. Escalation is not about assigning blame to an algorithm; it’s about maintaining human control," Sahito says. Subtle model drift, skewed outputs, or degraded decision quality can quietly compound risk if left unchecked. Defined reporting channels, behavioral monitoring, and executive ownership ensure those weak signals are surfaced early and addressed decisively before small distortions become systemic exposure.
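In simplified form, the behavioral monitoring described above might look something like the following sketch. The metric (share of alerts closed automatically), the baselines, and the threshold are assumptions chosen for illustration, not a recommended configuration.

```python
# Hypothetical behavioral-monitoring sketch for silent AI failures.
# Baseline windows, the drift threshold, and the escalation message are
# illustrative only.
from statistics import mean


def drift_score(baseline_rates: list[float], recent_rates: list[float]) -> float:
    """Absolute shift in the mean auto-close rate between two windows."""
    return abs(mean(recent_rates) - mean(baseline_rates))


def check_for_silent_drift(baseline: list[float], recent: list[float],
                           threshold: float = 0.10) -> str:
    score = drift_score(baseline, recent)
    if score > threshold:
        # No outage, no error: just behavior that has quietly changed.
        # Route to a named executive owner, not a ticket queue.
        return f"ESCALATE: auto-close rate shifted by {score:.0%}"
    return f"OK: shift of {score:.0%} within tolerance"


if __name__ == "__main__":
    baseline_weeks = [0.42, 0.40, 0.43, 0.41]  # share of alerts closed automatically
    recent_days = [0.58, 0.61, 0.57]           # the "something feels off" signal
    print(check_for_silent_drift(baseline_weeks, recent_days))
```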

Those who lead in this next phase of security will be defined by discipline, not speed. Rapid deployment may generate headlines, but sustainable advantage comes from deliberate integration, strong governance, and a culture that treats oversight as a strategic function. “AI’s speed is not the risk; losing control is. The organizations that understand this distinction are the ones that will succeed. In the end, AI does not need more power. It needs better leadership,” Sahito says.

As automation grows more capable and more autonomous, leadership becomes the control plane. The future of security will belong to organizations that treat AI authority as something to be actively governed, continuously questioned, and firmly anchored in human accountability.