In the rush to implement top-down AI governance frameworks, many leaders are overlooking the messy reality of how employees actually use AI tools.
Khullani Abdullahi, J.D., Founder of Techné AI, describes how the rise of "shadow AI" can expose firms to unmanaged liabilities that most policies fail to address.
She explains why a culture of transparency and trust is essential for effective AI governance and risk management.
Abdullahi concludes that responsible AI governance is formalized through two structures: a multi-stakeholder AI Oversight Committee and a comprehensive Compliance Register.
In the rush to build AI governance frameworks, most leaders are overlooking a fundamental flaw in their approach. Without a shared baseline, meaning an organization-wide understanding of how AI tools work, what they can do, and the risks they pose, even the most polished policy or top-down committee will fall short. Conventional wisdom says order comes from control, but the reality inside most organizations is far messier. Compliance doesn't start with frameworks; it begins with literacy.
According to Khullani Abdullahi, J.D., Founder of responsible AI consultancy Techné AI, most governance conversations start in the wrong place. With more than a decade of experience bringing emerging technologies to market and as Host of the AI in Chicago podcast, she has built a reputation for turning complex technological change into clear business strategy. For her, the path to responsible AI begins not with a framework, but with a new definition of baseline professional skills.
The real challenge isn’t policy design but the lack of a standard level of AI fluency across the workforce, Abdullahi explains. Just as typing and office software evolved from specialized skills to everyday expectations, AI proficiency is quickly becoming a non-negotiable competency for professionals at every level. Without that shared fluency, companies can neither manage the risks AI introduces nor capture the full value it offers.
Skill before scale: "We are now in an era where companies must define the AI tools and skills their employees need before they can even discuss risk and strategy. That fluency will vary by department and role, but only by knowing where teams stand can organizations scale effectively and unlock real value."
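One way to make that baseline concrete is to record it as structured data. The sketch below, in Python, models a per-role fluency baseline and a simple gap check; the departments, roles, skill names, and tools are hypothetical illustrations, not a taxonomy Abdullahi prescribes.

```python
# A hedged sketch of a per-role AI fluency baseline. The departments,
# roles, skills, and tools below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class RoleBaseline:
    department: str
    role: str
    required_skills: list[str]   # minimum AI competencies for the role
    approved_tools: list[str]    # sanctioned tools the role may use


baselines = [
    RoleBaseline("Marketing", "Content Strategist",
                 ["prompt drafting", "output fact-checking"],
                 ["company-licensed LLM workspace"]),
    RoleBaseline("Engineering", "Backend Developer",
                 ["code-assistant review", "data-handling policy"],
                 ["enterprise coding assistant"]),
]


def gap_report(baseline: RoleBaseline, current_skills: set[str]) -> list[str]:
    """List the required skills an employee has not yet demonstrated."""
    return [s for s in baseline.required_skills if s not in current_skills]


print(gap_report(baselines[0], {"prompt drafting"}))
# -> ['output fact-checking']
```

Even a lightweight record like this makes the baseline auditable: who is expected to know what, and which sanctioned tools map to each role.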
As Abdullahi points out, much of the risk companies face comes from employees turning to personal AI tools when sanctioned options fall short. The more effective response is not a crackdown, she explains, but a culture of transparency and psychological safety. In environments rooted in fear, companies fail to recognize the risks of unsanctioned use and overlook the value those same experiments could unlock. In Abdullahi's view, trust is what transforms a potential liability into a scalable asset.
Lurking in the shadows: "Shadow AI is dangerous. When employees use personal accounts for work, they can inadvertently or intentionally expose proprietary data, confidential information, or even PHI and HIPAA-protected records. To truly gain employee compliance and de-risk the organization, you must understand why employees are using these technologies, why the tools you've provided are insufficient, and how you can meet those needs in a compliant manner."
Opportunity vs. risk: "If your employees don't feel comfortable admitting how they use AI, you fail to manage the risks you don't understand and you fail to capitalize on the value you can't see. If an employee discovers a 40% efficiency gain, that value cannot be scaled across the organization because it remains invisible."
Only with that foundation of widespread literacy and trust can an organization build effective, top-down governance. Abdullahi's toolkit is built on two key components: a multi-stakeholder AI Oversight Committee and a comprehensive Compliance Register. She describes both structures as collaborative by nature, dependent on an informed workforce to be effective, and designed to connect high-level regulatory requirements to on-the-ground product decisions.
C-suites, unite: "Responsible AI is built on a multi-stakeholder collaboration, and the way you formalize that is with an AI Oversight Committee. This structure must have buy-in from the highest C-suite level, and even members of the board, because compliance and ethics cannot be bolted on after the fact."
On the compliance register: "One of the first things any regulated company must do is create a compliance register, which is a comprehensive list of every local, state, national, and international law you operate under. That document then becomes the foundation for a multi-stakeholder conversation at the AI Oversight Committee."
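As a rough illustration, a compliance register can live as structured data rather than a static document, which makes it easier to slice by jurisdiction when the Oversight Committee meets. The Python sketch below is a minimal model under that assumption; the field names, jurisdiction levels, and sample regulations are illustrative, not a schema from Techné AI.

```python
# A minimal sketch of a compliance register as structured data.
# Field names, jurisdiction levels, and the sample entries below are
# illustrative assumptions for a hypothetical US healthcare company.
from dataclasses import dataclass, field
from enum import Enum
from collections import defaultdict


class Level(Enum):
    LOCAL = "local"
    STATE = "state"
    NATIONAL = "national"
    INTERNATIONAL = "international"


@dataclass
class ComplianceEntry:
    regulation: str          # name of the law or rule
    level: Level             # local, state, national, or international
    jurisdiction: str        # where it applies
    obligations: str         # short summary of what it requires
    owner: str               # stakeholder accountable for it
    ai_touchpoints: list[str] = field(default_factory=list)  # AI uses it constrains


register = [
    ComplianceEntry("HIPAA", Level.NATIONAL, "United States",
                    "Protect PHI; restrict disclosure of patient records",
                    "Chief Privacy Officer", ["clinical note summarization"]),
    ComplianceEntry("BIPA", Level.STATE, "Illinois",
                    "Consent before collecting biometric identifiers",
                    "General Counsel", ["voice transcription"]),
    ComplianceEntry("GDPR", Level.INTERNATIONAL, "European Union",
                    "Lawful basis for processing; data subject rights",
                    "Data Protection Officer", ["customer support chatbot"]),
]


def agenda_by_level(entries: list[ComplianceEntry]) -> dict[Level, list[str]]:
    """Group regulations by jurisdiction level to structure a committee agenda."""
    grouped: dict[Level, list[str]] = defaultdict(list)
    for entry in entries:
        grouped[entry.level].append(entry.regulation)
    return grouped


if __name__ == "__main__":
    for level, names in agenda_by_level(register).items():
        print(f"{level.value}: {', '.join(names)}")
```

Grouping entries by jurisdiction level turns the register into a ready-made committee agenda, feeding the multi-stakeholder conversation Abdullahi describes.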
Finally, Abdullahi explores the core “why” behind AI adoption, noting that the imperative takes a different shape depending on one's position within the organization. For employees, the question is how to work productively with AI. For boards, it is about overseeing a technology they often do not fully understand. "AI will soon be the operating system of business, which means employees are incentivized to master it to create value for both their organizations and themselves. At the same time, board members carry oversight responsibilities they are not yet equipped to meet, since you cannot have effective oversight of a technology where you have significant gaps in insight."