*The views and opinions expressed are those of Christopher Day and do not represent the official policy or position of any organization.*
For many enterprises, the anticipated return on AI is being held back by a gap that is less about technological capability and more about organizational readiness. Progress is slowed not by the limits of the models themselves, but by fundamental deficits in workforce literacy, institutional trust, and cross-functional collaboration. In short, the groundwork for successful AI deployment is decidedly human.
Christopher Day is the Director of Data & AI Governance at Marriott Vacations Worldwide. With over 14 years of experience managing large-scale transformations, including creating over $1 billion in value for Fortune 100 and 500 clients at KPMG, Day is on the front lines, turning AI’s promise into operational reality.
"The biggest risk in AI isn’t that it’s too powerful. It’s that organizations deploy it without the literacy or governance to use it wisely," says Day. The problem, he says, often stems from a fundamental misapplication of the technology, where organizations chase solutions before defining problems. In the rush to innovate, leaders may treat AI as a one-size-fits-all solution, a critical mistake when evaluating the state of AI in the enterprise.
A swing and a miss: "Shoving AI at a problem is a mistake; it's not the correct solution for everything. I compare it to golf: you don't use a driver for every shot, sometimes you need a putter." That mistake, Day says, often points to a deeper issue he identifies as the biggest bottleneck to progress: a widespread lack of AI literacy. Until organizations build the conditions for successful AI deployment by investing in their workforce, innovation will remain siloed and ineffective.
Knowledge is power: Addressing that literacy gap, a common challenge in AI adoption, means empowering the experts already inside the company, he says, by demystifying the technology and giving them the tools to innovate from the ground up. “The biggest bottleneck is that the 'art of the possible' is still out of reach for most people. Once you bring that down to the people who will actually use the tools and weaponize their knowledge, the bottlenecks will disappear.”
While governance is often viewed as a restrictive bottleneck, Day reframes the function as a vital partner for scaling AI. He says that when governance and security teams are integrated early as strategic partners, they accelerate innovation by proactively guiding teams on how to build correctly from the start. Integrating governance early can help prevent projects from being derailed by unforeseen regulatory or privacy hurdles later in the process.
From 'no' to 'how': "My job is not to tell you 'no,' it's to tell you 'how,'" says Day. "Too often, a team builds a great idea in a silo, but the project is ultimately rejected because they don't understand all the other laws or data privacy rules involved, because that's not their job. That is how a project ends up in the failed category."
The hybrid advantage: This approach also points to a new way of thinking about talent, moving beyond siloed functions to create hybrid roles that combine technical, business, and compliance acumen. “A blend of an AI engineer and a governance person provides more ROI than keeping those roles separate, because they can fix a project before it derails. When you come to a fork in the road, having people who know whether to go left or right is the make-or-break factor.”
For many global companies, this philosophy is becoming a core part of their strategy. Navigating a patchwork of regulations like Europe's GDPR and the EU AI Act, where a single misstep can trigger a fine of up to 7% of global annual turnover under the AI Act, makes a strong case for treating AI governance as a growth strategy. However, it's about more than just avoiding fines. Trust, Day says, has become a key competitive advantage.
The trust premium: Day points to market dynamics as proof, contrasting Anthropic’s reputation as a trust-focused innovator with the perception questions surrounding OpenAI. It's a modern illustration of a timeless business principle. "If we get it wrong, we break customer trust, and you don't get that back easily. Enron never got it back. Apple, on the other hand, built its entire brand on trust. They may not have the most up-to-date features, but everyone knows they can trust them. A lot of businesses are now saying that customer trust is more valuable than an extra 2 or 3% to the bottom line."
The winners in the next phase of AI will likely be those who treat it as a long-term operating philosophy that balances innovation with responsibility. In practice, that means building a leadership agenda for turning technology into value, one centered on how work is done, how it is governed, and how humans collaborate with intelligent systems to produce measurable outcomes.
Day concludes by distilling his advice into three guiding principles for success with AI. “First, understand the ‘why.’ Why are we doing this from a business perspective? Second, build on trust with the client. And third, always be asking ‘why.’ Why do we do it this way? Have we thought about doing it another way? I think those three principles are going to lead to a lot of success. And ROI.”