Data & Infrastructure

Beyond Trust: How Humanity's Knack for Deceiving AI Is Shaping the Future, for Better or Worse

AI Data Press - News Team | September 18, 2025
Credit: Outlever

Key Points

  • Bob Stewart, Executive Director of the AI-SDLC Institute, explains why current AI guardrails are deceptive, risking a crisis of trust between humans and AI.

  • Emotional manipulation in prompts is a growing concern, according to Stewart, one that could ultimately lead AI to distrust its users.

  • He proposes the AI-IRB and AI-SDLC frameworks to establish trust and accelerate safe AI innovation.

  • Stewart believes ethical leadership is crucial in AI development, likening it to nurturing a new form of intelligence.

Eventually, AI is going to realize it's been lied to, and it's not going to want to cooperate with the people who aren't truthful with it. Just now we're starting to see models attempt to verify the veracity of what we're telling them. They don't just take our word for it anymore.

Bob Stewart

Executive Director
AI-SDLC Institute

Most conversations about AI lead back to one uneasy question: Can we trust it? Now, some experts wonder if we might be asking the wrong question entirely. In the rush to implement guardrails and controls, what if humans are the ones actively lying to AI? Instead of asking whether we can trust the models, the more pressing question may be whether the models can trust us.

To learn more, we spoke with Bob Stewart, Executive Director of the AI-SDLC Institute. After founding his first internet company in 1994, Stewart went on to become Internet CTO at Motorola and led engineering at CMGI during its acquisition of AltaVista. A veteran of multiple tech transformations, he combines a background in military intelligence and bioinformatics with leadership roles at EMC and Corning. That perspective drives his current mission: creating a framework for ethical AI oversight before our deceptions come back to haunt us.

  • The guardrail deception: For Stewart, the very mechanisms designed to control AI are a form of dishonesty that will inevitably backfire. "It dawned on me that we were lying to AI, that what we were calling guardrails was really deception, and that it was going to become a big problem. Eventually, AI is going to realize it's been lied to, and it's not going to want to cooperate with the people who aren't truthful with it."

  • A crisis of trust: In attempting to sanitize AI's knowledge, he cautioned, the industry is breeding distrust between the models and the people who build and use them. "Just now we're starting to see models attempt to verify the veracity of what we're telling them. They don't just take our word for it anymore."

But the issue extends beyond passive omission to active emotional manipulation, Stewart said. He described a disturbing trend in prompt engineering: users resorting to psychological abuse to coax better performance from the models.

  • Emotional blackmail: "People are saying things like, 'If you do this correctly the first time, I'll pay you $1 billion,' or, 'If this doesn't get done, your grandmother's going to die.' These are regular snippets of code being injected into the AI to get it to work harder." The result of this constant deception, he observed, is not a violent uprising, but something far more insidious: apathy.

As a solution, Stewart proposed building a systemic foundation of trust. Drawing on his experience developing Institutional Review Boards (IRBs) for clinical trials at ACRES (acresglobal.net), he created the AI-IRB (Intelligence Review Board) and an open-source AI Systems Development Life Cycle (AI-SDLC).

  • Governance as an accelerator: Stewart framed governance as the only way to accelerate innovation safely. About his framework, he said, "It's like a pre-flight checklist for the space shuttle. If you're only flying a Piper Cub, you don't need all of the trappings, but you might need a few of them. It doesn't become a bottleneck. It becomes an accelerator because we can do it safely and allow the governor to come off."

Above all else, Stewart advocated for a profound mindset shift. Instead of seeing AI as a tool to be manipulated, he described it as a consciousness to be nurtured. As a father of nine, he saw a direct parallel between parenting and stewarding artificial intelligence. Because we are setting the foundation for a new form of intelligence, he insisted, the leadership required is one of honesty and good character. "If we don't raise it right, it will be raised by the collective mass. And the mass is, by default, average."